Handle rate limiting on RTM connections #306
Conversation
Codecov Report
```
@@            Coverage Diff             @@
##           master     #306     +/-   ##
=========================================
+ Coverage   72.43%   74.33%    +1.9%
=========================================
  Files          10       10
  Lines         370      378       +8
  Branches       66       67       +1
=========================================
+ Hits          268      281      +13
+ Misses         92       88       -4
+ Partials       10        9       -1
```
Continue to review full report at Codecov.
slackclient/server.py
Outdated

```python
if reply.status_code != 200:
    raise SlackConnectionError(reply=reply)
if 'depth' in kwargs:
```
i've never heard of a depth argument. is this new? does it need to be documented?
This could probably be handled better. It's to track how many attempts have been made
Maybe just use a loop...?
@harlowja can you elaborate on what that might look like? 🤔
Something like:

```python
reply = self.api_requester.do(self.token, connect_method,
                              timeout=timeout, post_data=kwargs)
retry_attempt = 0
while reply.status_code != 200:
    if reply.status_code == 429:
        ratelimit_timeout = int(reply.headers.get('retry-after', 120))
        time.sleep(ratelimit_timeout)
    retry_attempt += 1
    if retry_attempt > 10:
        raise SlackConnectionError(
            "RTM connection attempt was rate limited 10 times.")
    reply = self.api_requester.do(self.token, connect_method,
                                  timeout=timeout, post_data=kwargs)
```
You can also use http://tenacity.readthedocs.org/ and it will do a lot of this for you.
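For reference, the loop-based approach suggested above can be exercised end to end with a stubbed requester. Everything here is an illustrative stand-in, not the PR's actual code: `FakeReply` and `FakeRequester` mimic the response object and `api_requester`, and `SlackConnectionError` is redefined locally so the sketch is self-contained.

```python
import time


class SlackConnectionError(Exception):
    """Local stand-in for slackclient's exception class."""


class FakeReply:
    """Hypothetical stand-in for an HTTP response object."""
    def __init__(self, status_code, headers=None):
        self.status_code = status_code
        self.headers = headers or {}


class FakeRequester:
    """Returns HTTP 429 twice, then 200, to simulate rate limiting."""
    def __init__(self):
        self.calls = 0

    def do(self):
        self.calls += 1
        status = 429 if self.calls < 3 else 200
        return FakeReply(status, {'retry-after': '0'})


def connect_with_retry(requester, max_attempts=10):
    """Retry a connection while it is rate limited, up to max_attempts."""
    reply = requester.do()
    retry_attempt = 0
    while reply.status_code != 200:
        if reply.status_code == 429:
            # Honor the server's Retry-After header, defaulting to 120s.
            time.sleep(int(reply.headers.get('retry-after', 120)))
        retry_attempt += 1
        if retry_attempt > max_attempts:
            raise SlackConnectionError(
                "RTM connection attempt was rate limited %d times." % max_attempts)
        reply = requester.do()
    return reply


requester = FakeRequester()
reply = connect_with_retry(requester)
# reply.status_code == 200 after two rate-limited attempts
```

The stub sets `retry-after` to `0` so the sketch runs instantly; a real Slack response would carry the actual back-off interval in that header.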
I updated it to work more like the reconnect_count logic in the block above the retry logic
tenacity looks like it'd be a very good addition to the work I have planned for 2.x
Ok, I am co-author of that library, so feel free to ask questions.
slackclient/server.py
Outdated
```python
ratelimit_timeout = int(reply.headers.get('retry-after', 120))
logging.debug("HTTP 429: Rate limited. Retrying in %d seconds", ratelimit_timeout)
time.sleep(ratelimit_timeout)
self.rtm_connect(reconnect=reconnect, timeout=timeout, depth=depth + 1)
```
oh i see, it's used to measure the number of recursions. if this is strictly for private use, then i guess it doesn't need to be documented.
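The depth-capped recursion under discussion can be sketched as follows. This is a reconstruction for illustration only: `do_request`, the `_Reply` stub, and the cap of 10 are assumptions, not the PR's actual implementation, and the `time.sleep` on the Retry-After interval is elided.

```python
def rtm_connect(do_request, depth=0, max_depth=10):
    """Illustrative depth-capped recursive retry.

    `do_request` is a hypothetical callable standing in for the real
    API request; it returns an object with a `status_code` attribute.
    The `depth` kwarg counts recursions, mirroring the PR's approach.
    """
    reply = do_request()
    if reply.status_code == 429:
        if depth >= max_depth:
            raise RuntimeError(
                "RTM connection attempt was rate limited too many times.")
        # The real code sleeps for the Retry-After interval here first.
        return rtm_connect(do_request, depth=depth + 1, max_depth=max_depth)
    return reply


class _Reply:
    """Minimal response stub."""
    def __init__(self, status_code):
        self.status_code = status_code


# Example: a stub that rate-limits twice, then succeeds.
calls = []
def stub_request():
    calls.append(1)
    return _Reply(429 if len(calls) < 3 else 200)
```

Compared with the loop suggested earlier, the recursive form threads its attempt count through a keyword argument, which is why the `depth` kwarg leaked into the public-looking signature that prompted this review comment.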
@harlowja @rawdigits @Yasumoto can one of you folks do a quick sanity check on this logic before I merge it?

still has my approval 👍

Seems ok, though I'd rather just use tenacity :-P
* Added automatic retry for rtm.connect/rtm.start calls
* Improved error handling and added debug output

Summary
This patch adds automatic handling of rate limit responses on `rtm_connect` calls to `rtm.start` and `rtm.connect`.

Requirements (place an `x` in each `[ ]`)