Handle rate limiting on RTM connections #306

Merged
Roach merged 10 commits into master from jayalane-ratelimiting_pr on Mar 26, 2018

Conversation

@Roach
Contributor

Roach commented Mar 24, 2018

Summary

This patch adds automatic handling of rate limit responses on rtm_connect calls to rtm.start and rtm.connect.
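
For illustration, a minimal standalone sketch of the retry behaviour described above, assuming a hypothetical fetch_rtm_url() helper in place of the client's internal request plumbing (the actual change lives in slackclient/server.py), with an illustrative retry cap:

import time
import requests

MAX_RATE_LIMIT_RETRIES = 10  # illustrative limit, not taken from the patch

def fetch_rtm_url(token, attempt=0):
    """Fetch an RTM websocket URL, backing off when Slack returns HTTP 429."""
    reply = requests.post("https://slack.com/api/rtm.connect", data={"token": token})
    if reply.status_code == 429:
        if attempt >= MAX_RATE_LIMIT_RETRIES:
            raise RuntimeError("RTM connection attempt was rate limited too many times.")
        # Slack indicates how long to back off via the Retry-After header.
        delay = int(reply.headers.get("retry-after", 120))
        time.sleep(delay)
        return fetch_rtm_url(token, attempt=attempt + 1)
    reply.raise_for_status()
    return reply.json()["url"]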

Requirements (place an x in each [ ])

@codecov

codecov bot commented Mar 24, 2018

Codecov Report

Merging #306 into master will increase coverage by 1.9%.
The diff coverage is 100%.


@@            Coverage Diff            @@
##           master     #306     +/-   ##
=========================================
+ Coverage   72.43%   74.33%   +1.9%     
=========================================
  Files          10       10             
  Lines         370      378      +8     
  Branches       66       67      +1     
=========================================
+ Hits          268      281     +13     
+ Misses         92       88      -4     
+ Partials       10        9      -1
Impacted Files            Coverage Δ
slackclient/server.py     85.79% <100%> (+3.65%) ⬆️


Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Roach changed the title from "Handle rate limiting to RTM connections" to "Handle rate limiting on RTM connections" on Mar 24, 2018

if reply.status_code != 200:
    raise SlackConnectionError(reply=reply)
if 'depth' in kwargs:
Contributor

I've never heard of a depth argument. Is this new? Does it need to be documented?

Contributor Author

This could probably be handled better. It's to track how many attempts have been made.

Contributor

Maybe just use a loop...?

Contributor Author

@harlowja can you elaborate on what that might look like? 🤔

Contributor

Something like:

reply = self.api_requester.do(self.token,
                              connect_method, timeout=timeout,
                              post_data=kwargs)
retry_attempt = 0
while reply.status_code != 200:
    if reply.status_code == 429:
        ratelimit_timeout = int(reply.headers.get('retry-after', 120))
        time.sleep(ratelimit_timeout)
    retry_attempt += 1
    if retry_attempt > 10:
        raise SlackConnectionError("RTM connection attempt was rate limited 10 times.")
    else:
        reply = self.api_requester.do(self.token,
                                      connect_method, timeout=timeout,
                                      post_data=kwargs)

You can also use http://tenacity.readthedocs.org/ and it will do a lot of this for you.
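
For reference, a minimal sketch of what the tenacity route could look like, assuming a hypothetical fetch_rtm_url() function standing in for self.api_requester.do(...); this is not part of the PR:

import requests
from tenacity import retry, retry_if_result, stop_after_attempt, wait_fixed

@retry(retry=retry_if_result(lambda reply: reply.status_code != 200),  # retry on any non-200 reply
       stop=stop_after_attempt(10),                                    # give up after 10 attempts
       wait=wait_fixed(2))                                             # fixed 2-second pause between attempts
def fetch_rtm_url(token):
    # Hypothetical stand-in for self.api_requester.do(self.token, connect_method, ...)
    return requests.post("https://slack.com/api/rtm.connect", data={"token": token})

If every attempt fails, tenacity raises tenacity.RetryError; honouring Slack's Retry-After header would likely need a custom wait callable instead of wait_fixed.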

Contributor Author

I updated it to work more like the reconnect_count logic in the block above the retry logic.

Contributor Author

tenacity looks like it'd be a very good addition to the work I have planned for 2.x

Contributor

Ok, I am co-author of that library, so feel free to ask questions.

ratelimit_timeout = int(reply.headers.get('retry-after', 120))
logging.debug("HTTP 429: Rate limited. Retrying in %d seconds", ratelimit_timeout)
time.sleep(ratelimit_timeout)
self.rtm_connect(reconnect=reconnect, timeout=timeout, depth=depth + 1)
Contributor

Oh I see, it's used to measure the number of recursions. If this is strictly for private use, then I guess it doesn't need to be documented.

@Roach
Contributor Author

Roach commented Mar 26, 2018

@harlowja @rawdigits @Yasumoto can one of you folks do a quick sanity check on this logic before I merge it?

@aoberoi
Contributor

aoberoi commented Mar 26, 2018

still has my approval 👍

@harlowja
Contributor

Seems ok, though I'd rather just use tenacity :-P

@Roach
Contributor Author

Roach commented Mar 26, 2018

@harlowja soon

Roach merged commit 51117e3 into master on Mar 26, 2018
Roach deleted the jayalane-ratelimiting_pr branch on March 29, 2018 at 22:00
lavabyrd pushed a commit that referenced this pull request Apr 25, 2019
* Added automatic retry for rtm.connect/rtm.start calls
* Improved error handling and added debug output
c-goosen pushed a commit to c-goosen/python-slackclient that referenced this pull request Jun 18, 2019
* Added automatic retry for rtm.connect/rtm.start calls
* Improved error handling and added debug output