
Forum Discussion

Andrew N.8
Helpful | Level 5
8 years ago

Upload/Download files via API using AWS Lambda - Rate Limiting

Hi all,

I am designing a system to copy files to/from Dropbox via the HTTP API using AWS Lambda functions.

We may be copying large quantities of files (up to 10,000 in a batch) using a single API key. As you may know, AWS Lambda can fire thousands of Lambda functions simultaneously. This could mean that the API gets hit with thousands of requests in a very short period of time (a window of a few seconds). I think I will probably start running into 429 responses (rate limiting) if we send this volume of requests in such a short window.

Does anyone know what the best practice would be for this use case? Is there a maximum number of simultaneous connections allowed by the Dropbox API?

  • Thanks. It sounds like I need to do some trial-and-error. Marking this solved - but I may be back :)

  • Greg-DB
    Dropbox Staff
    The Dropbox API does have a rate limiting system, but we don't have any specific numbers documented.

    Also note that not all 429s and 503s indicate rate limiting, but in any case where you get a 429 or 503, the best practice is to retry the request, respecting the Retry-After header if given in the response, or using exponential backoff if not.

    There's also a guide here that may be helpful:

    https://www.dropbox.com/developers/reference/data-ingress-guide
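    For example, a retry wrapper along these lines would cover it. This is a rough, untested sketch in Python: the endpoint, token handling, and attempt cap are placeholders for illustration, not official guidance.

    ```python
    import random
    import time

    import requests

    # Any RPC-style Dropbox endpoint works here; get_metadata is just an example.
    URL = "https://api.dropboxapi.com/2/files/get_metadata"

    def call_with_retries(access_token, payload, max_attempts=8):
        """POST to a Dropbox endpoint, retrying on 429/503.

        Honors the Retry-After header when the response includes one,
        and falls back to exponential backoff with jitter when it doesn't.
        """
        for attempt in range(max_attempts):
            resp = requests.post(
                URL,
                headers={"Authorization": f"Bearer {access_token}"},
                json=payload,
            )
            if resp.status_code not in (429, 503):
                resp.raise_for_status()  # surface any other error
                return resp.json()
            retry_after = resp.headers.get("Retry-After")
            if retry_after is not None:
                delay = float(retry_after)  # server told us how long to wait
            else:
                delay = min(2 ** attempt, 60) + random.random()  # capped backoff + jitter
            time.sleep(delay)
        raise RuntimeError("gave up after repeated 429/503 responses")
    ```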
    • Andrew N.8
      Helpful | Level 5

      Hey Greg,

      Thanks for this info.

      I understand the exponential backoff and the Retry-After header, no problem.

      I'm trying to understand what happens in a hypothetical situation in which a lot of Lambda functions fire in parallel. Let's say we have 200 functions all making API calls within a few seconds, and (hypothetically) the 143rd Lambda triggers rate limiting. The 144th through 200th functions will already be attempting their API calls before the 143rd function could circulate a message telling the other Lambda functions not to attempt API calls until after the time specified in the Retry-After header.

      Will we be penalized in any way for the remaining functions (144-200) attempting to call Dropbox endpoints before the time specified in the Retry-After header in the response to 143?

      • Greg-DB
        Dropbox Staff
        There's no accumulating penalty. The additional calls will just receive the 429 response with the Retry-After value. The Retry-After value is generally not more than a few minutes. (I believe the Retry-After value may effectively get reset with each rate limited call within the rate limiting window, but if they're all getting sent at the same time anyway like you describe, this shouldn't make any meaningful difference.)

        By the way, for the upload case, make sure you read about "lock contention" in the data ingress guide I linked earlier. That has important information about a 429 error you can hit in that case, which isn't explicit rate limiting, and offers a solution.
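        To illustrate the pattern from that guide: upload each file's bytes in its own upload session, then commit all of the sessions with a single batch call, so the commit takes the namespace lock once instead of once per file. Here's a rough, untested sketch; the helper names are made up for illustration, and it uses the finish_batch_v2 endpoint (older integrations used finish_batch plus a polling check).

        ```python
        import json

        import requests

        CONTENT = "https://content.dropboxapi.com/2"
        API = "https://api.dropboxapi.com/2"

        def start_closed_session(token, data):
            # Upload one file's bytes into a session and close it, without
            # committing anything to the account yet. Files up to ~150 MB fit
            # in a single call; larger files need append calls (omitted here).
            resp = requests.post(
                f"{CONTENT}/files/upload_session/start",
                headers={
                    "Authorization": f"Bearer {token}",
                    "Dropbox-API-Arg": json.dumps({"close": True}),
                    "Content-Type": "application/octet-stream",
                },
                data=data,
            )
            resp.raise_for_status()
            return resp.json()["session_id"]

        def finish_batch(token, entries):
            # One commit for the whole batch: this is the step that avoids
            # per-file lock contention.
            resp = requests.post(
                f"{API}/files/upload_session/finish_batch_v2",
                headers={"Authorization": f"Bearer {token}"},
                json={"entries": entries},
            )
            resp.raise_for_status()
            return resp.json()

        def upload_all(token, files):
            # files maps a Dropbox path like "/batch/file1.bin" to its bytes.
            entries = []
            for path, blob in files.items():
                session_id = start_closed_session(token, blob)
                entries.append({
                    "cursor": {"session_id": session_id, "offset": len(blob)},
                    "commit": {"path": path, "mode": "add"},
                })
            return finish_batch(token, entries)
        ```

        The sessions themselves can be started from your parallel Lambdas (with the retry wrapper above around each call); only the final commit needs to be funneled through one place.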
  • McLickin
    New member | Level 2
    Do you have the source for your system design that you would like to share?

About Dropbox API Support & Feedback

Find help with the Dropbox API from other developers. 5,919 Posts · Latest Activity: 9 hours ago

If you need more help, you can view your support options (expected response time for an email or ticket is 24 hours), or contact us on X or Facebook.

For more info on available support options for your Dropbox plan, see this article.
