Forum Discussion
Olaf B.2
9 years ago · New member | Level 2
Downloading a large file using the Python v2 API
Dear Dropboxers,
would it be possible to see an example for large file download, equivalent to https://www.dropboxforum.com/hc/en-us/community/posts/205544836-python-upload-big-file-example for the upload?
Thanks.
- Greg-DB, Dropbox Staff
It is already implemented in the Java SDK.
(It is also implemented in the API v1 Python client, but I can't recommend using that as it's deprecated.)
If you wanted to implement it manually, or modify the Python SDK, here's a sample of what it would look like in curl for reference:
curl -X POST https://content.dropboxapi.com/2/files/download \
--header "Authorization: Bearer ACCESS_TOKEN" \
--header "Dropbox-API-Arg: {\"path\": \"/test.txt\"}" \
--header "Range: bytes=0-2"

That would download just the first 3 bytes of the file at /test.txt.
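[Editor's note: the curl call above translates to Python fairly directly. Here is a minimal sketch using the third-party `requests` library rather than the official SDK; the helper names `build_download_headers` and `download_range` are made up for illustration, not part of any Dropbox library.]

```python
import json

def build_download_headers(access_token, dropbox_path, start, end):
    """Build the headers for a ranged /2/files/download request,
    mirroring the curl example above."""
    return {
        "Authorization": "Bearer %s" % access_token,
        "Dropbox-API-Arg": json.dumps({"path": dropbox_path}),
        "Range": "bytes=%d-%d" % (start, end),
    }

def download_range(access_token, dropbox_path, start, end):
    """Fetch bytes [start, end] of a file (requires network access)."""
    import requests  # third-party: pip install requests
    resp = requests.post(
        "https://content.dropboxapi.com/2/files/download",
        headers=build_download_headers(access_token, dropbox_path, start, end),
    )
    resp.raise_for_status()
    return resp.content
```

For example, `download_range(token, "/test.txt", 0, 2)` would return the first 3 bytes, just like the curl command.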
- MarceloC, Explorer | Level 4
Hello!
This post is a bit old, but was incremental download ever implemented in the v2 Python SDK?
I'm having trouble managing large accounts (many files and large files), so I'm developing some tools in Python. Downloading large files (>20 GB) with the desktop application takes ages and offers no control, and even in the browser there are many interruptions or aborted transfers. The idea is to have total control over exactly what is being downloaded, and to be able to restart from the last successful chunk as needed.
I'm already able to upload large files using the files_upload_session_start/append/finish methods.
Thanks and regards.
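[Editor's note: for readers landing here for the upload side, the session-based flow MarceloC mentions looks roughly like the sketch below. It assumes the v2 Python SDK (`dropbox` package) and a `dropbox.Dropbox` client; treat it as an outline of the files_upload_session_start/append_v2/finish sequence, not production code.]

```python
import io  # used in the test/usage example below

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB per request; tune as needed

def iter_chunks(fileobj, chunk_size=CHUNK_SIZE):
    """Yield successive fixed-size chunks from a binary file object."""
    while True:
        data = fileobj.read(chunk_size)
        if not data:
            return
        yield data

def upload_large_file(dbx, local_path, dropbox_path):
    """Upload local_path in chunks via an upload session.
    Sketch only: dbx is an authenticated dropbox.Dropbox client."""
    import dropbox  # third-party SDK: pip install dropbox
    with open(local_path, "rb") as f:
        chunks = iter_chunks(f)
        first = next(chunks)
        session = dbx.files_upload_session_start(first)
        cursor = dropbox.files.UploadSessionCursor(
            session_id=session.session_id, offset=len(first))
        commit = dropbox.files.CommitInfo(path=dropbox_path)
        for data in chunks:
            dbx.files_upload_session_append_v2(data, cursor)
            cursor.offset += len(data)
        return dbx.files_upload_session_finish(b"", cursor, commit)
```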
- Greg-DB, Dropbox Staff
Hi Matt, it sounds like your request may actually be slightly different from what was being discussed in this thread. We were talking about downloading files in distinct chunks (similar to the chunked upload), but it sounds like you want to stream the download as desired, as you currently do with the get_file method.
I believe the files_download method does already work the same way as that though. It returns a requests.models.Response object on which you can call iter_content to iterate over the content, streaming it off the connection.
That would look something like:
metadata, res = dbx.files_download(path)
for data in res.iter_content(10):
    print(data)

Hope this helps!
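[Editor's note: for the tar-restore use case, the same iter_content pattern can stream straight to disk with a progress readout instead of printing chunks. A minimal sketch, assuming a `dropbox.Dropbox` client and the `files_download` return shape described above; `format_progress` and `stream_download` are hypothetical helper names.]

```python
def format_progress(written, total):
    """Human-readable progress string for a streamed download."""
    pct = 100.0 * written / total if total else 0.0
    return "%d/%d bytes (%.0f%%)" % (written, total, pct)

def stream_download(dbx, dropbox_path, local_path, chunk_size=1024 * 1024):
    """Stream a Dropbox file to disk without holding it all in memory.
    Sketch only: dbx is an authenticated dropbox.Dropbox client."""
    metadata, res = dbx.files_download(dropbox_path)
    total = metadata.size  # FileMetadata.size: the file's byte length
    written = 0
    with open(local_path, "wb") as out:
        for data in res.iter_content(chunk_size):
            out.write(data)
            written += len(data)
            print(format_progress(written, total))
    return written
```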
- Greg-DB, Dropbox Staff
Hi Olaf, for downloading, you generally just need a simple call to the files_download method. There's a sample here:
Are you running into issues downloading large files?
- Olaf B.2, New member | Level 2
Hi Gregory,
thanks. I am working with very large files, up to several tens of GB, and it would be nice to be able to download them in chunks, as we do for the upload. The motivation is the same: to be able to monitor progress and retry in case of failure. I cannot find the right tools for that in the API. Are they not provided, or am I looking in the wrong place?
- Greg-DB, Dropbox Staff
I see, thanks for the additional context! The API itself does actually support Range Retrieval Requests which can be used to download files in pieces, but this functionality unfortunately isn't currently exposed in the API v2 Python SDK. We'll consider this a feature request for that though.
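[Editor's note: Range Retrieval Requests can be driven by hand while the SDK lacks support. Below is a minimal resumable-download sketch using the third-party `requests` library: it splits the file into fixed byte ranges, skips ranges already on disk, and truncates any partial trailing chunk so it restarts cleanly. The helper names and the requirement to know the file size up front (e.g. via files_get_metadata) are assumptions of this sketch.]

```python
import os

def chunk_ranges(total_size, chunk_size):
    """Split [0, total_size) into inclusive (start, end) byte ranges,
    one per HTTP Range request."""
    ranges = []
    start = 0
    while start < total_size:
        end = min(start + chunk_size, total_size) - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

def resumable_download(access_token, dropbox_path, local_path,
                       total_size, chunk_size=4 * 1024 * 1024):
    """Fetch a file chunk by chunk, resuming after the last complete
    chunk already on disk (requires network access)."""
    import json
    import requests  # third-party: pip install requests
    done = os.path.getsize(local_path) if os.path.exists(local_path) else 0
    done -= done % chunk_size  # drop any partial trailing chunk
    with open(local_path, "ab") as out:
        out.truncate(done)
        for start, end in chunk_ranges(total_size, chunk_size):
            if start < done:
                continue  # this chunk is already on disk
            resp = requests.post(
                "https://content.dropboxapi.com/2/files/download",
                headers={
                    "Authorization": "Bearer %s" % access_token,
                    "Dropbox-API-Arg": json.dumps({"path": dropbox_path}),
                    "Range": "bytes=%d-%d" % (start, end),
                })
            resp.raise_for_status()
            out.write(resp.content)
```

Because each chunk is an independent request, a failed transfer can simply be rerun and it will pick up at the first missing chunk.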
- Olaf B.2, New member | Level 2
Is it available in some other programming language?
- Olaf B.2, New member | Level 2
OK, thank you, this helps.
- MattFahrner, Helpful | Level 5
I have the same problem: I want to download a very large file as a stream in the v2 API, where I used to use the v1 get_file() functionality. The idea is to use it for a tar restore from backup, and pulling it all into memory via files_download() would get ugly fast.
Any hope that this will make it back into the exposed Python API?
Thanks!
- MattFahrner, Helpful | Level 5
Excellent! I will give that a try - thanks!
About Dropbox API Support & Feedback
Find help with the Dropbox API from other developers.