Forum Discussion
Keith B.7
9 years ago · Helpful | Level 7
-uploadSessionStartUrl: example?
Apologies for the barrage of questions - it's only because I'm updating a large class to use API 2. Is there an example of how to use -uploadSessionStartUrl: anywhere? (A Google search brings up ...
Keith B.7
Helpful | Level 7
I'm trying to get this to work right now, but one thing I cannot figure out is how to calculate the offset. I can't quite work it out from the Swift code - it seems that there's a set chunk size there of 5MB, but I don't know why that is. And I'm using the -uploadUrl: method rather than the -uploadData: methods that are used in both of the examples in the link.
In API 1, the -restClient:uploadedFileChunk:newOffset:fromFile:expires: delegate method provided the next offset required. If this offset was >= the file's size, you would commit the upload; otherwise you would upload the next chunk from this offset. But in v2, I can see no way of getting this offset. After you have started an upload session, DBFILESUploadSessionStartResult does not return an offset. And the response callback for -uploadSessionAppendV2Url: has a nil object as its first parameter, so there's no way to get the offset from there, either.
From the example, it seems that the code is just assuming that Dropbox's chunking upload methods upload in chunks of 5MB and so working things out from there, but that doesn't seem to be documented, and I can't see anywhere that it's setting the chunk size. All it's telling the Dropbox frameworks is, "Here's a URL, here's the session ID, here's where we're currently at (the offset), now upload the next chunk." So it seems there's no way of knowing how much has been uploaded (and therefore what the next offset is) when you receive the callback for either -uploadSessionStartUrl: or -uploadSessionAppendV2:.
I must be missing something obvious here - how do I get the offset?
EDIT: Ah, okay, re-reading the sample code, I finally figured out that it *is* setting the chunk size itself by using data.subdataWithRange and uploading subsections of the data. So, I *think* I can see how that would work for -uploadSessionStartData: (and possibly move over to using that method). But how does it work with -uploadSessionStartUrl:?
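In other words, the offset bookkeeping seems to come down to something like this (plain Swift, no SDK calls; chunkOffsets is just an illustration of the arithmetic, not part of the framework):

```swift
// Illustration only: in v2 the client owns the offset. You pick a chunk
// size, slice the data yourself, and the next offset is simply the
// number of bytes you have uploaded so far.
func chunkOffsets(totalSize: Int, chunkSize: Int) -> [(offset: Int, length: Int)] {
    var chunks: [(offset: Int, length: Int)] = []
    var offset = 0
    while offset < totalSize {
        let length = min(chunkSize, totalSize - offset)
        chunks.append((offset: offset, length: length))
        offset += length  // next offset = bytes sent so far
    }
    return chunks
}
```

So for a 12-byte file with a 5-byte chunk size you'd send slices at offsets 0, 5, and 10, with no need for the server to report anything back.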
Thanks!
Keith
Greg-DB
9 years ago · Dropbox Staff
The 5 MB chunk size is somewhat arbitrary, and up to the app. The best size depends on various factors. When the connection is reliable, larger chunk sizes mean better overall performance. If there are connection errors, smaller chunk sizes mean you have to retransmit fewer bytes when requests do fail. You may want to try different sizes to see what works best for your app.
Anyway, for both versions of the method, the input you provide is expected to be just the data you want to upload for that single request. So for the URL version, the URL should point to just one chunk size's worth of data.
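As a sketch of one way to arrange that (writeChunkToTempFile is a hypothetical helper, not part of the SDK), you could write each chunk out to its own temporary file and pass that file's URL, so the URL really does point to a single chunk's worth of data:

```swift
import Foundation

// Hypothetical helper (not an SDK call): materialize one chunk as its
// own temporary file, so the URL handed to the URL-based upload method
// points to exactly one chunk's worth of data.
func writeChunkToTempFile(_ chunk: Data, index: Int) throws -> URL {
    let url = FileManager.default.temporaryDirectory
        .appendingPathComponent("upload-chunk-\(index).bin")
    try chunk.write(to: url)
    return url
}
```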
- Keith B.7 · 9 years ago · Helpful | Level 7
Great, thanks. It might be an idea to put a note about this in the documentation, as it's not at all clear that the URL should point to a single chunk of data, especially as it's done differently from API 1 (I'm not even sure how you would have a URL point to a chunk of data).
I've moved over to using NSFileHandle, though, and that works great. And I really like how you can now determine the chunk size, unlike in API 1.
Thanks and all the best,
Keith
- Keith B.7 · 9 years ago · Helpful | Level 7
Actually, a quick question on the NSFileHandle example regarding this part:
```swift
func uploadNextChunk() {
    data = fileHandle!.readDataOfLength(chunkSize)
    let size = data!.length
    print("Have \(size) bytes to upload.")
    if size < chunkSize {
        print("Last chunk!")
        Dropbox.authorizedClient!.files.uploadSessionFinish(
```
Why is this line:

```swift
if size < chunkSize {
```

a less-than rather than a less-than-or-equals, i.e.:

```swift
if size <= chunkSize {
```
In the (highly) unlikely event that the size of the data to be uploaded is an exact multiple of the chunk size, couldn't this result in a situation where you end up sending empty data for the final (finish) chunk? Or is that not a problem?
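Modelling the read loop's sizes in plain Swift (no file handles involved; chunkReads just replays the sizes the `size < chunkSize` test would see) shows the case I mean:

```swift
// Models the sample's read loop: read up to chunkSize bytes per pass,
// and treat a read shorter than chunkSize as the last chunk. When the
// total is an exact multiple of chunkSize, that final "short" read is
// empty, so the finish call would carry zero bytes.
func chunkReads(totalSize: Int, chunkSize: Int) -> [Int] {
    var reads: [Int] = []
    var remaining = totalSize
    repeat {
        let size = min(chunkSize, remaining)
        reads.append(size)
        remaining -= size
    } while reads.last! == chunkSize  // stop once size < chunkSize
    return reads
}
```

For example, chunkReads(totalSize: 10, chunkSize: 5) produces [5, 5, 0], with that trailing empty read going to the finish call, whereas chunkReads(totalSize: 7, chunkSize: 5) produces [5, 2].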
EDIT: And a second question: in API 1, you had to be careful about calling -uploadFile:toPath:withParentRev:fromUploadId: to commit a chunked upload, ensuring that it was called synchronously. If it happened to get called simultaneously for two different files because they had both finished uploading at the same time, you would receive a "failed to grab file locks" error. How do I avoid this problem in API 2?
EDIT 2: It seems I would need to use -uploadSessionFinishBatch: here. But that insists that you call "close" on the last append or session start, but I don't know which will be the last since I limit to five uploads at a time, starting a new upload when another one ends. Also, there seems to be no way to get the meta-data for each item committed from -uploadSessionFinishBatch:...
EDIT 3: Ah, it seems I just misunderstood this part of the documentation for -uploadSessionFinishBatch:
`close` in `DBFILESUploadSessionStartArg` or `close` in `DBFILESUploadSessionAppendArg` needs to be true for the last `uploadSessionStart` or `uploadSessionAppendV2` call.
I had interpreted that as meaning it had to be closed for the last file, but of course it means for the last chunk.
So, as I understand it, if I have multiple uploads, I should:
1. Use chunked uploads (even if they can be uploaded in one go).
2. Upload all of the data before calling -finish, so using appendV2 even for the last chunk of data. And, on that last chunk of data, set `close` to YES.
3. Call -uploadSessionFinishBatch: when I detect that all of the files have finished uploading.
Is this all correct?
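To check my own understanding, the close/finish bookkeeping in steps 1-3 would amount to something like this (BatchUploadTracker is a made-up name, and the actual SDK calls are only indicated in comments):

```swift
// Sketch of the bookkeeping only: track which upload sessions are still
// open, and fire the batch finish once every file's final append has
// been sent with close == true.
final class BatchUploadTracker {
    private var openSessions: Set<String> = []
    private(set) var didFinishBatch = false

    func sessionStarted(_ sessionId: String) {
        openSessions.insert(sessionId)
    }

    // Call from the appendV2 response handler; pass close == true for
    // the final chunk of that file.
    func chunkAppended(_ sessionId: String, close: Bool) {
        guard close else { return }
        openSessions.remove(sessionId)
        if openSessions.isEmpty {
            didFinishBatch = true  // here you would call uploadSessionFinishBatch
        }
    }
}
```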
Then, based on this:
Q3. The documentation says that the API supports up to 1,000 files being uploaded at a time. Am I likely to run into errors uploading that many files? With API 1, I found that you were likely to run into errors (file lock errors, etc.) if you tried to upload more than five files at a time.
Q4. What does -uploadSessionFinishBatchCheck: do? I'm not at all sure from the documentation.
Q5. How do you get the meta-data for files that have been uploaded in batch once the upload is committed? Is this what -uploadSessionFinishBatchCheck: is for?
Q6. Is there no equivalent batch method for downloading files? Or is that not needed?
Q7. What happens with -uploadSessionFinishBatch: when there's an error for only one file (e.g. one file cannot overwrite an existing one)?
Apologies for another wall of text!
Thanks,
Keith
- Greg-DB · 9 years ago · Dropbox Staff
Yes, in that case you would end up sending an empty chunk, but that won't make any difference in the resulting file.
And yes, using the finish batch method is the best way to commit multiple files at the same time without causing lock contention with yourself.
And that's correct, the strategy you described is recommended for when you need to upload multiple files at once.
Q3. The finish batch endpoint uses a single lock for all of the files to be committed in that call, so you shouldn't run into (self-)lock contention there. Let us know if you do run into any issues, though, of course.
Q4. The finish batch check method is for checking the status of the finish batch job.
Q5. The finish batch method will return a job ID for use with the finish batch check method. The finish batch check method will return the metadata when the job is complete.
Q6. No, unfortunately we don't have a batch download. We'll consider it a feature request though.
Q7. Once the job is done, the batch check method returns a list of results, one per file, each of which will either be success (with the file metadata) or a failure (with the reason for the failure).
- Keith B.7 · 9 years ago · Helpful | Level 7
Great, thank you. Okay, so from your answer, I've tracked through the various classes again and mostly got there. So:
1. Once -uploadSessionFinishBatch: completes, its response callback gives you a DBASYNCLaunchEmptyResult object.
2. You can use the .asyncJobId of that empty result object to invoke -uploadSessionFinishBatchCheck:.
3. -uploadSessionFinishBatchCheck:'s response callback gives you a DBFILESUploadSessionFinishBatchJobStatus object.
4. You can use the .isComplete property to check that the job is finished (or .isInProgress to check that it's still in progress).
5. *IF* it is complete, you can call .complete on the job status object to get a DBFILESUploadSessionFinishBatchResult object.
6. The batch result object has an .entries array. You can then iterate through that to get a DBFILESUploadSessionFinishBatchResultEntry for each upload.
7. For each result entry you check .isSuccess or .isFailure. If .isSuccess, the .success property gives you your meta-data object; if .isFailure, the .failure object gives you the upload error for that file.
Phew! That took a bit of following the chain in the headers.
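Expressed as code, steps 6-7 amount to something like this (the enum here is a simplified stand-in I made up for DBFILESUploadSessionFinishBatchResultEntry, which exposes .isSuccess/.isFailure and .success/.failure in the same spirit):

```swift
// Simplified stand-in for the batch result entries: each entry is
// either a success carrying the file's metadata or a failure carrying
// the reason the commit failed.
enum BatchResultEntry {
    case success(path: String)
    case failure(reason: String)
}

// Walk the entries array and split it into uploaded paths and failure
// reasons, mirroring the .isSuccess/.isFailure checks in step 7.
func summarize(_ entries: [BatchResultEntry]) -> (uploaded: [String], failed: [String]) {
    var uploaded: [String] = []
    var failed: [String] = []
    for entry in entries {
        switch entry {
        case .success(let path): uploaded.append(path)
        case .failure(let reason): failed.append(reason)
        }
    }
    return (uploaded, failed)
}
```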
So, now, only a couple of things elude me:
1. Is there any notification for when the job status changes from .isInProgress to .isComplete? If I call -uploadSessionFinishBatchCheck: from inside the -uploadSessionFinishBatch: response callback, the status is always .isInProgress. So how do I get access to it when it is complete? Am I just supposed to poll every, say, 0.2 seconds or so to check the status? Or is there some way of polling or receiving notifications built into the frameworks?
2. What if the DBASYNCLaunchEmptyResult object returns .isComplete? In that case, there's no way of getting the .asyncJobId, so I can't call -uploadSessionFinishBatchCheck:, so how do I get the meta-data results in that case?
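For what it's worth, the naive polling from point 1 would look something like this (check is a placeholder for the finish batch check call; nothing here is from the SDK, and a real version would wait between attempts, e.g. with Thread.sleep(forTimeInterval: 0.2)):

```swift
// Stand-in for the polling idea: re-run a status check until it reports
// complete, or give up after maxAttempts. `check` is a placeholder for
// whatever asks the service about the job's status.
func pollUntilComplete(maxAttempts: Int, check: () -> Bool) -> Bool {
    for _ in 0..<maxAttempts {
        if check() { return true }
        // A real implementation would sleep or schedule a timer here.
    }
    return false
}
```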
Many, many thanks for all your patient and informative answers to my many, many questions, by the way!
Thanks and all the best,
Keith