Forum Discussion
Michael-jamf
2 years ago · Helpful | Level 6
Better understanding of offset with appendV2 and finish
I'm attempting to upload large files. The regular upload method works without an issue, but when I try to use appendV2 and finish I get an error saying the offset was incorrect, along with the correct offset. This post does discuss how to do offsets, but I want to make it clearer.
From the docs: "Offset in bytes at which data should be appended. We use this to make sure upload data isn’t lost or duplicated in the event of a network error."
If I have a file containing "abcd" repeated 13*1024*1024 times, that gives me 54525952 bytes, or about 54.5MB. My chunk size is 4MB (4194304 bytes).
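For reference, a quick sanity check of those numbers (a hypothetical snippet, not part of the upload code):

// Hypothetical check of the sizes described above.
let fileSize: UInt64 = 4 * 13 * 1024 * 1024    // "abcd" is 4 bytes, repeated 13*1024*1024 times = 54525952
let chunkSize: UInt64 = 4 * 1024 * 1024        // 4194304
print(fileSize / chunkSize, fileSize % chunkSize)   // 13 chunks, 0 bytes left over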
For this file I would start the upload with the full file: .uploadSessionStart(input: abcdFileURL). The initial offset would be my chunk size. My cursor would be .UploadSessionCursor(sessionId, offset) and the append would be .uploadSessionAppendV2(cursor, input: abcdFileURL). Every append after that would increment the offset by the chunk size (4MB, 8MB ... 50MB) until the remaining data is equal to or less than the chunk size. The offset would then be the final file size in the finish, which would be .uploadSessionFinish(cursor(sessionId, fileSize), input: abcdFileURL).
Does that sound like the correct logic for how it should work?
When I attempt this, I get an error saying the offset should be the file size for the append. When I test with a file smaller than 150MB and set the offset to the file size for the start/append/finish (so that it basically acts like the regular .upload), it works.
Output when using normal append with increasing offset.
Offset Append Value: 155189248
difference 12582912
Offset Append Value: 159383552
difference 8388608
Offset Finish Value: 167772160
File Size: 167772160
Error appending data to file upload session: [request-id 3415da207401412eaf901ef5aef245a8] API route error - {
".tag" = "incorrect_offset";
"correct_offset" = 167772160;
}
Error appending data to file upload session: sessionTaskFailed(error: Error Domain=NSURLErrorDomain Code=-1001 "The request timed out." UserInfo={_kCFStreamErrorCodeKey=-2102, NSUnderlyingError=0x6000023d7d50 {Error Domain=kCFErrorDomainCFNetwork Code=-1001 "(null)" UserInfo={_kCFStreamErrorCodeKey=-2102, _kCFStreamErrorDomainKey=4}}, _NSURLErrorFailingURLSessionTaskErrorKey=LocalUploadTask <7490F290-DA4A-49A9-AA00-B378E94A46E0>.<35>, _NSURLErrorRelatedURLSessionTaskErrorKey=(
"LocalUploadTask <7490F290-DA4A-49A9-AA00-B378E94A46E0>.<35>"
), NSLocalizedDescription=The request timed out., NSErrorFailingURLStringKey=https://api-content.dropbox.com/2/files/upload_session/append_v2, NSErrorFailingURLKey=https://api-content.dropbox.com/2/files/upload_session/append_v2, _kCFStreamErrorDomainKey=4})
2023-04-20 19:57:32.228273-0600 cloud-tools[61893:1384752] Task <C348ACD5-53C3-4908-9511-697D5FB68DEF>.<44> finished with error [-1001] Error Domain=NSURLErrorDomain Code=-1001 "The request timed out." UserInfo={_kCFStreamErrorCodeKey=-2102, NSUnderlyingError=0x6000023d54d0 {Error Domain=kCFErrorDomainCFNetwork Code=-1001 "(null)" UserInfo={_kCFStreamErrorCodeKey=-2102, _kCFStreamErrorDomainKey=4}}, _NSURLErrorFailingURLSessionTaskErrorKey=LocalUploadTask <C348ACD5-53C3-4908-9511-697D5FB68DEF>.<44>, _NSURLErrorRelatedURLSessionTaskErrorKey=(
"LocalUploadTask <C348ACD5-53C3-4908-9511-697D5FB68DEF>.<44>"
), NSLocalizedDescription=The request timed out., NSErrorFailingURLStringKey=https://api-content.dropbox.com/2/files/upload_session/append_v2, NSErrorFailingURLKey=https://api-content.dropbox.com/2/files/upload_session/append_v2, _kCFStreamErrorDomainKey=4}
Output where I set the offset to the file size when the file is less than 150MB:
Offset Finish Value: 54525952
File Size: 54525952
Finish Good: {
"client_modified" = "2023-04-21T01:39:37Z";
"content_hash" = 9fd01c7cc2807ed423dbd11a1a06b9fd77ad15e843fcbabf4f363da527e06175;
id = "id:992ScyV19hQAAAAAAAAHbA";
"is_downloadable" = 1;
name = "testjamf.txt";
"path_display" = "/testfile.txt";
"path_lower" = "/testfile.txt";
rev = 5f9cebae50714a5e0ed31;
"server_modified" = "2023-04-21T01:39:37Z";
size = 109051904;
}
My code that produces the errors.
dbxClient.files.uploadSessionStart(input: fileURL)
    .response(completionHandler: { [self] response, error in
        if let result = response {
            print(result)
            var offset: UInt64 = chunkSize
            // Append chunks to file
            // Note: each call below passes the entire fileURL as input, not just the next chunk
            while (offset < fileSize!) {
                if ((fileSize! - offset) <= chunkSize) {
                    print("Offset Value: \(offset)")
                    dbxClient.files.uploadSessionFinish(cursor: Files.UploadSessionCursor(sessionId: result.sessionId, offset: fileSize!), commit: Files.CommitInfo(path: dbFilePath), input: fileURL)
                        .response { response, error in
                            if let result = response {
                                print("Finish Good: \(result)")
                            } else {
                                print("Finish Error: \(error!)")
                            }
                        }
                    offset = fileSize!
                } else {
                    print("Offset Value: \(offset)")
                    dbxClient.files.uploadSessionAppendV2(cursor: Files.UploadSessionCursor(sessionId: result.sessionId, offset: offset), input: fileURL)
                        .response { response, error in
                            if let response = response {
                                print("File appended: \(response)")
                            } else {
                                print("Error appending data to file upload session: \(error!)")
                            }
                        }
                    // .progress { progressData in print(progressData) }
                    offset += chunkSize
                }
            }
        } else {
            // the call failed
            print(error!)
        }
    })
}
- Greg-DB | Dropbox Staff
The process you described isn't quite correct. Upload sessions work by having you upload just a portion of the file in each request (that is, per uploadSessionStart, uploadSessionAppendV2, or uploadSessionFinish call). You would not send the full file data in each request. For each request, you read off and upload just the next piece (the size being the "chunk size") of the file. You would use one upload session per file to upload, and the number of calls you make per upload session depends on the size of the file and the chunk size you use.
The chunk size is how much data to send per request, and the offset is how much data has been successfully sent to the server for that upload session so far. The chunk size doesn't need to be the same for each request, but for the sake of simplicity it is the same across requests in most implementations. The offset would increase over the life of the upload session as data is successfully uploaded.
In short, you would upload the first portion of the file's data with uploadSessionStart, the next portion(s) with uploadSessionAppendV2 if/as needed, and then the final portion with uploadSessionFinish.
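To make the offset bookkeeping concrete for the 54,525,952-byte example file from the original post, here is a hypothetical illustration in Swift (assuming the same 4 MiB chunk size; the loop only prints the schedule and makes no API calls):

// Hypothetical illustration of the per-request offsets for a 54,525,952-byte
// file uploaded in 4 MiB chunks (13 chunks total). No API calls are made here.
let fileSize: UInt64 = 54_525_952
let chunkSize: UInt64 = 4_194_304

var offset: UInt64 = 0   // bytes the server has received so far
while offset < fileSize {
    let thisChunk = min(chunkSize, fileSize - offset)
    if offset == 0 {
        print("uploadSessionStart: sends the first \(thisChunk) bytes (no cursor yet)")
    } else if fileSize - offset <= chunkSize {
        print("uploadSessionFinish: cursor offset \(offset), sends the last \(thisChunk) bytes")
    } else {
        print("uploadSessionAppendV2: cursor offset \(offset), sends \(thisChunk) bytes")
    }
    offset += thisChunk
}
// Appends run at offsets 4194304, 8388608, ..., 46137344; finish runs at offset 50331648.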
Here's a minimal example implementation in Python that may be illustrative and easier to read:
import os
import dropbox

ACCESS_TOKEN = "ACCESS_TOKEN_HERE"
local_file_path = "LOCAL_PATH_TO_FILE_HERE"

dbx = dropbox.Dropbox(ACCESS_TOKEN)

f = open(local_file_path, "rb")
file_size = os.path.getsize(local_file_path)

dest_path = "REMOTE_PATH_IN_DROPBOX_FOR_UPLOADED_FILE"

CHUNK_SIZE = 4 * 1024 * 1024

upload_session_start_result = dbx.files_upload_session_start(f.read(CHUNK_SIZE))
cursor = dropbox.files.UploadSessionCursor(session_id=upload_session_start_result.session_id, offset=f.tell())
commit = dropbox.files.CommitInfo(path=dest_path)

while f.tell() <= file_size:
    if ((file_size - f.tell()) <= CHUNK_SIZE):
        print(dbx.files_upload_session_finish(f.read(CHUNK_SIZE), cursor, commit))
        break
    else:
        dbx.files_upload_session_append_v2(f.read(CHUNK_SIZE), cursor)
        cursor.offset = f.tell()

f.close()
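For the same flow in Swift with SwiftyDropbox, here is a rough sketch, not an official implementation: it assumes an already-authorized dbxClient, a file larger than the chunk size, and uses FileHandle to read one chunk per request; the function and variable names are illustrative.

import Foundation
import SwiftyDropbox

// Rough sketch: upload a large file in 4 MiB chunks, one chunk per request.
// Assumes `dbxClient` is an already-authorized DropboxClient and the file is
// larger than the chunk size (smaller files can use files.upload directly).
func uploadInChunks(localURL: URL, to dropboxPath: String) throws {
    let chunkSize = 4 * 1024 * 1024
    let fileSize = try FileManager.default.attributesOfItem(atPath: localURL.path)[.size] as! UInt64
    let handle = try FileHandle(forReadingFrom: localURL)

    // `offset` is how many bytes the server has successfully received so far.
    func appendNext(sessionId: String, offset: UInt64) {
        let chunk = handle.readData(ofLength: chunkSize)
        let cursor = Files.UploadSessionCursor(sessionId: sessionId, offset: offset)
        if offset + UInt64(chunk.count) >= fileSize {
            // Last piece: finish the session and commit the file.
            dbxClient.files.uploadSessionFinish(cursor: cursor, commit: Files.CommitInfo(path: dropboxPath), input: chunk)
                .response { response, error in
                    if let result = response { print("Uploaded: \(result)") } else { print("Finish error: \(error!)") }
                    try? handle.close()
                }
        } else {
            dbxClient.files.uploadSessionAppendV2(cursor: cursor, input: chunk)
                .response { response, error in
                    if response != nil {
                        // Only move on once this chunk has been accepted.
                        appendNext(sessionId: sessionId, offset: offset + UInt64(chunk.count))
                    } else {
                        print("Append error: \(error!)")
                    }
                }
        }
    }

    // Start the session with the first chunk only, then append the rest.
    let firstChunk = handle.readData(ofLength: chunkSize)
    dbxClient.files.uploadSessionStart(input: firstChunk)
        .response { response, error in
            if let result = response {
                appendNext(sessionId: result.sessionId, offset: UInt64(firstChunk.count))
            } else {
                print("Start error: \(error!)")
            }
        }
}

Chaining each append from the previous request's completion handler, rather than looping synchronously, keeps the asynchronous requests from racing each other, which is also why the recursive approach in the follow-up reply below works.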
- Michael-jamf | Helpful | Level 6
That makes sense. Thanks for clearing that up. I thought you provided the full file and the offset was used to tell it which part to upload. I also ran into an issue where an upload would not finish before the next part was attempted, so I made it recursive, only appending the next part once the previous one finished.
For anyone that finds this: here is my upload function for small and large files. It could be improved, but it works.
func upload(file filePath: String, to dbxLocation: String) {
    if FileManager.default.fileExists(atPath: filePath) {
        print("File Exists")
    } else {
        return
    }
    let fileURL: URL = URL(filePath: filePath)
    var dbFilePath: String = ""
    if dbxLocation.last == "/" {
        dbFilePath = dbxLocation + fileURL.lastPathComponent
    } else {
        dbFilePath = dbxLocation + "/" + fileURL.lastPathComponent
    }
    print(dbFilePath)
    let fileSize = try? FileManager.default.attributesOfItem(atPath: filePath)[.size] as? UInt64
    let chunkSize: UInt64 = 4 * 1024 * 1024
    // Check to see if file is smaller than chunksize
    if fileSize! < chunkSize {
        dbxClient.files.upload(path: dbFilePath, input: fileURL).response(completionHandler: { response, error in
            if let response = response {
                print("File uploaded: \(response)")
            } else {
                print("Error upload session: \(error!)")
            }
        })
        .progress { progressData in print(progressData) }
        print("small file")
    } else {
        let data: [Data] = try! self.split(file: fileURL, into: chunkSize)
        // start the upload session
        var offset: UInt64 = UInt64(data[0].count)
        dbxClient.files.uploadSessionStart(input: data[0])
            .response(completionHandler: { [self] response, error in
                if let result = response {
                    print(result)
                    append(data: data, part: 1, sessionId: result.sessionId, offset: offset, chunkSize: chunkSize, fileSize: fileSize!, dbFilePath: dbFilePath)
                } else {
                    // the call failed
                    print(error!)
                }
            })
    }
}

func append(data: [Data], part: Int, sessionId: String, offset: UInt64, chunkSize: UInt64, fileSize: UInt64, dbFilePath: String) {
    let chunk = data[part]
    if ((fileSize - offset) <= chunkSize) {
        dbxClient.files.uploadSessionFinish(cursor: Files.UploadSessionCursor(sessionId: sessionId, offset: offset), commit: Files.CommitInfo(path: dbFilePath), input: chunk)
            .response { response, error in
                if let result = response {
                    print("File Uploaded: \(result)")
                } else {
                    print(fileSize)
                    print("Finish Error: \(error!)")
                }
            }
    } else {
        dbxClient.files.uploadSessionAppendV2(cursor: Files.UploadSessionCursor(sessionId: sessionId, offset: offset), input: chunk)
            .response { response, error in
                if let response = response {
                    self.append(data: data, part: part + 1, sessionId: sessionId, offset: offset + UInt64(data[part].count), chunkSize: chunkSize, fileSize: fileSize, dbFilePath: dbFilePath)
                } else {
                    print("Error appending data \(part) to file upload session: \(error!)")
                }
            }
    }
}

func split(file fileURL: URL, into chunkSize: UInt64) throws -> [Data] {
    let data = try Data(contentsOf: fileURL)
    let chunks = stride(from: 0, to: data.count, by: Int(chunkSize)).map {
        data.subdata(in: $0 ..< Swift.min($0 + Int(chunkSize), data.count))
    }
    return chunks
}
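A hypothetical call site for the function above (the paths are placeholders, not values from this thread, and the call is assumed to be made from whatever object owns dbxClient):

// Hypothetical usage; replace both paths with your own.
upload(file: "/path/to/local/largefile.bin", to: "/Backups")

Because each append is only issued from the previous request's completion handler, the chunks are sent strictly in order, which is what keeps the server-side offset in sync.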