I'm fortunate enough to have an 80/20 Mbit/s Internet connection with minimal visible contention and a router that barely breaks a sweat at full line speed. BC4's uploads to Dropbox do not make good use of the available upload bandwidth because BC4 appears to use chunked upload with fixed 4 MByte chunks. The small chunk size means there are excessive protocol overheads, not least because a complete TLS negotiation is required per chunk. If I use another implementation of the Dropbox Core API that uses fixed 32 MByte chunks, I can upload large files to Dropbox at 40-45 seconds per full chunk, which works out at approximately 50 MBytes per minute.
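To illustrate why the fixed per-chunk cost matters, here's a back-of-the-envelope model. The link speed (2.5 MByte/s, i.e. 20 Mbit/s) matches my connection; the 0.5 s per-chunk overhead is an assumed figure standing in for TLS negotiation and request/response round trips, not a measured value:

```python
def effective_throughput(chunk_mb, link_mb_per_s=2.5, overhead_s=0.5):
    """Effective upload rate when every chunk pays a fixed cost
    (TLS handshake, HTTP round trips) on top of its transfer time."""
    transfer_s = chunk_mb / link_mb_per_s
    return chunk_mb / (transfer_s + overhead_s)

for size in (4, 8, 16, 32):
    print(f"{size:2d} MB chunks: {effective_throughput(size):.2f} MB/s")
```

Even with a modest 0.5 s overhead, 4 MB chunks waste a noticeably larger fraction of the link than 32 MB chunks, and the gap widens as the per-chunk cost grows.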
The quick fix would be for BC4 to allow the user to select alternative fixed chunk sizes. In the longer term, maybe the developers could experiment with an adaptive chunk size algorithm that seeks the sweet spot based on the time taken to upload each chunk and the per-chunk overheads.
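One possible shape for such an adaptive algorithm, sketched here purely as an illustration (the function name, thresholds, and bounds are all my own invention, not anything from BC4): grow the chunk size while throughput keeps improving, and back off when a chunk takes long enough to risk a timeout:

```python
def next_chunk_size(current, elapsed_s, prev_rate,
                    min_size=4 * 2**20, max_size=128 * 2**20):
    """Return (next_chunk_size, measured_rate) for the following chunk.

    Grows the chunk while throughput is still improving, halves it when a
    chunk took dangerously long (timeout risk), and holds steady otherwise.
    """
    rate = current / elapsed_s
    if elapsed_s > 60:  # chunk took too long: halve to reduce timeout risk
        return max(min_size, current // 2), rate
    if prev_rate is None or rate > prev_rate * 1.05:
        return min(max_size, current * 2), rate  # still improving: grow
    return current, rate  # throughput has plateaued: hold steady
```

The 5% improvement threshold and 60 s ceiling are arbitrary starting points; the real tuning work would be finding values that converge quickly without oscillating on a variable-latency link.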
If the upload code is revisited, it might be worth adding a feature to throttle uploads to prevent saturation of the available upload bandwidth. This isn't a huge issue for me, as I typically have plenty of upload bandwidth remaining and I make use of my router's sophisticated traffic prioritisation capabilities, but it may well be an issue for others.
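A throttle along those lines is commonly implemented as a token bucket; a minimal sketch (class name and interface are hypothetical, not BC4's code):

```python
import time


class UploadThrottle:
    """Token-bucket limiter: caps sustained upload rate at rate_bytes_per_s."""

    def __init__(self, rate_bytes_per_s):
        self.rate = rate_bytes_per_s
        self.allowance = float(rate_bytes_per_s)  # start with a full bucket
        self.last = time.monotonic()

    def wait_for(self, nbytes):
        """Block until nbytes may be sent without exceeding the cap."""
        while True:
            now = time.monotonic()
            # Refill the bucket in proportion to elapsed time, up to one
            # second's worth of tokens.
            self.allowance = min(self.rate,
                                 self.allowance + (now - self.last) * self.rate)
            self.last = now
            if self.allowance >= nbytes:
                self.allowance -= nbytes
                return
            time.sleep((nbytes - self.allowance) / self.rate)
```

Calling `wait_for(len(chunk))` before each network write would hold uploads to the configured rate while leaving short bursts unaffected.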
Despite the apparent use of Dropbox's chunked upload API, I'm unable to upload files larger than around 150 MBytes to Dropbox using BC4. I get a mixture of "Connection lost (error code 100354)" and "Connection timeout (error code 96270)" errors.
I don't know how BC4 generates its chunks for upload, but it might be significant that the Windows 7 x64 machine I'm using for testing has an almost full C: drive, typically with a few hundred MBytes spare. I realise this is horrid from a performance and reliability standpoint. Unfortunately, the machine is cursed with a 128 GB SSD, which is too small for my needs but is not cost-effective to replace considering the limited time remaining before the entire machine is replaced.