
<p>The best way to upload/download large blobs from Windows Azure Storage is to chunk the upload/download and make proper use of multi-threading. There are a few things you would need to consider:</p> <ol> <li><strong>Chunk size should depend on your Internet connection.</strong> For example, on a really slow Internet connection, uploading large individual chunks will almost invariably result in request timeouts.</li> <li><strong>The number of concurrent upload/download threads should depend on the number of processor cores on the machine where your application code is running.</strong> In my experience, on an 8-core machine you get the best performance by spawning 8 threads, each uploading/downloading part of the data. It is tempting to spawn hundreds of threads and leave thread management to the OS, but what I have observed is that in such cases most requests end up timing out.</li> <li><strong>Upload/download operations should be asynchronous.</strong> You don't want your application to block or hog resources on your computer.</li> </ol> <p>For uploading a large file, decide on a chunk size (say 1 MB) and a number of concurrent threads (say 8), read 8 MB from the file into an array with 8 elements, and upload those 8 elements in parallel using the upload block (Put Block) functionality. Once the 8 elements are uploaded, read the next 8 MB and repeat until all bytes are uploaded. After that, call the commit block list (Put Block List) functionality to commit the blob in blob storage.</p> <p>Similarly, for downloading a large file, again decide on a chunk size and thread count, and read parts of the blob in parallel by specifying the "Range" header in the Get Blob functionality.
Once these chunks are downloaded, you will need to rearrange them based on their actual positions (a 3-4 MB chunk may finish downloading before the 0-1 MB chunk) and then write them to a file. Repeat the process until all bytes are downloaded.</p>
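<p>The upload loop above can be sketched as follows. This is a minimal illustration of the chunk-and-stage pattern, not real SDK code: <code>stage_block</code> and <code>commit_block_list</code> are hypothetical in-memory stand-ins for the Put Block / Put Block List REST calls, so the threading and ordering logic can be seen on its own.</p>

```python
import concurrent.futures

CHUNK_SIZE = 1024 * 1024  # 1 MB per block
MAX_WORKERS = 8           # match the number of processor cores

# Hypothetical in-memory stand-ins for the Put Block / Put Block List
# operations; a real client would send each block to blob storage.
_staged = {}

def stage_block(block_id, chunk):
    _staged[block_id] = chunk  # "Put Block": upload one chunk by id

def commit_block_list(block_ids):
    # "Put Block List": assemble the blob from blocks in the given order
    return b"".join(_staged[b] for b in block_ids)

def upload_in_chunks(data):
    # Split the payload into fixed-size chunks, each with an ordered id.
    chunks = [(i, data[off:off + CHUNK_SIZE])
              for i, off in enumerate(range(0, len(data), CHUNK_SIZE))]
    # Stage up to MAX_WORKERS blocks concurrently.
    with concurrent.futures.ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
        futures = [pool.submit(stage_block, i, chunk) for i, chunk in chunks]
        for f in futures:
            f.result()  # surface any per-block failure (e.g. a timeout)
    # Commit the block ids in file order so the blob is assembled correctly.
    return commit_block_list([i for i, _ in chunks])
```

<p>The important detail is that blocks may be <em>staged</em> in any order, but the final commit lists the block ids in file order, which is what guarantees a correct blob.</p>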
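<p>The download side, with the out-of-order completion the answer warns about, can be sketched the same way. Again this is an illustration under stated assumptions: <code>get_blob_range</code> is a hypothetical stand-in for a Get Blob call with a "Range" header, reading from an in-memory blob instead of the network.</p>

```python
import concurrent.futures

CHUNK_SIZE = 1024 * 1024  # 1 MB per range request
MAX_WORKERS = 8           # match the number of processor cores

# Hypothetical stand-in for Get Blob with a "Range" header; a real client
# would issue an HTTP GET for the inclusive byte range [start, end].
def get_blob_range(blob, start, end):
    return start, blob[start:end + 1]

def download_in_chunks(blob, size):
    # Inclusive byte ranges covering the whole blob.
    ranges = [(off, min(off + CHUNK_SIZE, size) - 1)
              for off in range(0, size, CHUNK_SIZE)]
    parts = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
        futures = [pool.submit(get_blob_range, blob, s, e) for s, e in ranges]
        # Chunks complete in arbitrary order, so index each by its offset...
        for f in concurrent.futures.as_completed(futures):
            start, chunk = f.result()
            parts[start] = chunk
    # ...then reassemble in offset order before writing to the file.
    return b"".join(parts[s] for s, _ in ranges)
```

<p>Keying each part by its starting offset is what makes the rearranging step trivial: no matter which range finishes first, the final join walks the offsets in order.</p>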
 


 
