
Reporting upload progress from node.js
<p>I'm writing a small node.js application that receives a multipart POST from an HTML form and pipes the incoming data to Amazon S3. The <a href="https://github.com/felixge/node-formidable" rel="nofollow noreferrer">formidable</a> module provides the <a href="https://github.com/felixge/node-formidable/blob/master/lib/multipart_parser.js" rel="nofollow noreferrer">multipart parsing</a>, exposing each part as a node <a href="http://nodejs.org/api/stream.html" rel="nofollow noreferrer">Stream</a>. The <a href="https://github.com/LearnBoost/knox" rel="nofollow noreferrer">knox</a> module handles the PUT to S3.</p>

<pre><code>var form = new formidable.IncomingForm()
  , s3 = knox.createClient(conf);

form.onPart = function(part) {
  var put = s3.putStream(part, filename, headers, handleResponse);
  put.on('progress', handleProgress);
};

form.parse(req);
</code></pre>

<p>I'm reporting the upload progress to the browser client via <a href="http://socket.io/" rel="nofollow noreferrer">socket.io</a>, but am having difficulty getting these numbers to reflect the real progress of the node-to-S3 upload.</p>

<p>When the browser-to-node upload happens near instantaneously, as it does when the node process is running on the local network, the progress indicator reaches 100% immediately. If the file is large, e.g. 300 MB, the progress indicator rises slowly, but still faster than our upstream bandwidth would allow. After hitting 100% progress, the client then hangs, presumably waiting for the S3 upload to finish.</p>

<p>I know <code>putStream</code> uses Node's <a href="http://nodejs.org/api/stream.html#stream_stream_pipe_destination_options" rel="nofollow noreferrer">stream.pipe</a> method internally, but I don't understand the detail of how this really works. My assumption is that node gobbles up the incoming data as fast as it can, throwing it into memory. If the write stream can take the data fast enough, little data is kept in memory at once, since it can be written and discarded. If the write stream is slow though, as it is here, we presumably have to keep all that incoming data in memory until it can be written. Since we're listening for <code>data</code> events on the read stream in order to emit progress, we end up reporting the upload as going faster than it really is.</p>

<p>Is my understanding of this problem anywhere close to the mark? How might I go about fixing it? Do I need to get down and dirty with <code>write</code>, <code>drain</code> and <code>pause</code>?</p>