
Upload to S3 from HttpWebResponse.GetResponseStream() in C#
<p>I am trying to upload from an HTTP stream directly to S3, without storing it in memory or as a file first. I am already doing this with Rackspace Cloud Files as HTTP to HTTP, but the AWS authentication is beyond me, so I am trying to use the SDK.</p>

<p>The problem is that the upload stream fails with this exception:</p>

<p><code>"This stream does not support seek operations."</code></p>

<p>I've tried both <code>PutObject</code> and <code>TransferUtility.Upload</code>; both fail the same way.</p>

<p>Is there any way to stream into S3 as the stream comes in, rather than buffering the whole thing into a <code>MemoryStream</code> or <code>FileStream</code>?</p>

<p><em>Or</em> are there any good examples of doing the authentication for an S3 request using <code>HttpWebRequest</code>, so I can duplicate what I do with Cloud Files?</p>

<p><strong>Edit:</strong> <em>or</em> is there a helper function in the AWSSDK for generating the authorization header?</p>

<p><strong>CODE:</strong></p>

<p>This is the failing S3 part (both methods included for completeness):</p>

<pre><code>string uri = RSConnection.StorageUrl + "/" + container + "/" + file.SelectSingleNode("name").InnerText;
var req = (HttpWebRequest)WebRequest.Create(uri);
req.Headers.Add("X-Auth-Token", RSConnection.AuthToken);
req.Method = "GET";

using (var resp = req.GetResponse() as HttpWebResponse)
{
    using (Stream stream = resp.GetResponseStream())
    {
        Amazon.S3.Transfer.TransferUtility trans = new Amazon.S3.Transfer.TransferUtility(S3Client);
        trans.Upload(stream, config.Element("root").Element("S3BackupBucket").Value, container + file.SelectSingleNode("name").InnerText);

        //Use EITHER the above OR the below
        PutObjectRequest putReq = new PutObjectRequest();
        putReq.WithBucketName(config.Element("root").Element("S3BackupBucket").Value);
        putReq.WithKey(container + file.SelectSingleNode("name").InnerText);
        putReq.WithInputStream(Amazon.S3.Util.AmazonS3Util.MakeStreamSeekable(stream));
        putReq.WithMetaData("content-length", file.SelectSingleNode("bytes").InnerText);

        using (S3Response putResp = S3Client.PutObject(putReq))
        {
        }
    }
}
</code></pre>

<p>And this is how I do it successfully from S3 to Cloud Files:</p>

<pre><code>using (GetObjectResponse getResponse = S3Client.GetObject(new GetObjectRequest().WithBucketName(bucket.BucketName).WithKey(file.Key)))
{
    using (Stream s = getResponse.ResponseStream)
    {
        //We can stream right from S3 to CF, no need to store in memory or on the filesystem.
        var req = (HttpWebRequest)WebRequest.Create(uri);
        req.Headers.Add("X-Auth-Token", RSConnection.AuthToken);
        req.Method = "PUT";
        req.AllowWriteStreamBuffering = false;
        if (req.ContentLength == -1L)
            req.SendChunked = true;

        using (Stream stream = req.GetRequestStream())
        {
            byte[] data = new byte[32768];
            int bytesRead = 0;
            while ((bytesRead = s.Read(data, 0, data.Length)) &gt; 0)
            {
                stream.Write(data, 0, bytesRead);
            }
            stream.Flush();
            stream.Close();
        }
        req.GetResponse().Close();
    }
}
</code></pre>
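<p>For reference, the fallback I'm trying to avoid is buffering the whole response into a seekable <code>MemoryStream</code> before handing it to the SDK (which is presumably what <code>MakeStreamSeekable</code> does internally anyway). A sketch against the same v1-style <code>With*</code> API as above; it avoids the seek exception but holds the entire object in memory:</p>

<pre><code>//Workaround sketch, NOT true streaming: copy the non-seekable
//response stream into a MemoryStream, which is seekable.
using (Stream source = resp.GetResponseStream())
using (var buffer = new MemoryStream())
{
    source.CopyTo(buffer);   //.NET 4+; use a manual Read/Write loop on 3.5
    buffer.Position = 0;     //rewind so the SDK reads from the start

    PutObjectRequest putReq = new PutObjectRequest();
    putReq.WithBucketName(config.Element("root").Element("S3BackupBucket").Value);
    putReq.WithKey(container + file.SelectSingleNode("name").InnerText);
    putReq.WithInputStream(buffer); //seekable, so no exception
    using (S3Response putResp = S3Client.PutObject(putReq)) { }
}
</code></pre>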
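<p>For what it's worth, my understanding of the S3 REST authentication I'd need to duplicate with <code>HttpWebRequest</code> is Signature Version 2: build a string-to-sign from the verb, headers, date and resource path, HMAC-SHA1 it with the secret key, and Base64 the result. A sketch with placeholder credentials and bucket/key, not verified against S3:</p>

<pre><code>using System;
using System.Security.Cryptography;
using System.Text;

//Sketch of AWS Signature Version 2 for a REST PUT.
//Credentials, bucket and key below are placeholders.
string accessKey = "AKIDEXAMPLE";
string secretKey = "SECRETEXAMPLE";
string date = DateTime.UtcNow.ToString("R"); //RFC 1123 date

//StringToSign = Verb \n Content-MD5 \n Content-Type \n Date \n
//               CanonicalizedAmzHeaders + CanonicalizedResource
string stringToSign = "PUT\n" +
                      "\n" +                           //no Content-MD5
                      "application/octet-stream\n" +
                      date + "\n" +
                      "/mybucket/mykey";               //canonicalized resource

string signature;
using (var hmac = new HMACSHA1(Encoding.UTF8.GetBytes(secretKey)))
{
    signature = Convert.ToBase64String(
        hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
}

//req.Date = DateTime.UtcNow;  //HttpWebRequest restricts the raw Date header
//req.Headers.Add("Authorization", "AWS " + accessKey + ":" + signature);
</code></pre>

<p>If there are any <code>x-amz-*</code> headers they have to be folded into the string-to-sign as well, which is the part I'd rather have the SDK do for me.</p>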