
What about storing it in Google Cloud Storage and reading it incrementally? You can access it line by line (in Python, anyway), so it won't consume all your resources.

https://developers.google.com/appengine/docs/python/googlecloudstorageclient/

https://developers.google.com/storage/

> The GCS client library lets your application read files from and write files to buckets in Google Cloud Storage (GCS). This library supports reading and writing large amounts of data to GCS, with internal error handling and retries, so you don't have to write your own code to do this. Moreover, it provides read buffering with prefetch so your app can be more efficient.
>
> The GCS client library provides the following functionality:
>
> - An `open` method that returns a file-like buffer on which you can invoke standard Python file operations for reading and writing.
> - A `listbucket` method for listing the contents of a GCS bucket.
> - A `stat` method for obtaining metadata about a specific file.
> - A `delete` method for deleting files from GCS.

I've processed some very large CSV files in exactly this way: read as much as I need to, process it, then read some more.

```python
import os
import cloudstorage as gcs

def read_file(self, filename):
    self.response.write('Truncated file content:\n')

    # Open the GCS object as a file-like buffer.
    gcs_file = gcs.open(filename)
    # Read only the first line...
    self.response.write(gcs_file.readline())
    # ...then seek to the last 1 KB of the file and read that.
    gcs_file.seek(-1024, os.SEEK_END)
    self.response.write(gcs_file.read())
    gcs_file.close()
```

Incremental reading with standard Python file operations!
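For the large-CSV case, the same pattern streams rows without ever holding the whole file in memory. Here is a minimal sketch, assuming the same `cloudstorage` client library as above; the bucket path and the `process_row` handler are hypothetical placeholders:

```python
import csv
import cloudstorage as gcs

def process_row(row):
    # Hypothetical per-row handler; replace with your own processing.
    pass

def process_csv(filename):
    # Stream a large CSV out of GCS one line at a time, using only
    # the documented readline()/close() calls.
    gcs_file = gcs.open(filename)
    try:
        line = gcs_file.readline()
        while line:
            # Parse a single line; this simple approach assumes no
            # embedded newlines inside quoted CSV fields.
            row = next(csv.reader([line]))
            process_row(row)
            line = gcs_file.readline()
    finally:
        gcs_file.close()

# Usage (hypothetical bucket/object path):
# process_csv('/my-bucket/big-file.csv')
```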