It sounds like your only real problem with JSON is the way you're encoding NumPy arrays (and Pandas tables). JSON is not ideal for your use case: not because it's slow at handling NumPy data, but because it's a text-based format, and you have a lot of data that's easier to encode in a non-text-based format.

So, I'll show you a way around all of your problems with JSON below... but I would suggest using a different format.

The two major "binary JSON" formats, [BJSON](http://bjson.org) and [BSON](http://bsonspec.org), aim to provide most of the benefits of JSON (simple, safe, dynamic/schemaless, traversable, etc.), while also making it possible to embed binary data directly. (The fact that they're also binary rather than textual formats isn't really important to you in this case.) I believe the same is true of [Smile](http://wiki.fasterxml.com/SmileFormatSpec), but I've never used it.

This means that, in the same way JSON makes it easy to hook in anything you can reduce to strings, floats, lists, and dicts, BJSON and BSON make it easy to hook in anything you can reduce to strings, floats, lists, dicts, *and byte strings*. So, when I show how to encode/decode NumPy to strings, the same thing works for byte strings, but without all the extra steps at the end.

The downsides of BJSON and BSON are that they're not human-readable, and don't have nearly as widespread support.

---

I have no idea how you're currently encoding your arrays, but from the timings I suspect you're using the `tolist` method or something similar. That will definitely be slow and big, and it will even lose information if you're storing anything other than `f8` values anywhere (because the only kind of number JSON understands is the IEEE double). The solution is to encode to a string.

NumPy has a text format, which will be faster, and not lossy, but still probably slower and bigger than you want.

It also has a binary format, which is great... but doesn't have enough information to recover your original array.

So, let's look at what `pickle` uses, which you can see by calling the `__reduce__` method on any object: basically, it's the type, the shape, the dtype, some flags that tell NumPy how to interpret the raw data, and then the binary-format raw data itself. You can actually encode the `__reduce__` data yourself; in fact, it might be worth doing so. But for the sake of exposition, let's do something a bit simpler, with the understanding that it will only work on `ndarray`, and won't work across machines with different endianness (or in rarer cases like sign-magnitude ints or non-IEEE floats).

```python
import json
import numpy as np

def numpy_default(obj):
    # Serialize an ndarray as its raw bytes plus the dtype and shape needed
    # to rebuild it; let json raise its usual TypeError for anything else.
    if isinstance(obj, np.ndarray):
        return {'_npdata': obj.tostring(),
                '_npdtype': obj.dtype.name,
                '_npshape': obj.shape}
    raise TypeError('Cannot serialize {!r}'.format(obj))

def dumps(obj):
    return json.dumps(obj, default=numpy_default)

def numpy_hook(obj):
    # Every decoded dict passes through here; only rebuild the tagged ones.
    try:
        data = obj['_npdata']
    except KeyError:
        return obj
    return np.fromstring(data, obj['_npdtype']).reshape(obj['_npshape'])

def loads(obj):
    return json.loads(obj, object_hook=numpy_hook)
```

The only problem is that `tostring` gives you `bytes` objects, which Python 3's `json` doesn't know how to deal with.

This is where you can stop if you're using something like BJSON or BSON.
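If you do go the BSON route, here is a minimal sketch of what that could look like, assuming PyMongo's `bson` package is available (the `to_bson`/`from_bson` names are just placeholders); the raw bytes go in directly, so none of the string workarounds below are needed:

```python
import bson  # ships with PyMongo
import numpy as np

def to_bson(arr):
    # BSON documents can hold binary data natively, so the raw buffer
    # can be embedded as-is.
    doc = {'_npdata': bson.Binary(arr.tostring()),
           '_npdtype': arr.dtype.name,
           '_npshape': list(arr.shape)}
    return bson.BSON.encode(doc)

def from_bson(data):
    doc = bson.BSON(data).decode()
    return np.fromstring(bytes(doc['_npdata']),
                         doc['_npdtype']).reshape(doc['_npshape'])
```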
But with JSON, you need strings.

You can fix that easily, if hackily, by "decoding" the bytes with an encoding that maps every byte to a character, like Latin-1: change `obj.tostring()` to `obj.tostring().decode('latin-1')` and `data = obj['_npdata']` to `data = obj['_npdata'].encode('latin-1')`. That wastes a bit of space by UTF-8-encoding the fake Latin-1 strings, but that's not *too* bad.

Unfortunately, Python will encode every non-ASCII character with a Unicode escape sequence. You can turn that off by setting `ensure_ascii=False` on the dump and `strict=False` on the load, but it will still escape control characters, mostly to 6-byte sequences. This doubles the size of random data, and it can do much worse; an all-zero array, for example, ends up 6x larger!

There used to be a trick to get around this problem, but in 3.3 it doesn't work. The best thing you can do is to fork or monkey-patch the `json` package so it lets control characters through when given `ensure_ascii=False`, which you can do like this:

```python
import re

# Escape only the double quote; everything else passes through unescaped.
json.encoder.ESCAPE = re.compile(r'"')
```

This is pretty hacky, but it works.

---

Anyway, hopefully that's enough to get you started.
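To make the JSON route concrete, here's a quick round-trip sketch that puts the Latin-1 workaround and the flags above together (a rough sketch; the sample array and helper names are mine):

```python
import json
import numpy as np

def numpy_default(obj):
    if isinstance(obj, np.ndarray):
        # Fake-decode the raw bytes as Latin-1 so json sees a str.
        return {'_npdata': obj.tostring().decode('latin-1'),
                '_npdtype': obj.dtype.name,
                '_npshape': obj.shape}
    raise TypeError('Cannot serialize {!r}'.format(obj))

def numpy_hook(obj):
    try:
        data = obj['_npdata'].encode('latin-1')  # back to raw bytes
    except KeyError:
        return obj
    return np.fromstring(data, obj['_npdtype']).reshape(obj['_npshape'])

arr = np.arange(12, dtype='f8').reshape(3, 4)
text = json.dumps({'x': arr}, default=numpy_default, ensure_ascii=False)
back = json.loads(text, object_hook=numpy_hook, strict=False)
assert (back['x'] == arr).all()
```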
Comments:

1. Thanks so much for this. I'm going to play around with this some more (and wait just a bit to see if others chime in before accepting), but you've helped immensely. I stepped into the Pandas pickling, and it seems to basically call `__reduce__` on its NumPy arrays, which you've shown me how to serialize very efficiently using BSON (I was indeed using JSON and lists earlier). And there are some internals in Pandas which break a DataFrame down into homogeneous NumPy arrays, so I should be well on my way!
2. The more complex the types are, the more you might want to lean on `__reduce__` instead of writing everything from scratch (see the sketch after these comments). For example, the other values that `ndarray.__reduce__` gives you let you distinguish little- and big-endian data, or distinguish an `ndarray` from a `matrix`, etc. You just have to write manual code to map those values to strings and then back to types/constructors for the (hopefully short and static whitelist of) types you want to handle, to make sure nobody can trick you into deserializing things you didn't want to.
3. Thanks again. If I construct a new NumPy array with the shape and dtype of the old array, and try `newarray.__setstate__(oldarray.__reduce__())`, I get a `TypeError: must be sequence of length 4, not 3`. How do I go from reduce to setstate? (I've asked a separate Pandas-specific question on this too.)
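Since the last two comments both touch on `__reduce__`, here's a minimal sketch of that round trip (the sample array is mine): `__reduce__` returns a `(constructor, args, state)` triple, and `__setstate__` wants only the `state` part, which is where the shape, the dtype (including byte order), and the raw data live.

```python
import numpy as np

old = np.arange(6, dtype='>i4').reshape(2, 3)   # big-endian ints, just as an example

constructor, args, state = old.__reduce__()

new = constructor(*args)   # a bare placeholder ndarray
new.__setstate__(state)    # fills in shape, dtype, flags, and raw data

assert (new == old).all() and new.dtype == old.dtype
```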