# How to deal with a very large text file?
I'm currently writing something that needs to handle very large text files (a few GiB at least). What's needed here (and this is fixed) is:

- CSV-based, following RFC 4180 with the exception of embedded line breaks
- random read access to lines, though mostly line by line and near the end
- appending lines at the end
- (changing lines). Obviously that calls for the rest of the file to be rewritten; it's also rare, so not particularly important at the moment

The size of the file forbids keeping it completely in memory (which is also not desirable, since appended lines should be persisted as soon as possible).

I have thought of using a memory-mapped region as a window into the file which gets moved around if a line outside its range is requested. Of course, at that stage I still have no abstraction above the byte level. To actually work with the contents I have a `CharsetDecoder` giving me a `CharBuffer`. Now the problem is, I can probably deal with lines of text just fine in the `CharBuffer`, but I also need to know the byte offset of that line within the file (to keep a cache of line indexes and offsets so I don't have to scan the file from the beginning again to find a specific line).

Is there a way to map the offsets in a `CharBuffer` to offsets in the matching `ByteBuffer` at all? It's obviously trivial with ASCII or ISO-8859-*, less so with UTF-8, and with ISO 2022 or BOCU-1 things would get downright ugly (not that I actually expect the latter two, but UTF-8 should be the default here, and it still poses problems).

I guess I *could* just convert a portion of the `CharBuffer` back to bytes and use the length. Either that works, or I get problems with diacritics, in which case I could probably mandate the use of NFC or NFD to ensure that the text is always unambiguously encoded.

Still, I wonder whether that is even the way to go here. Are there better options?

**ETA:** Some replies to common questions and suggestions:

This is a data store for simulation runs, intended to be a small-ish local alternative to a full-blown database. We do have database backends as well and they are used, but for cases where they are unavailable or not applicable we do want this.

I'm also only supporting a subset of CSV (without embedded line breaks), but that's OK for now. The problematic point here is pretty much that I cannot predict how long the lines are and thus need to build a rough map of the file.

As for what I outlined above: the problem I was pondering was that I can easily determine the end of a line on the character level (U+000D followed by U+000A), but I didn't want to assume that this looks like `0D 0A` on the byte level (which already fails for UTF-16, for example, where it's either `0D 00 0A 00` or `00 0D 00 0A`). My thought was that I could keep the character encoding changeable by not hard-coding details of the encoding I currently use. But I guess I could just stick to UTF-8 and ignore everything else. It feels wrong, somehow, though.
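To make the "sliding window" part more concrete, here is a rough sketch of what I have in mind (purely illustrative; the class name, field names and the window size are placeholders, not working code from the project):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Rough sketch of a sliding memory-mapped "window" over a large file.
// The window is re-mapped whenever a requested file offset falls outside
// the currently mapped range.
final class MappedWindowSketch implements AutoCloseable {

    private static final long WINDOW_SIZE = 16 * 1024 * 1024; // arbitrary example size

    private final FileChannel channel;
    private MappedByteBuffer window;
    private long windowStart = -1;

    MappedWindowSketch(Path file) throws IOException {
        this.channel = FileChannel.open(file, StandardOpenOption.READ);
    }

    /** Returns a buffer positioned at the given file offset, re-mapping if necessary. */
    MappedByteBuffer windowAt(long fileOffset) throws IOException {
        if (windowStart < 0
                || fileOffset < windowStart
                || fileOffset >= windowStart + window.capacity()) {
            long size = Math.min(WINDOW_SIZE, channel.size() - fileOffset);
            window = channel.map(FileChannel.MapMode.READ_ONLY, fileOffset, size);
            windowStart = fileOffset;
        }
        window.position((int) (fileOffset - windowStart));
        return window;
    }

    @Override
    public void close() throws IOException {
        channel.close();
    }
}
```

The open question is what sits on top of this: the decoder turns the window's bytes into characters, and at that point the byte positions are lost.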
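And a minimal sketch of the "re-encode to measure" idea for mapping character offsets back to byte offsets (again only an illustration; whether decode/encode round-trips to the exact same byte count is precisely the normalization concern mentioned above):

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;

// Sketch: given the decoded characters of one line, encode them again with the
// file's charset and use the number of bytes produced to advance the running
// byte offset kept in the line index.
final class LineOffsetSketch {

    private final CharsetEncoder encoder;

    LineOffsetSketch(Charset charset) {
        this.encoder = charset.newEncoder();
    }

    /** Returns how many bytes the given characters occupy in the file's encoding. */
    long encodedLength(CharSequence chars) throws CharacterCodingException {
        encoder.reset();
        ByteBuffer encoded = encoder.encode(CharBuffer.wrap(chars));
        return encoded.remaining();
    }
}
```

Usage would be to walk the lines found in the `CharBuffer` and keep a running byte offset by adding `encodedLength(line)` plus the encoded length of the line terminator, but that only stays correct if the re-encoded bytes match what is actually in the file.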
 
