**Optimal data structure for time- and source-dependent log data for fast browsing?**
I've got field bus data that gets sent in packets, each containing a datum (e.g. a float) from a source. In other words, I receive timestamps with a source ID and a value.

Now I want to create a little program (actually a logging daemon in C++ that offers a query interface over HTTP for displaying the data in a plot diagram) where the user can select a few of the sources and the time range of interest and then gets the data drawn. This daemon will run on a Linux-based embedded system.

So my question is: what is the most efficient (in query performance and memory consumption) data storage scheme for that data?

---

**Addendum #1:**

Although I think the algorithm question is interesting on its own, I will provide some information about the problem that prompted it:

- The data rate is typically 3 packets/second (bursts up to 30/s are common).
- Interesting data might be as old as a month (the more the better; the algorithm might use a hierarchy that allows ultra-fast lookup for the last day, fast lookup for the last week, and acceptable lookup for older data).
- The IDs are (at the moment) 32 bits wide.
- Roughly 1000 IDs are in use, but it's not known in advance which ones, and the user might introduce an additional ID at any time.
- The values stored will have different data types (boolean, integer, float; even 14-byte strings are possible).

Doing a bit of math:

- Assuming a 32-bit timestamp + 32-bit ID + 32-bit value on average, each datum takes 12 bytes.
- For a month, that is 12 × 3 × 60 × 60 × 24 × 30 bytes ≈ 100 MB of data to filter through (in real time, on a 500 MHz Geode CPU).
- Showing the plot for the last day filters out all but 1/30th of that, leaving about 3 MB to scan.
- Filtering down to the single relevant ID reduces those 3 MB to roughly 1/1000th, i.e. about 3 KB.

---

**Addendum #2:**

This problem asks, basically, how to map a 2D dataset (time and ID being the dimensions) into memory (and from there serialize it to a file), under the constraint that both dimensions will be filtered.

The suggested time-sorted array is an obvious solution for the time dimension. (To increase query performance, a tree-based index might be used. A binary search by itself isn't straightforward, since each entry might have a different size, but an index tree covers that nicely and rests on the same underlying idea.)

Going that route (first one dimension (time), then the other) will, I fear, result in poor performance for the ID filtering, as I would have to use a brute-force scan.
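For the 2D-filtering concern in Addendum #2, one common alternative is to filter in the opposite order: keep one time-sorted column per source ID, so the ID filter becomes a single map lookup and the time filter a binary search within that column. Below is a minimal C++ sketch of that idea, not the poster's design: the `LogStore` and `Sample` names are made up for illustration, the value type is simplified to a float (the real store would need a tagged union or `std::variant` for the mixed boolean/integer/float/string types), and samples are assumed to arrive roughly in time order.

```cpp
#include <algorithm>
#include <cstdint>
#include <map>
#include <vector>

// Simplified payload; a real store would use a tagged union or
// std::variant to cover bool/int/float/string values.
using Value = float;

struct Sample {
    uint32_t timestamp;  // seconds since epoch, 32-bit as in the question
    Value    value;
};

// One time-sorted column per source ID. Partitioning by ID first turns
// the feared brute-force ID filter into a map lookup; the time filter
// then becomes a binary search within that ID's column.
class LogStore {
public:
    void append(uint32_t id, uint32_t timestamp, Value value) {
        // Packets arrive roughly in time order, so push_back keeps the
        // column sorted; out-of-order bursts would need an insertion step.
        columns_[id].push_back({timestamp, value});
    }

    // Returns all samples for `id` with timestamps in [from, to).
    std::vector<Sample> query(uint32_t id, uint32_t from, uint32_t to) const {
        auto it = columns_.find(id);
        if (it == columns_.end()) return {};
        const auto& col = it->second;
        auto cmp = [](const Sample& s, uint32_t t) { return s.timestamp < t; };
        auto lo = std::lower_bound(col.begin(), col.end(), from, cmp);
        auto hi = std::lower_bound(lo, col.end(), to, cmp);
        return {lo, hi};
    }

private:
    std::map<uint32_t, std::vector<Sample>> columns_;
};
```

Under the numbers from Addendum #1 (about 1000 IDs and 3 packets/s overall), a month of data averages fewer than 10,000 samples per column, so a query touches only the few kilobytes estimated there instead of scanning the full ~100 MB. Serialization could then be done per column, which also keeps the on-disk files append-friendly.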