The normal way to do this on a low-latency data warehouse application is to have a partitioned table with a leading partition containing something that can be updated quickly (i.e. without having to recalculate aggregates on the fly), but with trailing partitions backfilled with the aggregates. In other words, the leading partition can use a different storage scheme to the trailing partitions.

Most commercial and some open-source RDBMS platforms (e.g. PostgreSQL) support partitioned tables, which can be used to do this type of thing one way or another. How you populate the database from your logs is left as an exercise for the reader.

Basically, the structure of this type of system goes like this:

- You have a table partitioned on some sort of date or date-time value, partitioned by hour, day or whatever grain seems appropriate. The log entries get appended to this table.

- As the time window slides off a partition, a periodic job indexes or summarises it and converts it into its 'frozen' state. For example, a job on Oracle may create bitmap indexes on that partition or update a materialized view to include summary data for that partition.

- Later on, you can drop old data, summarise it or merge partitions together.

- As time goes on, the periodic job backfills behind the leading-edge partition. The historical data is converted to a format that lends itself to performant statistical queries, while the leading-edge partition is kept easy to update quickly. As this partition doesn't hold much data, querying across the whole data set stays relatively fast.

The exact nature of this process varies between DBMS platforms.

For example, table partitioning on SQL Server is not all that good, but this can be done with Analysis Services (an OLAP server that Microsoft bundles with SQL Server). This is done by configuring the leading partition as pure ROLAP (the OLAP server simply issues a query against the underlying database) and then rebuilding the trailing partitions as MOLAP (the OLAP server constructs its own specialised data structures, including persistent summaries known as 'aggregations'). Analysis Services can do this completely transparently to the user: it can rebuild a partition in the background while the old ROLAP one is still visible, then swap the new partition in once the build is finished. The cube is available the whole time, with no interruption of service to the user.

Oracle allows partition structures to be updated independently, so indexes can be constructed on a partition, or a partition built on a materialized view. With query rewrite, the query optimiser in Oracle can work out that aggregate figures calculated from a base fact table can be obtained from a materialized view. The query will read the aggregate figures from the materialized view where partitions are available, and from the leading-edge partition where they are not.

PostgreSQL may be able to do something similar, but I've never looked into implementing this type of system on it.

If you can live with periodic outages, something similar can be done explicitly by doing the summarisation yourself and setting up a view over the leading and trailing data. This allows this type of analysis to be done on a system that doesn't support partitioning transparently.
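As a very rough sketch of that explicit approach (PostgreSQL-flavoured SQL; the table, view and column names are all made up, and hourly counts broken down by a hypothetical `status` column are just an example measure):

```sql
-- Illustrative only: names and columns are made up; the general shape
-- should translate to most platforms, but the CTE syntax is PostgreSQL.

-- Summarised ('frozen') history, maintained by the periodic job:
CREATE TABLE log_summary (
    log_hour   timestamp NOT NULL,
    status     integer   NOT NULL,
    hit_count  bigint    NOT NULL,
    PRIMARY KEY (log_hour, status)
);

-- Raw leading-edge data; log entries are simply appended here:
CREATE TABLE log_recent (
    log_time   timestamp NOT NULL,
    status     integer   NOT NULL
);

-- One view over both, so queries only aggregate the small recent slice on the fly:
CREATE VIEW log_stats AS
SELECT log_hour, status, hit_count
FROM   log_summary
UNION ALL
SELECT date_trunc('hour', log_time) AS log_hour,
       status,
       count(*) AS hit_count
FROM   log_recent
GROUP  BY date_trunc('hour', log_time), status;

-- The periodic job freezes everything before the current hour by moving it
-- out of the raw table and into the summary in one atomic statement:
WITH moved AS (
    DELETE FROM log_recent
    WHERE  log_time < date_trunc('hour', localtimestamp)
    RETURNING log_time, status
)
INSERT INTO log_summary (log_hour, status, hit_count)
SELECT date_trunc('hour', log_time), status, count(*)
FROM   moved
GROUP  BY date_trunc('hour', log_time), status;
```

The point of querying through the `log_stats` view is that clients only ever see one name; where the frozen/raw boundary currently sits is an implementation detail of the periodic job.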
However, the system will have a transient outage as the view is rebuilt, so you could not really do this during business hours; in practice the most frequent refresh would be overnight.

**Edit:** Depending on the format of the log files or what logging options are available to you, there are various ways to load the data into the system. Some options are:

- Write a script using your favourite programming language that reads the data, parses out the relevant bits and inserts it into the database. This could run fairly often, but you have to have some way of keeping track of where you are in the file. Be careful of locking, especially on Windows. Default file locking semantics on Unix/Linux allow you to do this (this is how `tail -f` works), but the default behaviour on Windows is different; the logger and the reader would both have to be written to play nicely with each other.

- On a Unix-like system you could write your logs to a pipe and have a process similar to the one above reading from the pipe. This would have the lowest latency of all, but failures in the reader could block your application.

- Write a logging interface for your application that directly populates the database, rather than writing out log files.

- Use the bulk-load API for the database (most if not all have this type of API available) and load the logging data in batches. Write a similar program to the first option, but use the bulk-load API (there is a sketch of this at the end of this post). This would use fewer resources than populating the table line by line, but has more overhead to set up the bulk loads. It would be suitable for a less frequent load (perhaps hourly or daily) and would place less strain on the system overall.

In most of these scenarios, keeping track of where you've been becomes a problem. Polling the file to spot changes might be infeasibly expensive, so you may need to set the logger up so that it works in a way that plays nicely with your log reader.

- One option would be to change the logger so it starts writing to a different file every period (say every few minutes). Have your log reader start periodically and load the new files that it hasn't already processed, i.e. the old files that are no longer being written to. For this to work, the naming scheme for the files should be based on the time, so the reader knows which files to pick up. Dealing with files still in use by the application is more fiddly (you would then need to keep track of how much has been read), so you would want to read files only up to the last period.

- Another option is to move the file and then read it. This works best on filesystems that behave like Unix ones, but should work on NTFS. You move the file, then read it at leisure. However, it requires the logger to open the file in create/append mode, write to it and then close it, rather than keeping it open and locked. This is definitely Unix behaviour; the move operation has to be atomic. On Windows you may really have to stand over the logger to make this work.
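For the bulk-load option above, the load step itself can be very small. A sketch in PostgreSQL syntax, reusing the made-up `log_recent` table from the earlier example (the file path is equally made up):

```sql
-- PostgreSQL bulk load; other platforms have their own equivalents
-- (BULK INSERT on SQL Server, SQL*Loader or external tables on Oracle).
-- COPY ... FROM reads a file on the database server; psql's \copy does the
-- same thing from the client side.
COPY log_recent (log_time, status)
FROM '/var/log/myapp/2009-07-21-1300.csv'
WITH (FORMAT csv);
```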