
With the hardware you describe, you leave out the most important part: storage. Typical databases are bottlenecked by the disk first and by memory second; modern CPUs are so fast that they are usually not the problem. With a serious RAID setup or SSDs you can get serious throughput, and a 10M-row table will sit entirely in memory anyway for most of the tools you describe.

The workload you describe, however, will probably get hung up on locking: many users reading and writing small facts to a table while you read a large portion of that same table. There are different ways of handling this, called [isolation levels](http://en.wikipedia.org/wiki/Isolation_level) (sketched below). With the load you describe, you probably want to steer clear of that contention altogether.

This is a classic problem in a discipline called data warehousing, where you want to run large analytical queries against an online system. The usual answer is to maintain a second copy of the table, for instance via log shipping; most of the databases you tagged can do this. Log shipping creates a buffer between the fast-changing table and the analytical copy: while you hold a lock on the analytical table, the updates queue up until you are done, and since only a few people read from that table you have it all to yourself. Typically this costs just a couple of percent of your database's maximum throughput; if you are near that limit already, you have a scaling problem. If you really need to see the latest data, look into real-time BI.

Having a second copy of the data also frees you to structure it differently, in a way that is very easy to query. The central idea there is the star schema.
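To make the isolation-level point concrete, here is a minimal sketch in SQL Server dialect; the `SalesDb` database and `facts` table are made-up names. Snapshot isolation gives a long analytical read a consistent view of the data without taking the shared locks that would block your many small writers:

```sql
-- Made-up names (SalesDb, facts); SQL Server dialect.
-- Enable row versioning once, at the database level.
ALTER DATABASE SalesDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- The long analytical read sees a consistent snapshot and takes no
-- shared locks, so the many small OLTP writes are not blocked by it.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;

SELECT customer_id, SUM(amount) AS total
FROM   facts
GROUP  BY customer_id;

COMMIT;
```

Other engines get the same effect through their MVCC defaults; in PostgreSQL, for example, plain READ COMMITTED already keeps readers from blocking writers.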
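As for the star schema just mentioned, here is a minimal sketch, again with made-up names: one narrow fact table holds the measurements, surrounded by small dimension tables that queries filter and group by.

```sql
-- Made-up example of a star schema: dimensions carry descriptive
-- attributes, the fact table carries the numbers being analyzed.
CREATE TABLE dim_date (
    date_key     INT PRIMARY KEY,
    calendar_day DATE        NOT NULL,
    month_name   VARCHAR(10) NOT NULL,
    year_no      INT         NOT NULL
);

CREATE TABLE dim_customer (
    customer_key  INT PRIMARY KEY,
    customer_name VARCHAR(100) NOT NULL,
    region        VARCHAR(50)  NOT NULL
);

CREATE TABLE fact_sales (
    date_key     INT NOT NULL REFERENCES dim_date (date_key),
    customer_key INT NOT NULL REFERENCES dim_customer (customer_key),
    amount       DECIMAL(12, 2) NOT NULL
);

-- Analytical queries become simple joins from the fact table out to
-- the dimensions:
SELECT d.year_no, c.region, SUM(f.amount) AS revenue
FROM   fact_sales   f
JOIN   dim_date     d ON d.date_key     = f.date_key
JOIN   dim_customer c ON c.customer_key = f.customer_key
GROUP  BY d.year_no, c.region;
```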
Regards, GJ
