MongoDB slow writes cause socket timeout exception
I am having performance issues with MongoDB.

Running on:

- MongoDB 2.0.1
- Windows 2008 R2
- 12 GB RAM
- 2 TB HDD (5400 rpm)

I've written a daemon which removes and inserts records asynchronously. Each hour most of the collections are cleared and receive newly inserted data (10-12 million deletes and 10-12 million inserts). The daemon uses ~60-80% of the CPU while inserting the data (due to calculating 1+ million knapsack problems). When I fire up the daemon, it does its job for about 1-2 minutes until it crashes due to a socket timeout (while writing data to the MongoDB server).

When I look in the logs, I see that it takes about 30 seconds to remove data from a collection. It seems to have something to do with the CPU load and memory usage, because when I run the daemon on a different PC everything goes fine.

Is there any optimization possible, or am I just bound to using a separate PC for running the daemon (or picking another document store)?

**UPDATE 11/13/2011 18:44 GMT+1**

Still having problems. I've made some modifications to my daemon: I've decreased the number of concurrent writes. However, the daemon still crashes when memory is getting full (11.8 GB of 12 GB) and it receives more load (loading data into the frontend). It crashes due to a long insert/remove in MongoDB (30 seconds). **The crash of the daemon is caused by MongoDB responding slowly (socket timeout exception).** Of course there should be try/catch statements to catch such exceptions, but it should not happen in the first place. I'm looking for a solution to solve this issue instead of working around it.

- Total storage size is: 8.1 GB
- Index size is: 2.1 GB

I guess the problem is that the working set plus indexes are too large to fit in memory, so MongoDB needs to access the HDD (which is a slow 5400 rpm disk). However, why would this be a problem? Aren't there other strategies for storing the collections (e.g. in separate files instead of large chunks of 2 GB)? If a relational database can read/write data from disk in an acceptable amount of time, why can't MongoDB?
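To check whether the working set really outgrows RAM, the sizes can be read back from the server itself. A minimal sketch, assuming the legacy 1.x mongodb-csharp-driver used in this post; the localhost connection string is an assumption, and `bargains` is the database name that appears in the log below:

```csharp
// Sketch only: read dataSize/indexSize via the dbstats command
// with the legacy 1.x mongodb-csharp-driver.
using System;
using MongoDB.Driver;

static class WorkingSetCheck
{
    static void Main()
    {
        var server = MongoServer.Create("mongodb://localhost"); // assumed host
        var stats = server.GetDatabase("bargains")
                          .RunCommand("dbstats")
                          .Response;

        // dbstats reports sizes in bytes
        long dataSize  = stats["dataSize"].ToInt64();
        long indexSize = stats["indexSize"].ToInt64();

        Console.WriteLine("data + indexes: {0:N0} MB",
                          (dataSize + indexSize) / (1024 * 1024));
        // If this approaches physical RAM (12 GB here), mongod's
        // memory-mapped files start paging against the 5400 rpm disk,
        // and removes/inserts can stall long enough to trip timeouts.
    }
}
```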
**UPDATE 11/15/2011 00:04 GMT+1**

Log file to illustrate the issue:

```
00:02:46 [conn3] insert bargains.auction-history-eu-bloodhoof-horde 421ms
00:02:47 [conn6] insert bargains.auction-history-eu-blackhand-horde 1357ms
00:02:48 [conn3] insert bargains.auction-history-eu-bloodhoof-alliance 577ms
00:02:48 [conn6] insert bargains.auction-history-eu-blackhand-alliance 499ms
00:02:49 [conn4] remove bargains.crafts-eu-agamaggan-horde 34881ms
00:02:49 [conn5] remove bargains.crafts-eu-aggramar-horde 3135ms
00:02:49 [conn5] insert bargains.crafts-eu-aggramar-horde 234ms
00:02:50 [conn2] remove bargains.auctions-eu-aerie-peak-horde 36223ms
00:02:52 [conn5] remove bargains.auctions-eu-aegwynn-horde 1700ms
```

(Note the two removes taking 34881 ms and 36223 ms.)

**UPDATE 11/18/2011 10:41 GMT+1**

After posting this issue in the mongodb user group, we found out that "drop" wasn't being issued. Drop is much faster than a full remove of all records.

I am using the official mongodb-csharp-driver. I issued this command: `collection.Drop();`. However, it didn't work, so for the time being I used this:

```csharp
public void Clear()
{
    if (collection.Exists())
    {
        var command = new CommandDocument { { "drop", collectionName } };
        collection.Database.RunCommand(command);
    }
}
```

The daemon is quite stable now, yet I still have to find out why the `collection.Drop()` method doesn't work as it is supposed to, since the driver uses the native drop command as well.
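Since the daemon dies on the driver-side socket timeout while the server finishes a ~35-second remove, one stopgap is widening that timeout. This treats the symptom rather than the cause the post is after; a sketch only, assuming `socketTimeoutMS` is honored by the 1.x driver's connection string and picking 60 seconds arbitrarily to outlast the removes in the log:

```csharp
// Sketch: widen the driver's socket timeout so a long server-side
// remove doesn't surface as a socket timeout exception.
// The 60 s value is an assumption, not a recommendation.
using MongoDB.Driver;

static class ConnectionSetup
{
    public static MongoServer Connect()
    {
        return MongoServer.Create(
            "mongodb://localhost/?socketTimeoutMS=60000"); // assumed host
    }
}
```

Combined with try/catch around the writes, this keeps the daemon alive while the drop-instead-of-remove change does the real work.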
 
