# How can I speed up update/replace operations in PostgreSQL?
We have a rather specific application that uses PostgreSQL 8.3 as a storage backend (using Python and psycopg2). The operations we perform on the important tables are in the majority of cases inserts or updates (rarely deletes or selects).

For sanity reasons we have created our own [Data Mapper](http://martinfowler.com/eaaCatalog/dataMapper.html)-like layer that works reasonably well, but it has one big bottleneck: update performance. Of course, I'm not expecting the update/replace scenario to be as speedy as the 'insert into an empty table' one, but it would be nice to get a bit closer.

Note that this system is free from concurrent updates.

We always set all the fields of each row on an update, hence the word 'replace' in my test terminology. So far I've tried two approaches to our update problem:

1. Create a `replace_item()` procedure that takes an array of rows to update:

    ```sql
    CREATE OR REPLACE FUNCTION replace_item(data item[]) RETURNS VOID AS $$
    BEGIN
        FOR i IN COALESCE(array_lower(data,1),0) .. COALESCE(array_upper(data,1),-1) LOOP
            UPDATE item SET a0=data[i].a0, a1=data[i].a1, a2=data[i].a2 WHERE key=data[i].key;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;
    ```

2. Create an `insert_or_replace` rule so that everything but the occasional delete becomes a multi-row insert:

    ```sql
    CREATE RULE "insert_or_replace" AS
        ON INSERT TO "item"
        WHERE EXISTS (SELECT 1 FROM item WHERE key=NEW.key)
        DO INSTEAD
            (UPDATE item SET a0=NEW.a0, a1=NEW.a1, a2=NEW.a2 WHERE key=NEW.key);
    ```

Both of these speed up the updates a fair bit, although the latter slows down inserts somewhat:

```
Multi-row insert           : 50000 items inserted in  1.32 seconds averaging 37807.84 items/s
executemany() update       : 50000 items updated  in 26.67 seconds averaging  1874.57 items/s
update_andres              : 50000 items updated  in  3.84 seconds averaging 13028.51 items/s
update_merlin83 (i/d/i)    : 50000 items updated  in  1.29 seconds averaging 38780.46 items/s
update_merlin83 (i/u)      : 50000 items updated  in  1.24 seconds averaging 40313.28 items/s
replace_item() procedure   : 50000 items replaced in  3.10 seconds averaging 16151.42 items/s
Multi-row insert_or_replace: 50000 items inserted in  2.73 seconds averaging 18296.30 items/s
Multi-row insert_or_replace: 50000 items replaced in  2.02 seconds averaging 24729.94 items/s
```

Random notes about the test run:

- All tests were run on the same computer the database resides on, connecting to localhost.
- Inserts and updates are applied to the database in batches of 500 items, each sent in its own transaction (**UPDATED**); a sketch of how this batching looks from the Python side follows this list.
- All update/replace tests used the same values as were already in the database.
- All data was escaped using the psycopg2 `adapt()` function.
- All tables are truncated and vacuumed before use (**ADDED**; in previous runs only truncation happened).
- The table looks like this:

    ```sql
    CREATE TABLE item (
        key MACADDR PRIMARY KEY,
        a0 VARCHAR,
        a1 VARCHAR,
        a2 VARCHAR
    )
    ```
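For illustration, the shape of the `executemany()` baseline batching from the Python side. This is a minimal sketch rather than the actual perftest.py code; the connection string, the `update_batch` helper, and the `items` list are placeholders of my own:

```python
import psycopg2

# Placeholder DSN; perftest.py uses its own db.connect() line.
conn = psycopg2.connect("dbname=test host=localhost")
cur = conn.cursor()

def update_batch(rows):
    # rows is a list of (a0, a1, a2, key) tuples; every field is set on update.
    cur.executemany(
        "UPDATE item SET a0=%s, a1=%s, a2=%s WHERE key=%s",
        rows)
    conn.commit()  # one transaction per batch

items = [
    ('v0', 'v1', 'v2', '00:00:00:00:00:01'),
    ('v3', 'v4', 'v5', '00:00:00:00:00:02'),
]  # the real test uses 50000 rows

# 500-item batches, each committed in its own transaction,
# matching the test setup described in the notes above.
for i in range(0, len(items), 500):
    update_batch(items[i:i + 500])
```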
So, the real question is: how can I speed up update/replace operations a bit more? (I think these findings might be 'good enough', but I don't want to give up without tapping the SO crowd :)

Also, anyone's hints towards a more elegant `replace_item()`, or evidence that my tests are completely broken, would be most welcome.

The test script is available [here](https://github.com/gsson/postgres-perftest/blob/master/perftest.py) if you'd like to attempt to reproduce. Remember to check it first though... it WorksForMe, but...

You will need to edit the `db.connect()` line to suit your setup.

**EDIT**

Thanks to andres in #postgresql @ freenode, I have another test with a single-query update, much like a multi-row insert (listed as `update_andres` above):

```sql
UPDATE item
SET a0=i.a0, a1=i.a1, a2=i.a2
FROM (VALUES ('00:00:00:00:00:01', 'v0', 'v1', 'v2'),
             ('00:00:00:00:00:02', 'v3', 'v4', 'v5'),
             ...
     ) AS i(key, a0, a1, a2)
WHERE item.key=i.key::macaddr
```

**EDIT**

Thanks to merlin83 in #postgresql @ freenode and jug/jwp below, I have another test with an insert-to-temp/delete/insert approach (listed as `update_merlin83 (i/d/i)` above; the `temp_item` setup is sketched at the end of this post):

```sql
INSERT INTO temp_item (key, a0, a1, a2)
VALUES ('00:00:00:00:00:01', 'v0', 'v1', 'v2'),
       ('00:00:00:00:00:02', 'v3', 'v4', 'v5'),
       ...;

DELETE FROM item
USING temp_item
WHERE item.key=temp_item.key;

INSERT INTO item (key, a0, a1, a2)
SELECT key, a0, a1, a2 FROM temp_item;
```

My gut feeling is that these tests are not very representative of real-world performance, but I think the differences are large enough to give an indication of the most promising approaches for further investigation. The perftest.py script contains all the updates as well, for those of you who want to check it out. It's fairly ugly though, so don't forget your goggles :)

**EDIT**

andres in #postgresql @ freenode pointed out that I should test with an insert-to-temp/update variant (listed as `update_merlin83 (i/u)` above):

```sql
INSERT INTO temp_item (key, a0, a1, a2)
VALUES ('00:00:00:00:00:01', 'v0', 'v1', 'v2'),
       ('00:00:00:00:00:02', 'v3', 'v4', 'v5'),
       ...;

UPDATE item
SET a0=temp_item.a0, a1=temp_item.a1, a2=temp_item.a2
FROM temp_item
WHERE item.key=temp_item.key
```

**EDIT**

Probably the final edit: I changed my script to match our load scenario better, and the numbers seem to hold even when scaling things up a bit and adding some randomness. If anyone gets very different numbers from some other scenario, I'd be interested in hearing about it.
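For completeness: the temp-table tests above need `temp_item` to exist first. Its definition isn't shown in the snippets, so the following is only a minimal sketch, assuming a session-local temp table that mirrors `item`'s columns and is cleared between batches:

```sql
-- Hypothetical setup; the actual perftest.py may create temp_item differently.
CREATE TEMP TABLE temp_item (
    key MACADDR PRIMARY KEY,
    a0 VARCHAR,
    a1 VARCHAR,
    a2 VARCHAR
);

-- Clear out the previous batch's rows before loading each new batch.
TRUNCATE temp_item;
```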