
PostgreSQL insert rules for parallel transactions
<p>We have a PostgreSQL connection pool used by a multithreaded application that permanently inserts records into a big table. Say we have 10 database connections, all executing the same function, which inserts a record.</p> <p>The trouble is that we get 10 records inserted as a result, while it should be only 2-3 records, if only the transactions could see each other's records (our function decides not to insert a record based on the date of the last record found).</p> <p>We cannot afford to lock the table for the duration of the function. We have tried different techniques to make the database apply our rules to new records immediately, despite the fact that they are created in parallel transactions, but we have not succeeded yet.</p> <p>So, I would be very grateful for any help or idea!</p> <p>To be more specific, here is the code:</p> <pre><code>CREATE TABLE schm.events (
    evtime TIMESTAMP,
    ref_id INTEGER,
    param  INTEGER,
    type   INTEGER
);
</code></pre> <p>The record filter rule:</p> <pre><code>BEGIN
    SELECT count(*) INTO nCnt
    FROM schm.events e
    WHERE e.ref_id = ref_id
      AND e.param  = param
      AND e.type   = type
      AND e.evtime BETWEEN (evtime - interval '10 seconds')
                       AND (evtime + interval '10 seconds');

    IF nCnt = 0 THEN
        INSERT INTO schm.events VALUES (evtime, ref_id, param, type);
    END IF;
END;
</code></pre> <p><strong>UPDATE (comment length is not enough, unfortunately)</strong></p> <p>I have applied the unique-index solution in production. The results are pretty acceptable, but the initial target has not been achieved.
The issue is that, with the unique hash, I cannot control the interval between two records with sequential hash codes.</p> <p>Here is the code:</p> <pre><code>CREATE TABLE schm.events_hash (
    hash_code bigint NOT NULL
);

CREATE UNIQUE INDEX ui_events_hash_hash_code
    ON schm.events_hash USING btree (hash_code);

-- generate the hash codes by partitioning (splitting) evtime into 10-second intervals:
INSERT INTO schm.events_hash
SELECT DISTINCT
    cast(
        trunc( extract(epoch from evtime) / 10 )
        || cast( ref_id as TEXT )
        || cast( type   as TEXT )
        || cast( param  as TEXT )
    as bigint)
FROM schm.events;

-- and then, in a concurrently executed function, I insert sequentially:
BEGIN
    INSERT INTO schm.events_hash VALUES (
        cast(
            trunc( extract(epoch from evtime) / 10 )
            || cast( ref_id as TEXT )
            || cast( type   as TEXT )
            || cast( param  as TEXT )
        as bigint)
    );
    INSERT INTO schm.events VALUES (evtime, ref_id, param, type);
END;
</code></pre> <p>In that case, if evtime lies within the hash-determined interval, only one record is inserted. The problem is that records which fall into different determined intervals, but are close to each other (less than 60 seconds apart), are not filtered out.</p> <pre><code>INSERT INTO schm.events VALUES ( '2013-07-22 19:32:37', '123', '10', '20' );
-- inserted, test ok:
-- trunc( extract(epoch from cast('2013-07-22 19:32:37' as timestamp)) / 10 ) = 137450715

INSERT INTO schm.events VALUES ( '2013-07-22 19:32:39', '123', '10', '20' );
-- filtered out, test ok:
-- trunc( extract(epoch from cast('2013-07-22 19:32:39' as timestamp)) / 10 ) = 137450715

INSERT INTO schm.events VALUES ( '2013-07-22 19:32:41', '123', '10', '20' );
-- inserted, test fail:
-- trunc( extract(epoch from cast('2013-07-22 19:32:41' as timestamp)) / 10 ) = 137450716
</code></pre> <p>I think there must be a way to modify the hash function to achieve the initial target, but I have not found it yet. Maybe there are some table constraint expressions that are evaluated by PostgreSQL itself, outside of the transaction?</p>
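The "table constraint expressions" idea at the end is close to what PostgreSQL exclusion constraints provide: the server itself rejects any row whose key columns match an existing row and whose time range overlaps it, and this holds across concurrent transactions. A minimal sketch under stated assumptions: it needs the `btree_gist` extension for the equality operators, the constraint name is illustrative, and the 10-second window is taken from the question.

```sql
-- Sketch only: btree_gist supplies GiST support for the = comparisons.
CREATE EXTENSION IF NOT EXISTS btree_gist;

ALTER TABLE schm.events
    ADD CONSTRAINT events_no_close_duplicates
    EXCLUDE USING gist (
        ref_id WITH =,
        param  WITH =,
        type   WITH =,
        -- reject a row whose 10-second window overlaps an existing one
        tsrange(evtime, evtime + interval '10 seconds') WITH &&
    );
```

Unlike the fixed 10-second hash buckets, overlapping ranges also catch two close records that straddle a bucket boundary (19:32:37 and 19:32:41 overlap and the second is rejected). A conflicting insert raises an error rather than being silently skipped; on PostgreSQL 9.5+ `INSERT ... ON CONFLICT DO NOTHING` can turn that into a silent skip.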
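Since locking the whole table for the duration of the function is ruled out, another option that keeps the "look at the last record" logic is a per-key advisory lock: only transactions inserting the same `(ref_id, param, type)` serialize against each other, everything else proceeds in parallel. A sketch, assuming the default READ COMMITTED isolation; the key derivation via `hashtext()` (an internal PostgreSQL function) is illustrative, and any stable integer key would do.

```sql
-- Sketch: serialize concurrent inserts for the same (ref_id, param, type) only.
-- pg_advisory_xact_lock holds the lock until the end of the transaction.
BEGIN
    PERFORM pg_advisory_xact_lock(
        hashtext( ref_id::text || ':' || param::text || ':' || type::text )
    );

    -- Under the lock, the check-then-insert is no longer racy, provided
    -- every inserter takes the same lock first: a competing transaction
    -- has already committed its row by the time we acquire the lock.
    IF NOT EXISTS (
        SELECT 1
        FROM schm.events e
        WHERE e.ref_id = ref_id
          AND e.param  = param
          AND e.type   = type
          AND e.evtime > evtime - interval '10 seconds'
    ) THEN
        INSERT INTO schm.events VALUES (evtime, ref_id, param, type);
    END IF;
END;
```

The interval in the check can be anything (for example 60 seconds), which is exactly what the fixed hash buckets could not express.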
 
