PL/SQL optimization question (Oracle)
Tags: primarykey, data, text
**Edit:** Please answer one of the two questions I ask below. I *know* there are other options that would be better in a different case. Those other options (partitioning the table, running it as one large delete statement without committing in batches, etc.) are **NOT** options in my case due to things outside my control.

I have several very large tables to delete from. All have the same foreign key, which is indexed. I need to delete certain records from all of the tables.

```
table source
  id            --primary_key
  import_source --used for choosing the ids to delete

table t1
  id --foreign key
  --other fields

table t2
  id --foreign key
  --different other fields
```

Usually when doing a delete like this, I'll put together a loop to step through all the ids:

```sql
declare
  my_counter integer := 0;
begin
  for cur in (
    select id
    from source
    where import_source = 'bad.txt'
  ) loop
    begin
      delete from source where id = cur.id;
      delete from t1 where id = cur.id;
      delete from t2 where id = cur.id;

      my_counter := my_counter + 1;
      if my_counter > 500 then
        my_counter := 0;
        commit;
      end if;
    end;
  end loop;
  commit;
end;
```

However, in some code I saw elsewhere, it was put together as separate loops, one for each delete:

```sql
declare
  type import_ids is table of integer index by pls_integer;
  my_import_ids import_ids;
  my_count integer := 0;
begin
  select id bulk collect into my_import_ids
  from source
  where import_source = 'bad.txt';

  for h in 1 .. my_import_ids.count loop
    delete from t1 where id = my_import_ids(h);
    --do commit check
  end loop;

  for h in 1 .. my_import_ids.count loop
    delete from t2 where id = my_import_ids(h);
    --do commit check
  end loop;
end;
```

(Each `--do commit check` would be replaced with the same chunk as in the first block, committing every 500 rows.)

So I need one of the following answered:

1) Which of these is better?

2) How can I find out which is better for my particular case? (i.e., does it depend on how many tables I have, how big they are, etc.?)

**Edit:**

I *must* do this in a loop due to the size of these tables. I will be deleting thousands of records from tables with hundreds of millions of records, on a system that can't afford to have the tables locked for that long.

**EDIT:**

NOTE: I am *required* to commit in batches. The amount of data is too large to do in one batch; the rollback segments would crash our database.

If there is a way to commit in batches other than looping, I'd be willing to hear it. Otherwise, don't bother saying that I shouldn't use a loop...
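For reference, a third shape this can take (a sketch only, not something I've tested against my system; it assumes the example schema above and keeps the 500-row commit interval from the first block) is to fetch the ids in chunks with `BULK COLLECT ... LIMIT` and delete each chunk with `FORALL`, committing once per chunk:

```sql
declare
  cursor c is
    select id from source where import_source = 'bad.txt';
  type id_tab is table of source.id%type index by pls_integer;
  ids id_tab;
begin
  open c;
  loop
    -- pull the next batch of ids; 500 matches the commit interval above
    fetch c bulk collect into ids limit 500;
    exit when ids.count = 0;

    -- one bulk delete per table per batch instead of one delete per row
    forall h in 1 .. ids.count
      delete from t1 where id = ids(h);
    forall h in 1 .. ids.count
      delete from t2 where id = ids(h);
    forall h in 1 .. ids.count
      delete from source where id = ids(h);

    commit; -- commit each batch, as required
  end loop;
  close c;
end;
```

The point of `FORALL` here is that each batch of deletes is handed to the SQL engine in a single context switch, which is usually the main cost saved over the row-by-row loop.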