Ok, there are a few fundamental issues.

1. I/O - Inserting into a table while reading from another table will most likely cause disk contention if the tables are not on separate disks. Put the opposing tables on physically different spindles (see the filegroup sketch below).

2. Transaction log - You need to ensure that your transaction log is on its own disk, or work in smaller transactions (a few thousand rows at a time, as in the batching sketch below), or use BCP/BULK INSERT, which is minimally logged.

3. Clustered indexes - If you are inserting all of these rows into a target table whose clustered index (the physical order data is written to disk in) is not written sequentially, the disk I/O requirements go through the roof because of page splits and re-allocation. An easy fix is to give the recipient table a clustered index on a sequentially seeded key (sketched in the table definition below). This generally ensures sequential writes into the table, almost always at the end.

4. File expansion - Ensure that SQL Server is set to expand its files at a decent rate, 10% or so. Otherwise it will have to constantly resize its files and zero out the disk. You can also avoid the zeroing by granting the SQL Server service account the "Perform volume maintenance tasks" right in Group Policy, which enables instant file initialization.

Quite frankly, other than that and a few of the other suggestions, it is highly unlikely you will make an insert of millions of rows in a single transaction truly fast. If you did it via BULK INSERT, it would be drastically faster, though that might not be what you need from an application perspective.
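As a rough illustration of point 1, the target table can live in its own filegroup backed by a file on a different physical drive. The database name (`MyDatabase`), filegroup name (`FG_Target`), and path are placeholders, not anything from the original question:

```sql
-- Sketch only: back the target table with a file on a separate drive so
-- reads from the source and writes to the target hit different spindles.
-- MyDatabase, FG_Target, and the E:\ path are placeholder names.
ALTER DATABASE MyDatabase ADD FILEGROUP FG_Target;

ALTER DATABASE MyDatabase
    ADD FILE
    (
        NAME = MyDatabase_Target,
        FILENAME = 'E:\SQLData\MyDatabase_Target.ndf',
        SIZE = 10GB,
        FILEGROWTH = 10%
    )
    TO FILEGROUP FG_Target;
```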
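For point 3, a sequentially seeded clustered key (an IDENTITY column in this sketch) keeps inserts appending at the end of the table. The table and column names are again placeholders:

```sql
-- Sketch only: an ever-increasing clustered key means new rows are always
-- written at the end of the table, avoiding page splits during the load.
CREATE TABLE dbo.TargetTable
(
    TargetId int IDENTITY(1, 1) NOT NULL,
    SourceId int                NOT NULL,
    Payload  varchar(200)       NOT NULL,
    CONSTRAINT PK_TargetTable PRIMARY KEY CLUSTERED (TargetId)
) ON FG_Target;  -- optionally place it on the dedicated filegroup above
```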
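For point 2, if you stay with INSERT ... SELECT, working in smaller transactions looks roughly like this; the batch size and the NOT EXISTS de-duplication are assumptions about the workload:

```sql
-- Sketch only: copy rows in batches of 10,000 so each transaction, and the
-- log space it reserves, stays small instead of one multi-million-row unit.
DECLARE @BatchSize int = 10000;
DECLARE @RowsMoved int = 1;

WHILE @RowsMoved > 0
BEGIN
    BEGIN TRANSACTION;

    INSERT INTO dbo.TargetTable (SourceId, Payload)
    SELECT TOP (@BatchSize) s.Id, s.Payload
    FROM dbo.SourceTable AS s
    WHERE NOT EXISTS
    (
        SELECT 1 FROM dbo.TargetTable AS t WHERE t.SourceId = s.Id
    );

    SET @RowsMoved = @@ROWCOUNT;

    COMMIT TRANSACTION;
END;
```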
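For point 4, the growth increments can be set up front. The logical file names below are placeholders; query sys.database_files for the real ones:

```sql
-- Sketch only: pre-set sensible growth increments so the data and log files
-- are not resized (and zeroed) over and over during the load.
ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_Data, FILEGROWTH = 10%);

ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_Log, FILEGROWTH = 512MB);
```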
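And the BULK INSERT route mentioned at the end, assuming the rows can be staged in a flat file whose fields line up with the table's columns; the path and format options are illustrative:

```sql
-- Sketch only: load the target table from a staged flat file. With TABLOCK
-- and the right recovery model this can be minimally logged.
BULK INSERT dbo.TargetTable
FROM 'E:\staging\target_rows.csv'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    TABLOCK,            -- table lock enables minimal logging
    BATCHSIZE = 100000  -- commit in chunks to keep the log in check
);
```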
 
