I don't think you can have SO MANY columns in the table that their on/off combinations become prohibitively numerous.

So for each row you can extract its column combination (you can order the fields alphabetically, for example) and use its structure as a key:

```php
// $tuple has already been sorted by key (e.g. with ksort()),
// so identical column sets always produce the same signature
$syndrome = implode(',', array_keys($tuple));
$values   = array_values($tuple);

if (isset($big_inserts[$syndrome])) {
    array_push($big_inserts[$syndrome], $values);
} else {
    $big_inserts[$syndrome] = array($values);
}
```

At the end of the loop you will find yourself with a `$big_inserts` array with a certain number of keys. Each key maps to an array of sets of values, suitable for a multiple insert.

Unless you're really, really unlucky, you'll have far fewer "multiple inserts" than the individual inserts you started with. If all inserts have the same columns, you will have only one key in `$big_inserts`, holding all the tuples.

Now cycle over `$big_inserts`, and for every key prepare a statement. The array of values to be sent to PDO is the concatenation of all the tuples in `$big_inserts[$key]`:

```php
foreach ($big_inserts as $fields => $lot) {
    $SQL = "INSERT INTO table ($fields) VALUES ";

    // Build one (?,?,?,...) placeholder tuple
    $tuple = '(' . implode(',', array_fill(0, count($lot[0]), '?')) . ')';

    // Repeat it once for every tuple in this lot
    $SQL .= implode(',', array_fill(0, count($lot), $tuple));

    // Flatten the lot into a single list of bound values
    $values = array();
    foreach ($lot as $set) {
        $values = array_merge($values, $set);
    }

    // Now $SQL has all the '?'s and $values contains all the values:
    // run the statement ($pdo being your PDO connection).
    $stmt = $pdo->prepare($SQL);
    $stmt->execute($values);
}
```

If this is not enough, you might have to split the statements into chunks, save them in the session and execute each chunk sequentially, maybe using a separate table to simulate a "multi-roundtrip" transaction (in case the connection gets lost, the user closes the browser, or whatever, with half the chunks already executed and the other half still to go). Use straight INSERTs into a table with the same structure; then, when that table is ready, run one single INSERT INTO ... SELECT FROM within a transaction and drop the ancillary table after commit (a sketch of this pattern follows below).

If they were simple INSERTs I'd try temporarily disabling some indexes, but you rely on them for the ON DUPLICATE KEY UPDATE, so that's a no-go.
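A minimal sketch of that staging-table pattern, assuming a MySQL/MariaDB server, an existing PDO connection in `$pdo`, and a hypothetical target table `target` with hypothetical columns `col_a` and `col_b` (the ON DUPLICATE KEY UPDATE clause is included on the assumption that the final copy needs the same upsert behaviour as the original statements):

```php
<?php
// Hypothetical names: `target` is the real table, `target_staging` is the
// ancillary table with the same structure, col_a/col_b are placeholder columns.
$pdo->exec("CREATE TABLE target_staging LIKE target");

// 1. Run the chunked multi-row INSERTs from the previous snippet against
//    target_staging instead of target (possibly across several requests,
//    keeping track of progress in the session).

// 2. When every chunk is in, move the rows with a single statement inside
//    a transaction, so the copy is all-or-nothing.
$pdo->beginTransaction();
try {
    $pdo->exec(
        "INSERT INTO target (col_a, col_b)
         SELECT col_a, col_b FROM target_staging
         ON DUPLICATE KEY UPDATE col_b = VALUES(col_b)"
    );
    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}

// DROP TABLE is DDL and causes an implicit commit in MySQL,
// so drop the ancillary table only after the commit above.
$pdo->exec("DROP TABLE target_staging");
```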