Use a [Bloom filter](http://en.wikipedia.org/wiki/Bloom_filter): a table of simultaneous hashes. According to Wikipedia, the optimal number of hashes is `ln(2) * 2^32 / 2^30 ≈ 2.77 ≈ 3` (that is, k = (m/n) · ln 2 with m = 2^32 table bits and n = 2^30 records). (Hmm, plugging in 4 gives fewer false positives, but 3 is still better for this application.) This means you have a table of 512 megabytes, or 4 gigabits, and processing each record sets three new bits in that vast sea. If all three bits were already set, it's a potential match: record the three hash values to one file. Otherwise, record them to another file. Note the record index along with each match.

(If a 5% error rate is tolerable, omit the large file and use the small file as your results.)

When finished, you should have a file of about 49M possible positive matches and a file of 975M negatives which may yet match positives. Read the former into a `vector<pair<vector<uint32_t>, vector<uint32_t>>>` (the record indexes go in the latter `vector`; the former could be an `array`, since it always holds three hashes) and sort it. Put the indexes in another `vector<uint32_t>`; they're already sorted. Read the large file, but instead of setting bits in a table, find the hash values in the `vector` (for example, with `equal_range`). Use the list of positive-file indices to track the index of the current record in the negative file. If no match is found, ignore the record. Otherwise, append the record's index: `match->second.push_back(current_negative_record_index)`.

Finally, iterate through the map and the vectors of record indices. Any bucket with more than one entry is "almost" certain to contain a set of duplicates, but you've come this far, so look them up and compare them completely to be sure.

Total synchronous disk I/O: (one pass = 1 TiB) + (96 hash bits per record = 12 GiB) + (32 index bits per positive = ~200 MiB).

**Final edit** (seriously): On second thought, the Bloom filter aspect might not really be helping here. The amount of hash data is more of a limiting factor than the number of false positives. With just one hash function, the total amount of hash data would be 4 GiB and the indexes of the 124 million expected false positives would be ~500 MiB. That should globally optimize this strategy.

**Clarification** (got a downvote): there's a distinction between a false positive from the Bloom filter and a hash collision. A hash collision can't be resolved except by returning to the original records and comparing, which is expensive. A Bloom false positive can be resolved by returning to the original hash values and comparing them, which is what the second pass of this algorithm does. So, on second thought, the one-hash filter described in the "final" edit would unduly cause disk seeks. A two-hash Bloom filter would increase the number of false positives ending up in a single bucket of the `match` map, but would bring the number of false positives back down to the tens of millions.
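
To make the two passes concrete, here is a minimal C++17 sketch of the scheme above. It is an illustration under stated assumptions, not the answer's actual code: `Record`, `hash_record`, the bit-mixing constants, and the in-memory vectors standing in for the positive and negative files are all made up for the example; a real run would stream the 1 TiB of records from disk and spill the hash triples to files.

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Sketch only: Record, hash_record, and the in-memory "files" below are
// illustrative stand-ins. A real run would stream the 1 TiB of records from
// disk and spill the hash triples to the positive/negative files instead.
using Record     = std::string;
using HashTriple = std::array<uint32_t, 3>;
using Bucket     = std::pair<HashTriple, std::vector<uint32_t>>;  // triple -> record indices

// Derive three hash values per record (the mixing constants are arbitrary,
// just enough to act like three independent hash functions for the demo).
HashTriple hash_record(const Record& r) {
    uint64_t x = std::hash<Record>{}(r);
    return { static_cast<uint32_t>(x),
             static_cast<uint32_t>(x >> 21) * 2654435761u,
             static_cast<uint32_t>(x >> 42) * 0x9e3779b9u };
}

// First pass: the Bloom table. The answer uses 2^32 bits (512 MB); the bit
// count is a parameter here so the sketch can run small. Must be a power of two.
class BloomTable {
public:
    explicit BloomTable(uint64_t bits) : words_((bits + 63) / 64), mask_(bits - 1) {}

    // Set the record's three bits; report whether all three were already set,
    // i.e. whether this record is a *potential* duplicate.
    bool test_and_set(const HashTriple& h) {
        bool all_set = true;
        for (uint32_t v : h) {
            uint64_t bit  = v & mask_;
            uint64_t flag = 1ull << (bit % 64);
            if (!(words_[bit / 64] & flag)) all_set = false;
            words_[bit / 64] |= flag;
        }
        return all_set;
    }

private:
    std::vector<uint64_t> words_;
    uint64_t mask_;
};

// Second pass: `positives` holds the triples from the positive file; stream
// the negative triples and append any matching record index, i.e. the
// answer's match->second.push_back(current_negative_record_index) step.
void match_negatives(std::vector<Bucket>& positives,
                     const std::vector<std::pair<HashTriple, uint32_t>>& negatives) {
    auto less = [](const Bucket& a, const Bucket& b) { return a.first < b.first; };
    std::sort(positives.begin(), positives.end(), less);
    for (const auto& [triple, record_index] : negatives) {
        Bucket key{triple, {}};
        auto [lo, hi] = std::equal_range(positives.begin(), positives.end(), key, less);
        for (auto it = lo; it != hi; ++it)
            it->second.push_back(record_index);  // candidate duplicate of this bucket
    }
    // Any bucket whose index vector holds more than one entry is almost
    // certainly a duplicate set; verify against the original records.
}

int main() {
    std::vector<Record> records = {"alpha", "beta", "alpha", "gamma", "beta"};
    BloomTable bloom(1u << 16);  // tiny table, just for the demo
    std::vector<Bucket> positives;
    std::vector<std::pair<HashTriple, uint32_t>> negatives;
    for (uint32_t i = 0; i < records.size(); ++i) {
        HashTriple h = hash_record(records[i]);
        if (bloom.test_and_set(h)) positives.push_back({h, {i}});
        else                       negatives.push_back({h, i});
    }
    match_negatives(positives, negatives);
}
```

One simplification to note: the sketch stores a record index next to each negative triple instead of reconstructing it from the list of positive-file indices as the answer describes, which costs 32 extra bits per negative but keeps the matching loop trivial.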
 
