> If I know the number of keys and exactly what are those keys, is there any way in python to make a dict (or a hashtable) work more efficiently? I vaguely remember that if you know the keys, you can design the hash function smartly (perfect hash?) and allocate the space beforehand.

Python doesn't expose a pre-sizing option to speed up the "growth phase" of a dictionary, nor does it provide any direct control over "placement" within the dictionary.

That said, if the keys are always known in advance, you can store them in a [*set*](http://docs.python.org/2.7/library/functions.html#func-set) and build your dictionaries from the set using [*dict.fromkeys()*](http://docs.python.org/2.7/library/stdtypes.html#dict.fromkeys). That classmethod is [optimized to pre-size the dictionary based on the set size](http://hg.python.org/cpython/file/09811ecd5df1/Objects/dictobject.c#l1378), and it can populate the dictionary without any new calls to `__hash__()`:

```
>>> keys = {'red', 'green', 'blue', 'yellow', 'orange', 'pink', 'black'}
>>> d = dict.fromkeys(keys)   # dict is pre-sized to 32 empty slots
```

If reducing collisions is your goal, you can run experiments on the insertion order to minimize pile-ups. (Take a look at [Brent's variation on Algorithm D](http://www.drdobbs.com/cpp/tables-within-tables-c-and-brents-techni/199900579) in Knuth's TAOCP to get an idea of how this is done.)

By instrumenting a pure Python model for dictionaries (such as [this one](http://code.activestate.com/recipes/578375-proof-of-concept-for-a-more-space-efficient-faster/)), it is possible to count the weighted-average number of probes for an alternative insertion order. For example, inserting `dict.fromkeys([11100, 22200, 44400, 33300])` averages 1.75 probes per lookup. That beats the 2.25 average probes per lookup for `dict.fromkeys([33300, 22200, 11100, 44400])`. (A sketch of such a probe-counting model appears at the end of this answer.)

Another "trick" is to increase sparseness in a fully populated dictionary by fooling it into [increasing its size without adding new keys](http://hg.python.org/cpython/file/09811ecd5df1/Objects/dictobject.c#l1548):

```
d = dict.fromkeys(['red', 'green', 'blue', 'yellow', 'orange'])
d.update(dict(d))   # This makes room for additional keys
                    # and makes the set collision-free.
```

Lastly, you can introduce your own custom `__hash__()` for your keys with the goal of eliminating all collisions (perhaps using a perfect hash generator such as [*gperf*](http://www.gnu.org/software/gperf/)).
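To make the probe-counting idea above concrete, here is a minimal sketch of such a pure Python model. It is not taken from the linked recipe; it assumes the probe sequence used by CPython 2.7's `dictobject.c` (start at `hash & mask`, then walk `i = 5*i + perturb + 1`, shifting `perturb` right by `PERTURB_SHIFT` bits after each collision), a fixed 8-slot table, and a hypothetical helper name `average_probes`:

```
PERTURB_SHIFT = 5  # shift width used by CPython's probe sequence

def average_probes(keys, table_size=8):
    """Insert *keys* into an empty open-addressing table and return the
    average number of slots probed per key.  Because a lookup retraces
    the same probe sequence, this equals the probes-per-lookup cost."""
    mask = table_size - 1
    slots = [None] * table_size        # model table: slot -> key
    total = 0
    for key in keys:
        h = hash(key) & (2 ** 64 - 1)  # the C code treats hashes as unsigned
        i = h & mask                   # initial slot
        total += 1
        perturb = h
        while slots[i] is not None:    # collision: follow the probe chain
            i = (5 * i + perturb + 1) & mask
            perturb >>= PERTURB_SHIFT
            total += 1
        slots[i] = key
    return total / len(keys)
```

Run on the two insertion orders quoted above, this model reproduces the 1.75 versus 2.25 figures:

```
>>> average_probes([11100, 22200, 44400, 33300])
1.75
>>> average_probes([33300, 22200, 11100, 44400])
2.25
```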
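And for the custom `__hash__()` route, here is a minimal sketch (again, not from the original answer) of a hand-built perfect hash over a fixed key set: each key is assigned a distinct small integer, so `hash & mask` puts every key in its own slot for any power-of-two table of at least 8 slots. The `ColorKey` class and its slot table are assumptions for illustration:

```
class ColorKey(str):
    """A str subclass whose hash is a hand-assigned perfect hash."""
    _slot = {'red': 0, 'green': 1, 'blue': 2, 'yellow': 3,
             'orange': 4, 'pink': 5, 'black': 6}

    def __hash__(self):
        # Distinct small integers guarantee distinct slots, since the
        # dictionary picks a slot with hash & (table_size - 1).
        return ColorKey._slot[str(self)]

# Build a collision-free dictionary.  Note that lookups must also use
# ColorKey instances: a plain str hashes differently, so mixing the two
# as keys would violate the equal-objects-have-equal-hashes invariant.
d = dict.fromkeys(ColorKey(c) for c in ColorKey._slot)
```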