It's called an [**unrolled linked list**](http://en.wikipedia.org/wiki/Unrolled_linked_list). There appear to be a couple of advantages, one in speed and one in space. First, if the number of elements in each node is appropriately sized (e.g., at most the size of one cache line), you get noticeably better cache performance from the improved memory locality. Second, since you have O(*n*/*m*) links, where *n* is the number of elements in the unrolled linked list and *m* is the number of elements you can store in any node, you can also save an appreciable amount of space, which is particularly noticeable if each element is small. When constructing unrolled linked lists, implementations apparently try to leave some spare space in each node; when you insert into a full node, you move half of its elements into a new node. Thus, at most one node will be less than half full. And according to what I can find (I haven't done any analysis myself), if you insert things randomly, nodes tend to end up about three-quarters full, or even fuller if operations tend to be at the end of the list.

And as everyone else (including Wikipedia) is saying, you might want to check out [skip lists](http://en.wikipedia.org/wiki/Skip_list). Skip lists are a nifty probabilistic data structure used to store ordered data with O(log *n*) expected running time for insert, delete, and find. They're implemented as a "tower" of linked lists, with each layer holding fewer elements the higher up it is. On the bottom there's an ordinary linked list holding all the elements; each successive layer has fewer elements, by a factor of *p* (usually 1/2 or 1/4). It's built as follows: each time an element is added, it's inserted into the appropriate place in the bottom list (this uses the "find" operation, which can also be made fast). Then, with probability *p*, it's also inserted into the appropriate place in the list "above" it, creating that list if necessary; if it was placed in a higher list, it will *again* appear in the list above that with probability *p*. To query the structure, you start in the top list; if the next element you see is too large, you drop down to the next lower list and keep scanning from there. It's sort of like a binary search. Wikipedia explains it very well, with nice diagrams. Memory usage will be worse, of course, and you won't get the improved cache performance, but a skip list is generally going to be faster. (Minimal sketches of both structures follow the references below.)

### References

- “Unrolled Linked List”, http://en.wikipedia.org/wiki/Unrolled_linked_list
- “Unrolled Linked Lists”, http://blogs.msdn.com/b/devdev/archive/2005/08/22/454887.aspx
- “Skip List”, http://en.wikipedia.org/wiki/Skip_list
- The skip list lecture(s) from my algorithms class.
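To make the node-splitting idea concrete, here is a minimal sketch in Python of an unrolled linked list with positional insert. `CAPACITY`, `Node`, and `UnrolledList` are illustrative names of my own choosing, not from any particular library; a real implementation would size the capacity to a cache line and keep elements in a fixed-size array rather than a Python list.

```python
# Minimal unrolled linked list sketch (illustrative, not a tuned implementation).
# Each node holds up to CAPACITY elements; a full node is split in half on insert,
# so no node (other than possibly one) drops below half full.

CAPACITY = 8  # in practice, chosen so a node's element array fits in one cache line


class Node:
    def __init__(self):
        self.elements = []   # up to CAPACITY items
        self.next = None


class UnrolledList:
    def __init__(self):
        self.head = Node()

    def insert(self, index, value):
        node = self.head
        # Walk node by node until the target index falls inside this node.
        while index > len(node.elements) and node.next is not None:
            index -= len(node.elements)
            node = node.next
        if len(node.elements) == CAPACITY:
            # Split: move the back half into a new node, then insert normally.
            new_node = Node()
            half = CAPACITY // 2
            new_node.elements = node.elements[half:]
            node.elements = node.elements[:half]
            new_node.next = node.next
            node.next = new_node
            if index > len(node.elements):
                index -= len(node.elements)
                node = new_node
        node.elements.insert(index, value)

    def __iter__(self):
        node = self.head
        while node is not None:
            yield from node.elements
            node = node.next


# Usage: appends fill nodes; a middle insert splits a full node when needed.
ul = UnrolledList()
for i in range(20):
    ul.insert(i, i)
ul.insert(5, 99)
print(list(ul))
```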
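Similarly, here is a hedged sketch of a skip list supporting insert and find, assuming a promotion probability p = 1/2 and a fixed maximum of 16 levels. `SkipNode`, `SkipList`, and `MAX_LEVEL` are again illustrative names, not part of any standard API.

```python
import random

# Minimal skip list sketch (probabilistic ordered collection), illustrative only.

MAX_LEVEL = 16
P = 0.5  # probability of promoting an element to the next level up


class SkipNode:
    def __init__(self, value, level):
        self.value = value
        self.forward = [None] * level   # one "next" pointer per level


class SkipList:
    def __init__(self):
        self.head = SkipNode(None, MAX_LEVEL)  # sentinel with pointers at every level
        self.level = 1                         # number of levels currently in use

    def _random_level(self):
        level = 1
        while random.random() < P and level < MAX_LEVEL:
            level += 1
        return level

    def contains(self, value):
        node = self.head
        # Start at the top level; move right while the next value is too small,
        # otherwise drop down a level -- a coarse-to-fine search, like binary search.
        for i in reversed(range(self.level)):
            while node.forward[i] is not None and node.forward[i].value < value:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.value == value

    def insert(self, value):
        update = [self.head] * MAX_LEVEL
        node = self.head
        for i in reversed(range(self.level)):
            while node.forward[i] is not None and node.forward[i].value < value:
                node = node.forward[i]
            update[i] = node               # last node before the insert point on level i
        new_level = self._random_level()
        self.level = max(self.level, new_level)
        new_node = SkipNode(value, new_level)
        for i in range(new_level):
            new_node.forward[i] = update[i].forward[i]
            update[i].forward[i] = new_node


# Usage
sl = SkipList()
for v in [30, 10, 50, 20, 40]:
    sl.insert(v)
print(sl.contains(20), sl.contains(25))   # True False
```

The `update` array records, for each level, the last node before the insertion point; it is produced by exactly the "scan right, then drop down" search described above, and splicing the new node in at each of its levels is what builds the tower.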
 
