I think there are several questions buried in this topic:

- How do you implement `buildHeap` so it runs in *O(n)* time?
- How do you show that `buildHeap` runs in *O(n)* time when implemented correctly?
- Why doesn't that same logic work to make heap sort run in *O(n)* time rather than *O(n log n)*?

Often, answers to these questions focus on the difference between `siftUp` and `siftDown`. Making the correct choice between `siftUp` and `siftDown` is critical to get *O(n)* performance for `buildHeap`, but does nothing to help one understand the difference between `buildHeap` and `heapSort` in general. Indeed, proper implementations of both `buildHeap` and `heapSort` will **only** use `siftDown`. The `siftUp` operation is only needed to perform inserts into an existing heap, so it would be used to implement a priority queue using a binary heap, for example.

I've written this to describe how a max heap works. This is the type of heap typically used for heap sort or for a priority queue where higher values indicate higher priority. A min heap is also useful; for example, when retrieving items with integer keys in ascending order or strings in alphabetical order. The principles are exactly the same; simply switch the sort order.

The **heap property** specifies that each node in a binary heap must be at least as large as both of its children. In particular, this implies that the largest item in the heap is at the root. Sifting down and sifting up are essentially the same operation in opposite directions: move an offending node until it satisfies the heap property:

- `siftDown` swaps a node that is too small with its largest child (thereby moving it down) until it is at least as large as both nodes below it.
- `siftUp` swaps a node that is too large with its parent (thereby moving it up) until it is no larger than the node above it.

The number of operations required for `siftDown` and `siftUp` is proportional to the distance the node may have to move. For `siftDown`, it is the distance from the bottom of the tree, so `siftDown` is expensive for nodes at the top of the tree. With `siftUp`, the work is proportional to the distance from the top of the tree, so `siftUp` is expensive for nodes at the bottom of the tree. Although both operations are *O(log n)* in the worst case, in a heap, only one node is at the top whereas half the nodes lie in the bottom layer. So **it shouldn't be too surprising that if we have to apply an operation to every node, we would prefer `siftDown` over `siftUp`.**
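To make the two operations concrete, here is a minimal sketch in C. It assumes the standard array layout for a binary max heap (root at index 0, children of node *i* at indices *2i + 1* and *2i + 2*); the function names follow the discussion above, but the exact signatures are illustrative assumptions, not code from a particular library:

```c
static void swap(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Move a[i] down until it is at least as large as both of its children.
 * The heap occupies a[0..n-1]. */
void siftDown(int a[], int i, int n) {
    for (;;) {
        int largest = i;
        int left  = 2 * i + 1;
        int right = 2 * i + 2;
        if (left  < n && a[left]  > a[largest]) largest = left;
        if (right < n && a[right] > a[largest]) largest = right;
        if (largest == i) return;      /* heap property restored */
        swap(&a[i], &a[largest]);
        i = largest;                   /* keep following the node down */
    }
}

/* Move a[i] up until it is no larger than its parent. */
void siftUp(int a[], int i) {
    while (i > 0) {
        int parent = (i - 1) / 2;
        if (a[i] <= a[parent]) return; /* heap property restored */
        swap(&a[i], &a[parent]);
        i = parent;                    /* keep following the node up */
    }
}
```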
The `buildHeap` function takes an array of unsorted items and moves them until they all satisfy the heap property, thereby producing a valid heap. There are two approaches one might take for `buildHeap` using the `siftUp` and `siftDown` operations we've described.

1. Start at the top of the heap (the beginning of the array) and call `siftUp` on each item. At each step, the previously sifted items (the items before the current item in the array) form a valid heap, and sifting the next item up places it into a valid position in the heap. After sifting up each node, all items satisfy the heap property.

2. Or, go in the opposite direction: start at the end of the array and move backwards towards the front. At each iteration, you sift an item down until it is in the correct location.

Both of these approaches will produce a valid heap. The question is: which implementation of `buildHeap` is more efficient? Unsurprisingly, it is the second one, the one that uses `siftDown`.

Let *h = log n* represent the height of the heap. The work required for the `siftDown` approach is given by the sum

```
(0 * n/2) + (1 * n/4) + (2 * n/8) + ... + (h * 1).
```

Each term in the sum has the maximum distance a node at the given height will have to move (zero for the bottom layer, *h* for the root) multiplied by the number of nodes at that height. In contrast, the sum for calling `siftUp` on each node is

```
(h * n/2) + ((h-1) * n/4) + ((h-2) * n/8) + ... + (0 * 1).
```

It should be clear that the second sum is larger. The first term alone is *hn/2 = 1/2 n log n*, so this approach has complexity at best *O(n log n)*. But how do we prove that the sum for the `siftDown` approach is indeed *O(n)*? One method (there are other analyses that also work) is to turn the finite sum into an infinite series and then use Taylor series. We may ignore the first term, which is zero:

[![Taylor series for buildHeap complexity](https://i.stack.imgur.com/959f6.png)](https://i.stack.imgur.com/959f6.png)

If you aren't sure why each of those steps works, here is a justification for the process in words:

- The terms are all positive, so the finite sum must be smaller than the infinite sum.
- The series is equal to a power series evaluated at *x = 1/2*.
- That power series is equal to (a constant times) the derivative of the Taylor series for *f(x) = 1/(1-x)*.
- *x = 1/2* is within the interval of convergence of that Taylor series.
- Therefore, we can replace the Taylor series with *1/(1-x)*, differentiate, and evaluate to find the value of the infinite series.

Since the infinite sum is exactly *n*, we conclude that the finite sum is no larger and is therefore *O(n)*.

The next question is: if it is possible to run `buildHeap` in linear time, why does heap sort require *O(n log n)* time? Well, heap sort consists of two stages. First, we call `buildHeap` on the array, which requires *O(n)* time if implemented optimally. The next stage is to repeatedly delete the largest item in the heap and put it at the end of the array. Because we delete an item from the heap, there is always an open spot just after the end of the heap where we can store the item. So heap sort achieves a sorted order by successively removing the next largest item and putting it into the array starting at the last position and moving towards the front. It is the complexity of this last part that dominates in heap sort. The loop looks like this:

```
for (i = n - 1; i > 0; i--) {
    arr[i] = deleteMax();
}
```

Clearly, the loop runs *O(n)* times (*n - 1* to be precise; the last item is already in place). The complexity of `deleteMax` for a heap is *O(log n)*. It is typically implemented by removing the root (the largest item left in the heap) and replacing it with the last item in the heap, which is a leaf, and therefore one of the smallest items. This new root will almost certainly violate the heap property, so you have to call `siftDown` until you move it back into an acceptable position. This also has the effect of moving the next largest item up to the root.
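As a sketch, continuing the array layout assumed above (the loop's `deleteMax()` keeps the heap implicit, so the explicit array-and-size signature here is an assumption made for the sake of a self-contained example):

```c
/* Remove and return the largest item from the max heap a[0..*n-1],
 * shrinking the heap by one. Uses siftDown from the sketch above. */
int deleteMax(int a[], int *n) {
    int top = a[0];         /* the largest item is at the root */
    *n -= 1;
    a[0] = a[*n];           /* replace the root with the last leaf */
    siftDown(a, 0, *n);     /* restore the heap property: O(log n) */
    return top;
}
```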
Notice that, in contrast to `buildHeap`, where for most of the nodes we are calling `siftDown` from the bottom of the tree, we are now calling `siftDown` from the top of the tree on each iteration! *Although the tree is shrinking, it doesn't shrink fast enough*: the height of the tree stays constant until you have removed the first half of the nodes (when you clear out the bottom layer completely). Then for the next quarter, the height is *h - 1*. So the total work for this second stage is

```
h*n/2 + (h-1)*n/4 + ... + 0 * 1.
```

Notice the switch: now the zero-work case corresponds to a single node and the *h*-work case corresponds to half the nodes. This sum is *O(n log n)*, just like the inefficient version of `buildHeap` that is implemented using `siftUp`. But in this case, we have no choice, since we are trying to sort and we require the next largest item to be removed next.

In summary, the work for heap sort is the sum of the two stages: *O(n)* time for `buildHeap` and **O(n log n) to remove each node in order**, so the complexity is *O(n log n)*. You can prove (using some ideas from information theory) that for a comparison-based sort, *O(n log n)* is the best you could hope for anyway, so there's no reason to be disappointed by this or to expect heap sort to achieve the *O(n)* time bound that `buildHeap` does.
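To tie the pieces together, here is a compilable sketch of a `siftDown`-only `buildHeap` and the two-stage heap sort described above (it reuses `siftDown` from the first sketch; the in-place swap standing in for `deleteMax` and the exact loop bounds are the standard array formulation, stated here as assumptions rather than canonical code):

```c
#include <stdio.h>

/* buildHeap: sift every non-leaf down, starting from the last parent.
 * Nodes at indices n/2 .. n-1 are leaves and need no work, which is
 * exactly why the cheap, bottom-heavy sum applies: O(n) total. */
void buildHeap(int a[], int n) {
    for (int i = n / 2 - 1; i >= 0; i--)
        siftDown(a, i, n);
}

/* heapSort: build the heap, then repeatedly move the max into the
 * open slot just past the end of the shrinking heap. */
void heapSort(int a[], int n) {
    buildHeap(a, n);                          /* stage 1: O(n) */
    for (int i = n - 1; i > 0; i--) {
        int t = a[0]; a[0] = a[i]; a[i] = t;  /* deleteMax into slot i */
        siftDown(a, 0, i);                    /* heap is now a[0..i-1] */
    }
}

int main(void) {
    int a[] = {5, 3, 8, 1, 9, 2, 7};
    int n = (int)(sizeof a / sizeof a[0]);
    heapSort(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);                  /* prints: 1 2 3 5 7 8 9 */
    printf("\n");
    return 0;
}
```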
 
