Heaps
The case study for heaps begins with a need for sorted collections. This section illustrates implementing a simple priority queue ADT with a heap. The priority queue is exactly what it sounds like: a queue with additional logic that determines some form of ordering, or priority. You can implement a priority queue easily with an unsorted list, but finding (and removing) the min (or max) value (priority) takes O(n) time. You can alternatively use a sorted list, which results in O(1) operations to find the min or max element, but insertion still takes O(n) time, meaning that priority queue construction takes O(n^2) time. Enter the heap structure. Properly designed heaps allow you to create a priority queue from a sequential structure in O(n) time, and add or remove elements in O(log(n)) time, making heaps more efficient overall than lists for implementing basic priority queues.
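To make the ADT concrete, here is a minimal sketch of what a basic priority queue interface might look like as a Rust trait. The trait and method names here are illustrative, not taken from any particular library:

```rust
/// A hypothetical trait for the basic priority queue ADT.
/// For a min-queue, `pop` returns the minimum; for a max-queue, the maximum.
pub trait PriorityQueue<T: Ord> {
    /// Add an element to the queue.
    fn push(&mut self, value: T);
    /// Remove and return the highest-priority element, if any.
    fn pop(&mut self) -> Option<T>;
    /// Peek at the highest-priority element without removing it.
    fn peek(&self) -> Option<&T>;
}
```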
The Heap ADT
Heaps are a special kind of tree with two specific additional properties:
1) The Heap-Order Property:
Each parent is either >= its children (max heap) or <= its children (min heap). Importantly, this applies only to parent-child relationships. Requiring order between siblings as well (left subtree < parent < right subtree) is a property of binary search trees (BSTs), not heaps.
The heap-order property says that heaps are ordered, but this does not necessarily imply total ordering, or a sorted invariant. Consider the following min (binary) heap:
```
        3
      /   \
     5     9
    / \   / \
   8  12 11  14
```
Notice that each node is larger than its parent. The heap-order property only holds for parent-child relationships. For example, the first level below the root contains the members 5 and 9, but node 5 has a child (8) that is smaller than node 5's own sibling (9). That is to say, the ordering doesn't necessarily extend across each level, just to the localized parent-child relationship. In this way the heap is said to be (partially) ordered, but not sorted.
2) The Complete Tree Property:
Every level of the tree is completely filled except possibly the last, and any nodes at the last level h reside in the left-most positions.
The complete tree property directly affects the heap’s efficiency characteristics. Consider the following variation of the same set of information as the example in the heap-order property section:
```
          3
         /
        5
       / \
      8   9
     / \
    11  12
   /
  14
```
Technically speaking, this example represents the same information as the original example. However, the number of checks you need to perform to find a value increases with the height of the heap, and without the complete tree property the height is no longer guaranteed to be log(n). Additionally, without the guarantee that the left-most child is populated first, you effectively double the number of child checks for nodes with only one child. Indeed, if you extend this example out to its logical conclusion you'd end up with a sorted list! That would spoil all the asymptotic advantages of the heap.
Heap Types
The previous section does not explicitly state the number of children for each heap node. Binary heaps, with at most two children per parent, represent the most common implementation of a heap. The association between heap and binary heap is so pervasive that most people define heaps as binary heaps. Ternary heaps (3-heaps), quaternary heaps (4-heaps), etc. also exist, and are referred to more generally as d-ary heaps.
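For array-backed d-ary heaps, the parent/child index arithmetic generalizes the binary case in a straightforward way. A small sketch (the function names are illustrative):

```rust
// Index arithmetic for a 0-indexed, array-backed d-ary heap.
// With d = 2 these reduce to the familiar binary-heap formulas.
fn parent(index: usize, d: usize) -> usize {
    (index - 1) / d
}

fn first_child(index: usize, d: usize) -> usize {
    // A node's children occupy indices d*index + 1 through d*index + d,
    // when those indices are in bounds.
    d * index + 1
}
```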
Heaps represent a maximally efficient implementation of the basic priority queue ADT. Indeed, the relationship is so strong that the two terms are often used interchangeably, regardless of the actual implementation of a priority queue. The heap name itself comes from a visualization of the hierarchical structure. The root, i.e. the highest priority item, is always at the “top of the heap”.
The original example diagram uses a single value to represent each node, but a heap often stores a key-value pair in each of its nodes: one value to determine the priority, and one to hold the data. Consider the following evolution of the original diagram:
```
                  (3, Peter)
                 /          \
        (5, Sleve)          (9, Bobson)
        /        \          /         \
(8, Dichael) (12, Damiel) (11, Cork) (14, Flank)
```
This represents the same partial ordering (priority), but now each node contains an arbitrary value, in this case a name.
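One way to model such entries in Rust, as a sketch using a hypothetical Entry type, is to implement the ordering traits against the priority alone, so that the element comparisons in the sifting code later on this page would work unchanged:

```rust
use std::cmp::Ordering;

// A sketch of a heap entry that orders on priority alone, ignoring the value.
struct Entry<P: Ord, V> {
    priority: P,
    value: V,
}

impl<P: Ord, V> PartialEq for Entry<P, V> {
    fn eq(&self, other: &Self) -> bool {
        self.priority == other.priority
    }
}

impl<P: Ord, V> Eq for Entry<P, V> {}

impl<P: Ord, V> PartialOrd for Entry<P, V> {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

impl<P: Ord, V> Ord for Entry<P, V> {
    fn cmp(&self, other: &Self) -> Ordering {
        self.priority.cmp(&other.priority)
    }
}
```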
Implementations
This section covers how to make heap go brrr.
Sifting
The key to building and maintaining a heap is sifting operations. Sifting operations restore heap properties after insertions or removals in the backing structure. These operations are sometimes referred to as bubbling, percolating, or swimming, depending on the direction and context. This text uses the term sifting because it applies to both upward and downward movement through the heap, unlike bubbling, which is typically associated only with upward motion.
There are two primary sifting operations: sift up and sift down.
Sift Up
Sift up operations are used for node insertions. When you add a node to a heap, you do so at the last spot in the backing array. From there you sift the node upwards in the tree to an appropriate spot that satisfies the heap-order property. The sift up operation takes the heap itself and the index of the node you’re sifting up, which changes as it’s swapped up the tree.
```rust
fn sift_up(&mut self, mut index: usize) {
    // Only sift up if the element is not the root (index 0)
    while index > 0 {
        // Calculate the parent's index
        let parent_index = (index - 1) / 2;

        // Perform comparisons to determine the swap target
        // NOTE: Switch < to > for a max heap
        if self.arena[index] < self.arena[parent_index] {
            self.arena.swap(index, parent_index);
            // Move up to the parent's old position to continue sifting
            index = parent_index;
        } else {
            break; // No-op, heap is heapy 🍃🧘
        }
    }
}
```
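With sift_up in place, insertion reduces to an append followed by a sift. A minimal sketch of push(), assuming the same Vec-backed BinHeap used throughout this section (push itself isn't shown elsewhere on this page):

```rust
pub fn push(&mut self, value: T) {
    // Append at the last spot to preserve the complete tree property...
    self.arena.push(value);
    // ...then restore the heap-order property by sifting the new node up
    self.sift_up(self.arena.len() - 1);
}
```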
Sift Down
The most fundamental use for sift down operations occurs when you remove an element from the heap. Remember that when you remove the “next” value from a heap (the min value for a min heap, the max value for a max heap) you do so from the heap’s root. When you remove the root node, you initially swap in the last item in the heap, which maintains the complete tree property. From there you need to sift the new root node down to a spot that maintains the heap-order property. The sift down operation takes the heap itself and the index of the node you’re sifting down. Clever readers might ask: why not hard-code the starting index to zero and omit it as an argument if you’re always starting at the root? The short answer is that the heapify operation (covered in the next section) needs to perform this sift operation for more than just the root node, meaning that removals aren’t the only place where sift down operations are used.
```rust
fn sift_down(&mut self, mut index: usize) {
    loop {
        // Calculate the index's children
        let left_child = 2 * index + 1;
        let right_child = 2 * index + 2;
        let mut target = index;

        // Perform comparisons to determine the swap target
        // NOTE: Switch < to > for a max heap
        if left_child < self.arena.len() && self.arena[left_child] < self.arena[target] {
            target = left_child;
        }
        if right_child < self.arena.len() && self.arena[right_child] < self.arena[target] {
            target = right_child;
        }

        if target != index {
            self.arena.swap(index, target);
            index = target;
        } else {
            break; // No-op, heap is heapy 🍃🧘
        }
    }
}
```
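Root removal then follows the swap-and-sift recipe described above. A minimal sketch of pop(), again assuming the Vec-backed BinHeap (this method isn't shown elsewhere on this page):

```rust
pub fn pop(&mut self) -> Option<T> {
    if self.arena.is_empty() {
        return None;
    }
    // Swap the root with the last element so removal keeps the tree complete
    let last = self.arena.len() - 1;
    self.arena.swap(0, last);
    let root = self.arena.pop();
    // Restore the heap-order property by sifting the new root down
    if !self.arena.is_empty() {
        self.sift_down(0);
    }
    root
}
```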
Heap Construction & Floyd’s Algorithm
There are two common ways to build a heap from a sequential structure. The simplest is to use n push() operations to create a heap from a list of entries. Assuming each push() operation runs in O(log(n)) time, this simple approach results in O(n * log(n)) heap construction. This might be the only way to construct a heap from a linked list, but if you build a heap from an indexed structure, you can simply “heapify” it “from the bottom” in place in O(n) time.
The “heapify” or “bottom-up” construction, otherwise known as Floyd’s algorithm, works by applying a sift_down() operation to all non-leaf nodes in the backing structure, starting from the lowest non-leaf level (the level just above the leaf nodes) and progressing to the heap’s root. This works by assuming the input collection of non-null elements already represents a complete binary tree. You only need to apply the process to non-leaf nodes because each leaf, as a single element, already satisfies both of the heap properties, so you can skip them entirely. To find the last non-leaf node in a complete binary tree H (the first node to sift, counting from the bottom), you start at index (n/2) - 1, i.e. H[(n/2) - 1] for a 0-indexed list/heap. From there, simply work your way backwards through the indexes, sifting down each node, until you get to the root.
In Rust, this might look something like the following:
```rust
pub fn from_slice(list: &[T]) -> Self
where
    T: Clone,
{
    let mut heap = BinHeap::new();
    heap.arena = Vec::from(list);
    heap.size = heap.arena.len(); // keep the size field in sync with the arena
    let start = (list.len() / 2).saturating_sub(1);
    for i in (0..=start).rev() {
        heap.sift_down(i);
    }
    heap
}
```
```rust
pub fn from_vec(list: Vec<T>) -> Self {
    let size = list.len();
    let mut heap = BinHeap { arena: list, size };
    let start = (size / 2).saturating_sub(1);
    for i in (0..=start).rev() {
        heap.sift_down(i);
    }
    heap
}
```
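As a usage sketch, heapifying the values from the original diagram (deliberately scrambled to show that the input need not be ordered):

```rust
// Builds a valid min heap in O(n); the arena's root slot ends up holding 3.
let heap = BinHeap::from_vec(vec![12, 9, 3, 14, 5, 11, 8]);
```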
The entire “heapify” process requires roughly n/2 sift_down() operations, each of which takes at worst O(log(n)) time. On the surface, this may appear to be an n/2 * log(n) = O(n * log(n)) operation, but in reality most of the operations are single swaps. Only a few log(n)-deep sifts are performed near the top of the heap, so the total cost of a heapify operation works out to O(n), even in the worst case. The true operational (temporal) cost is Σ (k=1 to log(n)) ( (n / 2^k) * k ), if you want to get pedantic; since Σ k / 2^k converges to 2, the whole sum is bounded by 2n.
Adaptable Priority Queues
Keen readers will by now have noticed that this site contains both this Heaps page and a separate Priority Queues page under the Composite Structures module. This page asserts that heaps represent the maximally efficient implementation of a priority queue, so why are there two pages? The reason is that in the “real world” (calm down, Neo) you may want to change an arbitrary node’s priority, or even remove it altogether. So far, this section has dealt with simple heap ordering by implementing some comparator on the insertion object. Insertion objects may be arbitrarily complex, including separate ordering values and internal data values. Additionally, more advanced implementations require some efficient way to track elements in the heap without O(n) traversal operations. As a result, adaptable priority queues require a composite implementation which uses a heap as the basic data storage container along with an associative structure (usually a kind of map, covered in a subsequent module) to provide efficient node lookups. As it turns out, priority queues are one of the more common structures used in real-world problem solving, so the material is split to accommodate basic and more advanced implementations.
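The composite shape might be sketched as follows. The field names are illustrative, and a full implementation would also have to update the position map inside every swap performed by the sift operations:

```rust
use std::collections::HashMap;
use std::hash::Hash;

// A sketch of an adaptable priority queue: a heap for ordering plus a
// map from key to heap index, so an arbitrary entry can be located and
// re-prioritized in O(log(n)) rather than found by an O(n) traversal.
struct AdaptablePQ<K: Hash + Eq + Clone, P: Ord> {
    arena: Vec<(K, P)>,           // the heap's backing storage
    positions: HashMap<K, usize>, // key -> current index in `arena`
}
```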
Introduction to Sorting
The study of heaps, and specifically the “heapify” algorithm, provides a natural segue into sorting, because heap sort falls almost directly out of the operations covered above. Heap sort guarantees O(n * log(n)) time complexity with the added advantage of being performed “in-place”, i.e. requiring no additional memory beyond the input array. In practice, however, heap sort often underperforms compared to algorithms like merge sort and quick sort. While those may use more memory (especially merge sort), they tend to make fewer comparisons and swaps, benefit from better cache locality, and are often faster on large datasets. As a result, merge sort and quick sort are more commonly used in real-world applications, despite heap sort’s strong theoretical properties.
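To close the loop, here is a minimal in-place heap sort sketch built from a max-heap variant of the sift_down shown earlier, written as free functions over a slice rather than BinHeap methods:

```rust
// Sorts a slice in place: heapify into a max heap, then repeatedly
// swap the max to the end of the active region and shrink the region.
pub fn heap_sort<T: Ord>(data: &mut [T]) {
    let n = data.len();
    // Build a max heap in place (Floyd's algorithm)
    for i in (0..n / 2).rev() {
        sift_down(data, i, n);
    }
    // Repeatedly move the current max to the end and re-sift the root
    for end in (1..n).rev() {
        data.swap(0, end);
        sift_down(data, 0, end);
    }
}

fn sift_down<T: Ord>(data: &mut [T], mut index: usize, len: usize) {
    loop {
        let left = 2 * index + 1;
        let right = 2 * index + 2;
        let mut target = index;
        // NOTE: > comparisons make this the max-heap variant
        if left < len && data[left] > data[target] {
            target = left;
        }
        if right < len && data[right] > data[target] {
            target = right;
        }
        if target == index {
            break;
        }
        data.swap(index, target);
        index = target;
    }
}
```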