
Queues

Queues are list structures with specific rules about insertion and deletion, much like stacks. Unlike stacks, there are several major sub-variants that define specific queuing logic. The basic queue operates with first in, first out (FIFO) logic, like a line to order food. As people arrive they get in line, and the cashier helps each customer in “first come, first served” order. That is to say, you enter the queue at the “back” of the line and are served at the “front” of the line. The second major variant is the priority queue, where the ordering of elements is adjusted based on some priority qualification. To extend the service analogy, people may arrive to collect pick-up orders, or perhaps the restaurant serves seniors first. Priority queuing breaks the “first come, first served” logic. For all variants, a properly designed interface uses the standard queuing terms: enqueue, dequeue, and peek.

In computing you may see queues in how a networked printer handles print jobs, or how a server responds to more requests than it can process. Queues are also used heavily in graphs, which are covered elsewhere. Beyond these simple examples there are many more complex and specialized uses for queue structures. As a result, it may be helpful to divide queue implementations into memory-bound and unbounded types. For example, if your design requirements stipulate a fixed number of possible queue slots, it may be easier to implement a memory-bound solution with arrays. These implementations will likely want to make use of index wrapping for efficient O(1) operations; otherwise you’d have to shift elements left with every dequeue/removal, resulting in O(n) dequeue/removal operations. Unbounded implementations include dynamic arrays and linked lists, but these need special care to make sure you’re using them optimally. For example, linked-list implementations require a potentially expensive allocation for each enqueue operation, and vector-based implementations require care to keep dequeue operations from degrading to O(n).

Examples

Because it’s almost always better to use array-based structures over linked structures for basic lists, you may be wondering how to keep dequeue operations efficient in a queue. For example, if you shift elements left when dequeuing, you bump the operation up to at least O(n). To avoid this you can implement what is known as a circular queue. The idea behind a circular queue is to leave elements in place and track the front and back of the queue as array indexes. However, you also need some wrapping mechanism to avoid artificially shrinking the number of available slots. For example, if you enqueue five elements into an array-based list with 10 slots and then dequeue four, the queue contains only one item, yet the back index suggests that only five slots remain, despite the first four having been freed. To incorporate the newly freed slots at the beginning of the array you must wrap the indexes, and the obvious way to do that is with modulus operations for both enqueuing and dequeuing. This retains efficient O(1) operations without shifting the remaining elements after a dequeue. The example below is a memory-bound implementation of a circular queue: the capacity is specified at creation and enqueue operations are size-checked, so although Vec is technically an unbounded (dynamic array) type, the structure itself remains bounded.

pub struct CircularQueue<T> {
    pub data: Vec<Option<T>>, // Store elements as `Option` to allow reusing slots
    front: usize,
    back: usize,
    size: usize,
    capacity: usize,
}
impl<T> CircularQueue<T> {
    /** Creates a queue that contains `capacity` number of elements in O(1) time */
    pub fn new(capacity: usize) -> CircularQueue<T> {
        let mut data = Vec::with_capacity(capacity);
        data.resize_with(capacity, || None); // Fills `data` with `None`
        CircularQueue {
            data,
            front: 0,
            back: 0,
            size: 0,
            capacity,
        }
    }
    /** Adds an element to the back of the queue in O(1) time */
    pub fn enqueue(&mut self, item: T) -> Result<(), &'static str> {
        // Ensures that the queue cannot take more elements than its capacity
        if self.size == self.capacity {
            return Err("Queue is full");
        }
        // Calculates the next available position, writes to it, and increases size
        self.back = (self.front + self.size) % self.capacity;
        self.data[self.back] = Some(item);
        self.size += 1;
        Ok(())
    }
    /** Removes and returns the front element of the queue in O(1) time */
    pub fn dequeue(&mut self) -> Option<T> {
        // Checks if the queue is empty and returns None
        if self.size == 0 {
            return None;
        }
        // Otherwise take() the value from the front, leaving None in its place
        let item = self.data[self.front].take();
        // Advances the front index with wrapping
        self.front = (self.front + 1) % self.capacity;
        self.size -= 1;
        item
    }
}
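As a quick sanity check, the following sketch exercises the wrap-around behavior. It condenses the structure above into a self-contained snippet (the `back` field is folded into a local variable); the `demo` function name is illustrative.

```rust
// Condensed variant of the CircularQueue above, so this example compiles on its own
pub struct CircularQueue<T> {
    data: Vec<Option<T>>,
    front: usize,
    size: usize,
    capacity: usize,
}

impl<T> CircularQueue<T> {
    pub fn new(capacity: usize) -> Self {
        let mut data = Vec::with_capacity(capacity);
        data.resize_with(capacity, || None);
        CircularQueue { data, front: 0, size: 0, capacity }
    }
    pub fn enqueue(&mut self, item: T) -> Result<(), &'static str> {
        if self.size == self.capacity {
            return Err("Queue is full");
        }
        // The next free slot, wrapping past the end of the array
        let back = (self.front + self.size) % self.capacity;
        self.data[back] = Some(item);
        self.size += 1;
        Ok(())
    }
    pub fn dequeue(&mut self) -> Option<T> {
        if self.size == 0 {
            return None;
        }
        let item = self.data[self.front].take();
        self.front = (self.front + 1) % self.capacity;
        self.size -= 1;
        item
    }
}

/// Fills a 3-slot queue, dequeues one element, and shows that the
/// next enqueue reuses the freed slot by wrapping back to index 0
pub fn demo() -> Vec<i32> {
    let mut q = CircularQueue::new(3);
    for n in [1, 2, 3] {
        q.enqueue(n).unwrap();
    }
    assert!(q.enqueue(4).is_err()); // full: the capacity check rejects a 4th item
    assert_eq!(q.dequeue(), Some(1)); // frees slot 0
    q.enqueue(4).unwrap(); // (front + size) % capacity wraps to slot 0
    let mut out = Vec::new();
    while let Some(v) = q.dequeue() {
        out.push(v);
    }
    out // FIFO order is preserved across the wrap
}
```

Note how the enqueue after the first dequeue lands back in slot 0: that is exactly the modulus wrapping described earlier, with no element shifting involved.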

Standard Library Implementations

Like the stack, the queue is a fairly standard ADT, so types and operations are included in both Rust’s and Java’s standard libraries. You can get away with using Java’s java.util.ArrayList or Rust’s std::vec::Vec, but both languages have types better suited to queue structures: LinkedList and ArrayDeque in Java, and std::collections::VecDeque in Rust. The following reference table assumes you’re using these specialized types.

Operation | Explanation                                                 | Rust VecDeque | Java ArrayDeque
Enqueue   | Adds an item to the “back” of the queue.                    | push_back()   | addLast()
Dequeue   | Returns (and removes) the “front” element of the queue.     | pop_front()   | removeFirst()
First     | Returns a reference to the first element of the queue.      | front()       | getFirst()
Size      | Returns how many items are in the queue.                    | len()         | size()
Is empty  | Returns a boolean indicating whether the queue is empty.    | is_empty()    | isEmpty()
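A short sketch walking the Rust column of the table; the `demo` function name is illustrative:

```rust
use std::collections::VecDeque;

/// Exercises the queue operations from the reference table using VecDeque
pub fn demo() -> (Option<char>, usize, bool) {
    let mut queue: VecDeque<char> = VecDeque::new();
    // Enqueue: push onto the "back" of the queue
    queue.push_back('a');
    queue.push_back('b');
    queue.push_back('c');
    // Dequeue: pop from the "front" in FIFO order
    assert_eq!(queue.pop_front(), Some('a'));
    // First: peek at the front element without removing it
    let first = queue.front().copied();
    // Size and is-empty checks
    (first, queue.len(), queue.is_empty())
}
```

VecDeque is itself implemented as a growable ring buffer, so it applies the same index-wrapping trick as the circular queue above, just without the fixed capacity.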

The Priority Queue ADT

Priority queues are lists where each element has an associated key. The element’s key, rather than its insertion position, determines the priority by which the element is handled. This is in contrast to the FIFO (first in, first out) structuring of a traditional queue. It is common for keys to be numbers, but any comparable object may be used as a key. Traditionally the minimal key represents the highest priority. For example, if you use integers as keys, an element with a key of 4 has higher priority than an element with a key of 7. The ADT allows multiple keys of the same value; if multiple elements contain equivalent keys, the list’s operations may return equal-keyed elements in arbitrary order.

The ADT model is defined by the following operations:

  • enqueue(k,v) Creates an entry with key k and value v
  • min() Returns, but does not remove, an entry (k,v) with the lowest k value, else Null
  • remove_min() Returns and removes an entry (k,v) with the lowest k value, else Null
  • size() Returns the size of the queue
  • is_empty() Returns a Boolean indicating whether the list is empty
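The ADT above can be sketched as a Rust trait. The trait and struct names here are illustrative (this is not the trait used in the later snippets), and the naive unsorted-Vec implementation exists only to exercise the interface:

```rust
/// A minimal sketch of the priority queue ADT; `K: Ord` so the lowest key is well defined
pub trait PriorityQueueAdt<K: Ord, V> {
    /// Creates an entry with key `k` and value `v`
    fn enqueue(&mut self, k: K, v: V);
    /// Returns, but does not remove, a value with the lowest key, else None
    fn min(&self) -> Option<&V>;
    /// Returns and removes a value with the lowest key, else None
    fn remove_min(&mut self) -> Option<V>;
    /// Returns the size of the queue
    fn size(&self) -> usize;
    /// Indicates whether the queue is empty
    fn is_empty(&self) -> bool {
        self.size() == 0
    }
}

/// A naive unsorted-Vec implementation, just to exercise the trait
pub struct UnsortedPq<K, V> {
    entries: Vec<(K, V)>,
}

impl<K: Ord, V> UnsortedPq<K, V> {
    pub fn new() -> Self {
        UnsortedPq { entries: Vec::new() }
    }
}

impl<K: Ord, V> PriorityQueueAdt<K, V> for UnsortedPq<K, V> {
    fn enqueue(&mut self, k: K, v: V) {
        self.entries.push((k, v)); // O(1): append to the end
    }
    fn min(&self) -> Option<&V> {
        // O(n): every entry's key must be compared
        self.entries.iter().min_by(|a, b| a.0.cmp(&b.0)).map(|(_, v)| v)
    }
    fn remove_min(&mut self) -> Option<V> {
        if self.entries.is_empty() {
            return None;
        }
        // O(n): scan for the position of the lowest key, then remove it
        let mut idx = 0;
        for i in 1..self.entries.len() {
            if self.entries[i].0 < self.entries[idx].0 {
                idx = i;
            }
        }
        Some(self.entries.remove(idx).1)
    }
    fn size(&self) -> usize {
        self.entries.len()
    }
}
```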

Implementing Priority Queues

Priority queues allow arbitrary key types, but those types must be comparable in a meaningful way. The comparison rule you choose must define a total (linear) ordering scheme. To implement an ordering scheme in Java you can use the Comparator or Comparable interfaces, and in Rust you can use the PartialOrd and Ord traits. The biggest difference is that only Comparable and Ord guarantee a total ordering of types, whereas Comparator and PartialOrd may require additional constraints to ensure total ordering of key types. If your types have a “natural order” (such as integers) these may be easily derived, but partially ordered types may require more involved rule definitions to guarantee a total order for your structure. For example, types like String (and UUIDs) satisfy the Ord trait in Rust, but floating-point types do not.
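As a concrete sketch of the Rust side: deriving Ord works when every field has a natural total order, while f64 must be adapted before it can serve as a key (here via f64::total_cmp). The JobKey and Priority types are illustrative, not part of any library:

```rust
use std::cmp::Ordering;

// Deriving Ord works because every field is itself totally ordered;
// the derive compares fields lexicographically, top to bottom
#[derive(PartialEq, Eq, PartialOrd, Ord, Debug)]
pub struct JobKey {
    priority: u32, // compared first
    sequence: u64, // tie-breaker preserves arrival order
}

// f64 is only PartialOrd (NaN breaks totality), so a wrapper must
// define a total order by hand; total_cmp provides one
#[derive(Debug)]
pub struct Priority(pub f64);

impl PartialEq for Priority {
    fn eq(&self, other: &Self) -> bool {
        self.0.total_cmp(&other.0) == Ordering::Equal
    }
}
impl Eq for Priority {}
impl PartialOrd for Priority {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}
impl Ord for Priority {
    fn cmp(&self, other: &Self) -> Ordering {
        self.0.total_cmp(&other.0)
    }
}

pub fn demo() -> (bool, bool) {
    let a = JobKey { priority: 1, sequence: 9 };
    let b = JobKey { priority: 2, sequence: 1 };
    // The derived lexicographic order compares the priority field first
    let derived_ok = a < b;
    // The wrapper gives floats a usable total order
    let float_ok = Priority(1.5) < Priority(2.5);
    (derived_ok, float_ok)
}
```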

Once you have your keys figured out, you’ll need to decide how to implement the base structure of the queue. A priority queue is just a list, right? Why not use one of the (many) list implementations already covered? Indeed, this is suddenly sounding very similar to the focus of the Fundamental Structures module. Before getting too far ahead of yourself, consider the following base-structure analysis. The details are for the Rust language and use the Vec type for array-based list structures, but the results will be very similar across languages.

  • Unsorted array-based list (UAL):
    - enqueue(k, v): O(1). Add the entry to the end of the list.
    - remove(k, v) / pop() (min or max): O(n). The actual removal is O(1), but finding the entry and shifting elements requires O(n).
    - min() / max(): O(n). Each of the n − 1 other entries must be compared.
    - size(): O(1). Checks the base list property, assuming you’re using an off-the-shelf array-based implementation like an ArrayList or Vec.

  • Sorted array-based list (SAL):
    - enqueue(k, v): O(n). Finding the correct insertion point and shifting elements is O(n).
    - remove(k, v) / pop() (min or max): O(1), provided the target entry is at the end of the list.
    - min() / max(): O(1). The min/max is the head/tail of the list.
    - size(): (Same as UAL)

  • Unsorted linked list (ULL):
    - enqueue(k, v): O(1). Simply add the entry at one of the ends of the list.
    - remove(k, v) / pop() (min or max): O(n). Linked-list removals are O(1), but getting to the right spot takes O(n) time.
    - min() / max(): O(n). Each entry must be compared.
    - size(): O(n). The operation must traverse the entire list, unless you maintain a counter, which makes this an O(1) operation.

  • Sorted (doubly-) linked list (SLL):
    - enqueue(k, v): O(n). Navigating to the correct insertion point is O(n).
    - remove(k, v) / pop() (min or max): O(1). The target entry is either the head or tail of the list.
    - min() / max(): O(1). The min/max is the head/tail of the list.
    - size(): (Same as ULL)

Operation     | UAL  | SAL  | ULL  | SLL
enqueue(k, v) | O(1) | O(n) | O(1) | O(n)
dequeue()     | O(n) | O(1) | O(n) | O(1)
min()/max()   | O(n) | O(1) | O(n) | O(1)
size()        | O(1) | O(1) | O(1) | O(1)
is_empty()    | O(1) | O(1) | O(1) | O(1)

Notice that there is a fairly even tradeoff between enqueue and dequeue operations across the considered implementations. Given that both operations are likely to happen at a ratio close to 1:1, the general convenience and performance of an array-based implementation probably ekes out the competition here, though there is no decisive difference.

The following snippet illustrates a simple Vec-based implementation of the priority queue structure. Only code snippets are provided here; see the project’s repo for details. The queue itself is a struct with a single data field, where each element is an Entry<K, V> type. This is a very simple example, so the most notable operation is the O(n) enqueue() operation, which provides the sorting logic. Notice that this operation wraps the Vec::insert() method, which itself runs in O(n) time because it shifts the remaining values to make room for the inserted value.

pub struct Entry<K, V> {
    key: K,
    value: V,
}
...
pub struct SortedVecQueue<K, V> {
    pub data: Vec<Entry<K, V>>
}
...
impl<K, V> PriorityQueue<K, V> for SortedVecQueue<K, V>
where K: Ord {
    type Entry = Entry<K, V>;
    // NOTE: Wraps Vec::insert(), which runs in O(n) time
    fn enqueue(&mut self, key: K, value: V) -> Result<(), Box<dyn std::error::Error>> {
        if Self::check_key(&key) {
            let mut insertion_index = self.data.len();
            // Finds the correct insertion index; the list is kept in
            // descending order so the minimum key sits at the end
            for (i, e) in self.data.iter().enumerate() {
                if key >= e.key {
                    insertion_index = i;
                    break;
                }
            }
            let entry = Entry::new(key, value);
            self.data.insert(insertion_index, entry); // Vec::insert() shifts elements right
            Ok(())
        } else {
            Err("Invalid key".into())
        }
    }
...

This example test illustrates how the list can be used.

#[test]
pub fn example() {
    use crate::lists::queues::priority_queue::sorted_list::{
        PriorityQueue,
        SortedVecQueue
    };
    // Instantiates a new list, declaring the K and V types
    let mut list: SortedVecQueue<usize, &str> = SortedVecQueue::new();
    // Pushes a bunch of values with an associated key priority to the list
    list.enqueue(3, "Peter").ok();
    list.enqueue(5, "Bobson").ok();
    list.enqueue(2, "Brain").ok();
    list.enqueue(4, "Dingus").ok();
    list.enqueue(6, "Dorkus").ok();
    // Checks that the list is taking entries properly,
    // and that the peek() operation matches expectations
    assert_eq!(list.size(), 5);
    assert_eq!(list.peek(), Some("Brain").as_ref());
    // Creates a "sorted" list of dequeued items
    let mut queue: Vec<&str> = Vec::new();
    // Using the while loop avoids having to deal with partial moves in a for loop
    while !list.is_empty() {
        if let Some(v) = list.dequeue() {
            queue.push(v)
        }
    }
    // Checks that the final result of the queue logic is correct
    assert_eq!(queue, vec!["Brain", "Peter", "Dingus", "Bobson", "Dorkus"])
}

But what if there were a better way? What if you weren’t satisfied with trading linear and constant time between enqueue and dequeue operations? The good news is that you don’t have to be! You can implement this structure with a heap to provide enqueue and dequeue operations that each run in O(log n) time. Heap structures are technically trees, but it is possible to implement binary heaps relatively easily with an array-based structure. Best of both worlds! Though not strictly required, you can also sort a heap with heap sort in O(n log n) time. What is the benefit over the O(1)/O(n) split of the sorted-list options? Using a binary heap makes both enqueue and dequeue O(log n), so neither operation degrades to linear time as the queue grows. Fancy!
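To sketch the heap approach with the standard library: std::collections::BinaryHeap is a max-heap, so wrapping keys in std::cmp::Reverse turns it into the min-priority queue described above. The `demo` function name is illustrative; the data mirrors the earlier test.

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

/// A min-priority queue over (key, value) pairs backed by a binary heap;
/// push and pop each run in O(log n), and peeking at the minimum is O(1)
pub fn demo() -> Vec<&'static str> {
    let mut heap: BinaryHeap<(Reverse<usize>, &str)> = BinaryHeap::new();
    for (k, v) in [(3, "Peter"), (5, "Bobson"), (2, "Brain"), (4, "Dingus"), (6, "Dorkus")] {
        heap.push((Reverse(k), v)); // O(log n) enqueue
    }
    // Reverse flips the comparison, so pop() yields the smallest key first
    let mut out = Vec::new();
    while let Some((Reverse(_key), value)) = heap.pop() {
        out.push(value); // O(log n) dequeue
    }
    out
}
```

Dequeuing everything yields the same priority order as the sorted-Vec example, but without paying O(n) on every enqueue.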