
Lists

Sequential structures are so fundamental to programming and computer science that it can be difficult to choose where to start. Just about every non-trivial program uses a list, either explicitly as part of the program or implicitly through the runtime’s use of a call stack (a type of list) during execution.

This section focuses on the list ADT and covers linear, sequence-based data structures. The companion “Core Lists” material on GitHub illustrates several list variations, some of which use even more primitive list constructs. The material covers several foundational concepts that underpin even more complex data structures, including array constructs, array-based types like dynamic arrays, and several linked list structures for academic purposes.

The linear nature of the list structures discussed here means that each element of a list (besides the first and last elements) has a single predecessor and a single successor that you can follow sequentially from start to finish. Imagine a list with arbitrarily complex data elements A0, A1, A2, …, An-1. In this way you can say that lists are ordered. It’s important to note that the term “ordered” refers to the memory structure of the list itself and does not necessarily extend to the contents of the list’s elements. That is to say you can navigate a list’s elements, forwards or backwards (or up/down), in sequential order regardless of the data that the elements contain. Lists with elements that have data that exist in some comparative, monotonically ascending or descending arrangement are considered sorted. All linear lists are ordered, but only some linear lists are sorted.

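The distinction between ordered and sorted can be sketched in a few lines of Rust (using Vec as a stand-in for a generic list):

```rust
fn main() {
    // This list is ordered (every element has a stable position and a
    // single predecessor/successor) but its data is not sorted
    let mut scores = vec![42, 7, 99, 15];
    assert_eq!(scores[0], 42);

    // After sorting, the list is both ordered and sorted
    scores.sort();
    assert_eq!(scores, vec![7, 15, 42, 99]);
}
```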
The two most common ways to implement lists are indexed lists and linked lists. Each approach has its benefits and drawbacks, though implementations using indexed constructs/types are probably the preferred approach for most solutions. Indexed lists make it easy to access elements sequentially by some iterative (or recursive) methodology. You can also just as easily access arbitrarily positioned elements directly in O(1) time, making those operations very efficient. Implementing indexed lists is easy with array constructs or array-based types, and this text mostly uses Rust’s vector type.

Linked lists, by contrast, are structures made of individual data objects, called nodes, that contain pointers to other nodes. Linked lists do not have direct indexes and instead deal in relative terms such as “first”, “next”, “previous”, and “last” elements. As a result it’s not possible to directly access arbitrarily positioned nodes in the list without tracking a pointer to that node or its data directly. Instead you commonly have to traverse the list node-by-node in sequential order to operate on arbitrary elements. To traverse a linked list you start with the “first” element A0 or the “last” element An-1 and work your way through “next” and/or “previous” pointers to the desired node. Positional list ADTs are a common list sub-type that allow you to track your place in the list with a position or “cursor”. It’s common to use linked lists when implementing simple linear positional lists. Later sections detail non-linear positional lists such as trees and skip lists where additional structures, including array-based implementations, may make more sense.

The List ADT

The basic list ADT includes the following (non-exhaustive) set of functions:

  • add(e) Adds an element e to the list (typically an append operation)
  • remove(e) Removes an element e from the list
  • find(e) Returns a Boolean indicating whether element e is in the list
  • size() Returns the number of elements
  • clear() Clears/empties the list
  • is_empty() Returns a Boolean indicating whether the list is empty

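As a sketch, the ADT above can be captured as a Rust trait. The function names mirror the list above, but the exact signatures (and the Vec-backed implementation) are this text’s own illustration rather than any standard library API:

```rust
/// A minimal sketch of the basic list ADT as a Rust trait
pub trait List<E: PartialEq> {
    fn add(&mut self, e: E);             // Appends an element
    fn remove(&mut self, e: &E) -> bool; // Removes the first match, if any
    fn find(&self, e: &E) -> bool;       // Membership test
    fn size(&self) -> usize;             // Number of elements
    fn clear(&mut self);                 // Empties the list
    /// Default implementation defined in terms of size()
    fn is_empty(&self) -> bool {
        self.size() == 0
    }
}

/// A Vec-backed implementation to show how the ADT maps onto an indexed type
impl<E: PartialEq> List<E> for Vec<E> {
    fn add(&mut self, e: E) {
        self.push(e); // Typical append semantics
    }
    fn remove(&mut self, e: &E) -> bool {
        // Removes the first matching element, shifting later elements left
        if let Some(i) = self.iter().position(|x| x == e) {
            let _ = Vec::remove(self, i);
            true
        } else {
            false
        }
    }
    fn find(&self, e: &E) -> bool {
        self.contains(e)
    }
    fn size(&self) -> usize {
        self.len()
    }
    fn clear(&mut self) {
        Vec::clear(self);
    }
}
```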
There are many list ADT sub-types and special use cases. One common sub-type is the positional list, which adds functionality for dealing with specific locations within the list, called positions. Positions are essentially references or pointers to elements in the list: in an indexed implementation a position is just an index value, while in a linked structure it is a reference or pointer to a node. These functions can be implemented instead of, or in addition to, the basic list ADT functions and include:

  • insert(p, e) Adds an element e to the list at position p
  • set(p, e) Sets the position p to element e
  • get(p) / get_kth(p) Returns the data at position p
  • remove(p) / remove_kth(p) Removes (and optionally returns) an element at position p
  • find(e) Alternative implementation that typically returns the first position of an element e

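In an indexed implementation these positional operations map almost directly onto index arithmetic. A short sketch using Vec, where a position p is just a usize index:

```rust
fn main() {
    let mut list = vec!["a", "c", "d"];

    // insert(p, e): adds an element at position p, shifting later elements right
    list.insert(1, "b");
    assert_eq!(list, vec!["a", "b", "c", "d"]);

    // get(p): returns the data at position p
    assert_eq!(list[2], "c");

    // set(p, e): overwrites the element at position p
    list[2] = "C";

    // remove(p): removes (and returns) the element at position p
    assert_eq!(list.remove(2), "C");

    // find(e): returns the first position of a matching element
    assert_eq!(list.iter().position(|&e| e == "d"), Some(2));
}
```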
When implementing positional lists you can use iterators, as Java does with its LinkedList<E> type, or cursors, as Rust does with its LinkedList<T> type. The main difference is semantics: Java supports iterators with mutable access to the elements they point to, whereas Rust guarantees that iterators either provide immutable references to their elements or consume them outright, and instead encourages you to implement cursors for mutable access to members of a collection. The sticking point is what kind of knowledge or access the traversal object has over the elements of the collection.

Examples

This text uses a podium (or scoreboard) model to illustrate several concrete implementations based on the list ADT. The implementations use both indexed and linked list structures. For efficiency (and reader attention) this text only provides code snippets. Full code examples (written in Rust) can be found on GitHub.

There is no standard way to define a podium or scoreboard list. For example, the scoreboard for an athletic event may require additional time-formatting elements, a videogame may require additional points-arithmetic components, and so on. For academic purposes this text defines a simple podium list interface that allows a user to:

  • Check the list’s size (number of significant elements)
  • Insert an entry into the list
  • Print both the entire list and its top three entries
  • Maintain the list in a real-time sorted state
  • Remove a name and its associated score from the list:
    • without leaving any indexes blank (for indexed implementations) or
    • while retaining structural integrity (for linked list implementations)
  • Update an entry’s score, maintaining a real-time sorted state by score
  • Allow optional scores to be reset after the entry is added to the list (advanced implementations)

Each subsequent criterion generally represents a slightly more difficult ask. When designing and implementing your programs (or libraries) it can be helpful to provide similarly abstract models, requirements, or interfaces before you start coding. This approach helps to frame the implementation based on its interface(s) and how you plan on using it. You can list the criteria by whatever metrics you deem most important, but the point is that this approach allows you to conceptualize the structure first before digging into details that you can get lost in.

The rest of this page explores the concepts behind solutions for this basic model. The material covers indexed and linked list structures in greater detail, and includes a brief introduction to iteration. The next page on stacks and queues builds on these approaches for some additional, commonly-encountered list ADT variants.

Indexed Structures

This section discusses array and array-based constructs and types. It also covers some finer details around using the structures to implement the scoreboard/podium example structure.

Static Arrays

The first list structure most programmers come across is the humble array. Arrays (and array-like structures) are so fundamental to programming that they are often included as built-in language constructs where they often have their own dedicated syntax. The actual definition of arrays can vary, but it’s common to define classical (static) array structures as fixed-capacity allocations designed to store collections of homogeneously-typed elements that occupy contiguous regions of memory.

Array elements are accessible by index values. The range of index values represents the number of elements you can fit within the allocated space, commonly referred to as the array’s capacity or length. As you add elements to an empty array you change its size, but its capacity remains static. Arrays are commonly 0-indexed, meaning that an array with a capacity of three elements contains indexes 0, 1, and 2. Arrays can also be 1-indexed, but these are less common among modern language constructs.

Classical arrays are commonly limited to a singular “base type”, which gives the elements a known, fixed size at compile time and ensures fast and easy indexing. If you know where the structure starts in memory and how big each element is, then finding the memory address of an individual element becomes a simple calculation. For example, if you declare an array of length 3 that holds an arbitrary sequence of three 32-bit integers (without any metadata or padding), then the array’s 0th, 1st, and 2nd indexes correspond to memory addresses separated by 4-byte (32-bit) offsets. This might look like 0x7fff0000, 0x7fff0004, and 0x7fff0008 on a 64-bit operating system. To access the values at those addresses you’d use the array’s indexes, not the memory addresses directly, as those addresses are unknown to the source code. Some languages implement more complex logic to allow mixed base types, but those implementations often rely on some generalized base type or pointer to the element’s data instead of directly storing data at the allocated address(es).

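You can observe this address arithmetic directly in Rust. The base address changes from run to run, but the fixed 4-byte stride between i32 elements follows from the contiguous layout:

```rust
use std::mem::size_of;

fn main() {
    let a: [i32; 3] = [10, 20, 30];
    let base = a.as_ptr() as usize;
    for i in 0..a.len() {
        // Each element sits exactly i * size_of::<i32>() = i * 4 bytes past the base
        let addr = &a[i] as *const i32 as usize;
        assert_eq!(addr, base + i * size_of::<i32>());
    }
}
```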
Implementing classical array structures requires access to low-level language constructs for allocating, addressing, and manipulating system memory directly. This low-level nature means it is not possible to implement classical arrays in all languages, and this text doesn’t cover those details. Fortunately just about every high-level programming language ships with array constructs that you can use to implement more interesting list structures. That being said, it may be worth taking a look at your favorite language’s array implementation, because this text uses arrays and array-based structures extensively to illustrate even more complex structures.

The following examples illustrate declaring and initializing array objects.

Java
// Declares an array with a capacity of 10 double-precision (64-bit) floating-point values
double[] name = new double[10];
// Creates an array of Object types that each hold (point to) different sub-types
// Oh, the power of inheritance
Object[] hello = new Object[] {"hello", 3, true};
Rust
// Declares (but does not initialize) an array with a capacity of 10 64-bit floating-point values
let array_x: [f64; 10];
// Creates a zero-filled array with 10 indexes; Rust infers the default i32 type here
let array_y = [0; 10];

The following example snippet uses an array construct to implement the podium list. Notice that the array must be declared with a compile-time constant in Rust, which makes all owned structures built with the array construct static in Rust. Good sorting algorithms require an average of O(n log n) time, while this structure requires O(n) time for each insertion. In many cases, adding elements in O(1) time and sorting the list once in O(n log n) is more efficient, especially for large lists, since it avoids repeatedly shifting or rebalancing elements on each operation. However, the sorted insertion approach is generally more performant if the list must remain sorted in real time and n is small enough for the cumulative O(n^2) insertion cost to be manageable, typically less than a few hundred elements.

Rust
// Compile-time constant used to set the array size
const PODIUM_SIZE: usize = 10;
...
pub struct Podium {
    data: [Option<Entry>; PODIUM_SIZE],
    size: usize,
}
impl Podium {
    /** Creates a list that contains `const PODIUM_SIZE` number of elements with indexes
    from 0 to (PODIUM_SIZE - 1) */
    pub fn new() -> Podium {
        Podium {
            data: [const { None }; PODIUM_SIZE],
            size: 0,
        }
    }

    /** Adds an entry to the list by score to maintain a sorted list in O(n) time;
    Does not overflow with attempts that exceed the initialized structure size,
    but does not log additional entries without sufficiently high score values */
    pub fn add<'a>(&mut self, name: &'a str, new_score: Option<usize>) {
        // Evaluates the existing array values to find the first appropriate index
        let mut insert_index = None;
        for i in 0..self.data.len() {
            if self.data[i].is_none() || self.data[i].as_ref().unwrap().score < new_score {
                insert_index = Some(i);
                break;
            }
        }
        // Shifts elements to the right of the insertion index to make room
        if let Some(index) = insert_index {
            for j in (index..self.data.len() - 1).rev() {
                self.data[j + 1] = self.data[j].clone();
            }
            let new_entry = Self::entry(name.to_string(), new_score);
            self.data[index] = Some(new_entry);
            // Tracks the number of occupied slots (clamped to the fixed capacity)
            self.size = (self.size + 1).min(PODIUM_SIZE);
        }
    }
    ...
}

Dynamic Arrays

Dynamic arrays are array-based structures that provide functionality to grow or shrink the object’s capacity throughout its lifetime. You can implement manual and/or automatic re-sizing operations as desired. Automatic re-sizing generally involves checking the size of the list against its capacity before attempting to add (or remove) elements. If there is not enough (or far too much) room for the list’s new size, the operation automatically clones the object to a new region of memory with some new capacity before adding or removing the element. Re-sizing operations are inherently expensive due to the number of elements that must be moved to the new memory location.

When designing a dynamic array structure you can choose to re-size arithmetically, by adding/removing a fixed amount of capacity, or geometrically, by adding/removing some multiple of the current capacity. Mathematically you can prove that geometric re-sizing is the most efficient, leading to amortized O(1) addition functions despite occasionally expensive O(n) move operations. Proving the optimum geometric progression is out of scope for this text, but it’s common to see dynamic array implementations double the list’s capacity rather than add space for some fixed number of new elements.

Because time complexity is not the only concern when designing efficient data structures and algorithms, it is similarly common to apply a geometric re-sizing operation when the list shrinks. Again, proofs for the sweet spot(s) are out of scope, but it is common to halve the list’s capacity when the size reaches a quarter of the list’s current capacity. Down-sizing can be implemented as an automatic operation within remove functions or as a stand-alone trim function for the type.
Many popular languages include stand-alone functions to trim the list manually when desired. For example Java has trimToSize() on its dynamic array type ArrayList<T>, and Rust has shrink_to_fit() on Vec<T> for manual downsizing operations.

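Vec’s exact growth factor is an implementation detail, but you can watch the capacity grow in occasional jumps rather than once per push, which is the amortized O(1) behavior described above:

```rust
fn main() {
    let mut v: Vec<i32> = Vec::new();
    let mut last_capacity = v.capacity();
    for i in 0..100 {
        v.push(i);
        if v.capacity() != last_capacity {
            // Capacity jumps arrive in occasional bursts rather than once
            // per push; that spacing is what amortizes additions to O(1)
            println!("len {:>3} -> capacity {:>3}", v.len(), v.capacity());
            last_capacity = v.capacity();
        }
    }
    // The invariant: capacity always keeps pace with size
    assert!(v.capacity() >= v.len());
}
```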
Because dynamic arrays are just arrays with additional logic, you might expect that refactoring an array-based list implementation would be straightforward. In some languages this may be true, but Rust requires arrays to be initialized with a compile-time constant, which means you cannot actually implement a dynamic array on top of the array construct in Rust. Indeed, Rust’s own dynamic array implementation Vec is a stand-alone type built from scratch. In general, Java’s ArrayList<T> and Rust’s Vec<T> types are probably more useful than bare-bones arrays for most solutions.

The following example illustrates idiomatic, geometric re-sizing logic in a dynamic array-based list implementation. Keen eyes will notice that the example actually uses Vec<T> instead of the array construct. This is due to the previously mentioned compile-time constant declaration and the suitability of array constructs in Rust. Implementing either an array or dynamic array type from scratch is beyond the scope of this text, so Vec<T> will have to do. It bears mention that the Vec<T> type is already a fully-functional dynamic array type, but this example forces some re-sizing logic to illustrate the concept anyway—pay no attention to the Vec behind the curtain. The example also uses an idiomatic Clone implementation for the re-size operation. This ensures type safety, but can incur overhead for larger clone operations on huge or frequently fluctuating lists. If the clone() operations create a bottleneck you may want to explore the dark arts of the unsafe world with memcpy or ptr::copy options. The example makes trim() private and calls it from remove(); automatic downsizing is something that Vec<T> does not do on its own. You could also easily implement a trim_to_size() function similar to Rust’s Vec::shrink_to_fit() as a stand-alone operation.

Rust
struct Entry<'a> {
    name: &'a str,
    score: Option<i32>,
}

fn build(name: &str, score: Option<i32>) -> Entry {
    Entry { name, score }
}

// Required for an easy, idiomatic re-size operation
impl<'a> Clone for Entry<'a> {
    fn clone(&self) -> Entry<'a> {
        Entry {
            name: self.name,
            score: self.score,
        }
    }
}

pub struct List<'a> {
    data: Vec<Option<Entry<'a>>>,
    size: usize,
}

impl<'a> List<'a> {
    ...
    /** Takes a name and optional score, creates an entry, and inserts it into the list;
     * If the addition places the list size at or above capacity, the function increases
     * the list capacity by a factor of two */
    pub fn insert(&mut self, name: &'a str, score: Option<i32>) {
        // Checks the list's size against its capacity and
        // grows geometrically to accommodate new entries
        if self.size + 1 >= self.data.len() {
            self.data.resize(2 * self.data.len(), None);
        }
        // Finds the (first) index that:
        // - Places the new entry at the end of the list if its score is None or
        // - Has data with a score less than or equal to the new entry's score
        let i = if score.is_none() {
            self.size
        } else {
            (0..=self.size)
                .find(|&j| self.data[j].as_ref().map_or(true, |n| n.score <= score))
                .unwrap_or(self.size)
        };
        // Shifts elements to make room for the new entry
        for j in (i..self.size).rev() {
            self.data[j + 1] = self.data[j].clone();
        }
        // Builds the entry, inserts it, and increments the list's size
        let entry: Entry = build(name, score);
        self.data[i] = Some(entry);
        self.size += 1;
    }
    ...
    /** Attempts to remove (and return) the data that matches the input name */
    pub fn remove(&mut self, name: &'a str) -> Result<&'a str, String> {
        // Uses Iterator::find() to identify the index of an entry that matches the name input;
        // No special syntax: this block just has an awkwardly long find expression
        if let Some(i) = (0..=self.size).find(|&i| {
            self.data[i]
                .as_ref()
                .map_or(false, |entry| entry.name == name) // Finds matching name or returns false
        }) {
            // If a match is found, shifts entries to the left to fill the gap
            for j in i..self.size {
                self.data[j] = self.data[j + 1].clone();
            }
            // Decrements the list's size, calls the trim function, and returns the name
            self.size -= 1;
            self.trim();
            Ok(name)
        // If no match is found the function surfaces an Err
        } else {
            let err = format!("No match on name {name}");
            Err(err)
        }
    }

    /** Halves the list's capacity (down to a min size of 1) if the size is <= 25% of capacity */
    fn trim(&mut self) {
        let capacity = self.data.len();
        if self.size <= capacity / 4 && capacity > 1 {
            self.data.resize(capacity.max(1) / 2, None);
        }
    }
    ...
}

Linked List Structures

Linked lists are list data structures that consist of discretely allocated data node objects that contain some “payload” data and at least one reference (or pointer) to another node. Instead of existing in contiguous regions of memory like an array, each node typically exists in a separately allocated region of memory. The reference to another node is the link in the linked list. Instead of accessing a list’s nodes directly you traverse the list sequentially, starting at some designated node like the “head” (start) or “tail” (end) node, and follow the links to the desired node you want to access.

It is common to categorize or define linked list variations based on how they’re linked. For example, a singly-linked list contains only one pointer per node resulting in only one way to traverse the list. Singly-linked lists typically contain references to the “next” node in the list. A doubly-linked list includes two references so you can traverse the list forwards and backwards with “next” and “previous” node references. A circularly-linked list connects the head to the tail nodes for a looping traversal. Some linked lists contain sentinel nodes that do not contain any payload information but simply serve as markers for the start and end of the list. Sentinels are less convenient in Rust due to the ownership and borrowing rules with interior/shared mutability, but sentinel lists are very popular in Java. Linked list structures can be intrusive by holding references to the head/tail within the node structure itself, or non-intrusive by implementing separate node and list structures that hold just the data/elements and references to the nodes, head, and tail respectively.

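As a minimal sketch, a non-intrusive singly-linked node can be expressed in safe Rust with Box-owned links. This toy list (the type names are this text’s invention) only supports push_front and len, but it shows the basic node-and-link shape before the unsafe doubly-linked example below:

```rust
// A minimal singly-linked list: each node owns its successor via Box
struct Node<T> {
    data: T,
    next: Option<Box<Node<T>>>,
}

struct SinglyLinked<T> {
    head: Option<Box<Node<T>>>,
}

impl<T> SinglyLinked<T> {
    fn new() -> Self {
        SinglyLinked { head: None }
    }

    /// O(1): the new node becomes the head and links to the old head
    fn push_front(&mut self, data: T) {
        let new_head = Box::new(Node {
            data,
            next: self.head.take(),
        });
        self.head = Some(new_head);
    }

    /// O(n): walks the links from the head to count the nodes
    fn len(&self) -> usize {
        let mut count = 0;
        let mut current = self.head.as_deref();
        while let Some(node) = current {
            count += 1;
            current = node.next.as_deref();
        }
        count
    }
}
```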
Unlike array-based lists, operations on arbitrary nodes can be inefficient for linked lists with a large number of nodes. There are, of course, workarounds, but a chosen data structure’s elegance comes from how appropriate it is for the given use case(s). For example, while the raw insertion or deletion of an arbitrary node in a linked list can be accomplished in O(1) time due to the fact that you don’t need to shift any elements to make room, it still generally takes O(n) time to locate the insertion/deletion point. Although relatively rare in the real world, linked lists can be found in video game engine code, memory allocators, and some common stack-based operations.

The Rust implementation of a doubly-linked list is much trickier than any Java implementation because each node represents a mutable, shared resource, something that Rust goes to great lengths to prevent in its safe APIs. To implement a doubly-linked list in Rust with any elegance you will probably need unsafe code. Of course, using unsafe Rust gives you access to all the foot-guns that Rust was meant to make obsolete. The good news is that the Rust community developed Miri to keep you from injuring yourself too badly. Miri is a Rust interpreter that runs your embedded tests to detect undefined behavior (UB) and even some memory leaks. The catch, naturally, is that it only checks the code paths you exercise, so it’s advantageous to write many tests covering all sorts of use cases. See the repo for the full set of tests.

This example illustrates a naive, unsafe doubly-linked list implementation for the sorted podium list. Many of the implementation details are scrubbed for brevity; see the full implementation for details. The actual list structure consists only of a head pointer, a tail pointer, and a size. The contents of the list are accessible only via list traversal. The insert() method is rather lengthy because it covers four special cases: inserting the first node of the list, replacing the head node, replacing the tail node, and inserting the new node mid-list to maintain sorted values. The last two cases use something like a cursor to track relative node values, dereferencing “next” node pointers to determine placement and then setting all values as necessary. Inserting mid-list requires a specific order of operations to ensure data integrity. Always accompany unsafe operations with more Miri tests than you think you need.

Rust
// Raw pointer to some Node
type Link = Option<*mut Node>;

pub struct Node {
    pub name: String,
    pub score: i32,
    prev: Link,
    next: Link,
}
...
/** The actual list structure */
pub struct List {
    head: Link,
    tail: Link,
    length: usize,
}

impl List {
    ...
    pub fn insert(&mut self, node: Box<Node>) {
        // Gets a raw, mutable pointer to the (new) unique heap object
        let new_node_ptr: *mut Node = Box::into_raw(node);
        unsafe {
            // Special case for an empty list
            if self.head.is_none() {
                // Sets the new node's pointers to None
                (*new_node_ptr).next = None;
                (*new_node_ptr).prev = None;
                // Sets the list's initial head and tail pointers, increments the list size
                self.head = Some(new_node_ptr);
                self.tail = Some(new_node_ptr);
                self.length += 1;
                return;
            }
            // Special case for inserting a new head node
            if (*new_node_ptr).score > (*self.head.unwrap()).score {
                // Sets the new node's next pointer to the current head
                (*new_node_ptr).next = self.head;
                // Sets the original head's prev pointer to the new node
                (*self.head.unwrap()).prev = Some(new_node_ptr);
                // Resets the list's head and increments the list size
                self.head = Some(new_node_ptr);
                self.length += 1;
                return;
            }
            // Traverses the list to find the correct insertion point by peeking at the next node
            let mut current = self.head;
            while let Some(current_ptr) = current {
                let current_node = &mut *current_ptr;
                // Special case for inserting a new tail
                if current_node.next.is_none() {
                    (*new_node_ptr).prev = Some(current_ptr);
                    (*new_node_ptr).next = None;
                    current_node.next = Some(new_node_ptr);
                    // Resets the list's tail pointer and increments the list size
                    self.tail = Some(new_node_ptr);
                    self.length += 1;
                    return;
                }
                // Inserts mid-list;
                // If the next node's score is less than or equal to the new node's score,
                // inserts the new node between current and current.next
                else if (*current_node.next.unwrap()).score <= (*new_node_ptr).score {
                    // b.prev -> a
                    (*new_node_ptr).prev = Some(current_ptr);
                    // b.next -> c
                    (*new_node_ptr).next = current_node.next;
                    // c.prev -> b
                    (*current_node.next.unwrap()).prev = Some(new_node_ptr);
                    // a.next -> b
                    current_node.next = Some(new_node_ptr);
                    // Increments the list size
                    self.length += 1;
                    return;
                }
                current = current_node.next;
            }
        }
    }
    ...
}

Iteration

Iterating over the sequence of elements is possibly the most common operation performed on lists. Iteration and recursion allow you to write succinct code that applies the same operation to each element in a collection. Naturally the usual caveats about runaway recursion apply to (especially large) lists, so it is generally recommended to implement some form of iterative processing instead. Most indexed constructs ship with some way to iterate over the collection, but you’ll need to implement your own iteration for linked list implementations. This example illustrates language constructs to iterate over indexed structures.

Java
// Uses Java's "enhanced" for loop to write and print values
// in an array with a length of 10
int[] a = new int[10];
int counter = 0;
for (int i : a) {
    a[counter] = counter;
    counter++;
}
System.out.println(Arrays.toString(a));
System.out.println(a[5]); // Arrays are 0-indexed, so this is the 6th element

Output:
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
5
Rust
// Illustrates the idiomatic iterator approach;
// declares an array of ten signed, 32-bit values, then creates
// and uses a mutable iterator to write consecutive values to
// each index in the array
let mut a_x: [i32; 10] = [0; 10];
let mut c = 0;
for i in a_x.iter_mut() {
    *i = c;
    c += 1;
}
// Illustrates an alternative (but less idiomatic) approach
// that uses direct indexing to write consecutive values to
// each index in the array
let mut a_y: [i32; 10] = [0; 10];
for i in 0..a_y.len() {
    a_y[i] = i as i32;
}
println!("{:?}\n{:?}", a_x, a_y);
println!("{}", a_y[5]);

Output:
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
5

Collections that do not ship with iterative or recursive constructs may require you to implement your own. Implementing your own iterator generally involves defining a next() function on your collection such that calling it returns the next element from wherever you started. This example illustrates implementing Iterator and DoubleEndedIterator for a doubly-linked List type in Rust. The Iterator trait requires the next() function definition that allows you to define and call iter() (or similar) on your collection. This returns an iterator that calls next() to traverse the list. Similarly, the DoubleEndedIterator requires a next_back() function that allows you to call rev() on your previously-defined iterator to traverse the list backwards from the tail. Implementing these interfaces (called traits in Rust) provides the ability to call self.iter() and self.iter().rev() respectively, as illustrated in the last two print functions of the truncated impl List block.

Rust
impl List {
    ...
    pub fn iter(&self) -> Iter {
        Iter {
            next: self.head.as_ref().map(|&ptr| unsafe { &*ptr }),
            prev: self.tail.as_ref().map(|&ptr| unsafe { &*ptr }),
        }
    }

    pub fn print(&self) {
        let mut counter = 1;
        for e in self.iter() {
            println!("{:>2}: {:<8} {:>6}", counter, e.name, e.score);
            counter += 1;
        }
        println!()
    }

    pub fn print_rev(&self) {
        let mut counter = self.length;
        for e in self.iter().rev() {
            println!("{:>2}: {:<8} {:>6}", counter, e.name, e.score);
            counter -= 1;
        }
        println!()
    }
}
...
pub struct Iter<'a> {
    next: Option<&'a Node>,
    prev: Option<&'a Node>,
}

impl<'a> Iterator for Iter<'a> {
    type Item = &'a Node;
    /** Returns each Node in the list until there are None */
    fn next(&mut self) -> Option<Self::Item> {
        self.next.take().map(|current| {
            self.next = current.next.as_ref().map(|&ptr| unsafe { &*ptr });
            current
        })
    }
}

// Enables the use of rev() on the Iterator
impl<'a> DoubleEndedIterator for Iter<'a> {
    fn next_back(&mut self) -> Option<Self::Item> {
        self.prev.take().map(|current| {
            self.prev = current.prev.as_ref().map(|&ptr| unsafe { &*ptr });
            current
        })
    }
}

Comparisons

You can use static arrays, dynamic arrays, and linked lists to solve all sorts of similar problems. In theory linked lists offer some competitive operational/algorithmic advantages over array-based list implementations, mostly because of the distributed nature of their memory usage.

Adding elements to the front or rear of a linked list requires O(1) time, and you can often allocate a single node on the heap faster than it would take to move an entire array to a new block of memory when you reach its capacity. Additions to the end of an array-based list (within capacity) are also O(1), but adding to the front of the list requires shifting elements and ends up requiring O(n) time. Inserting elements into array-based lists at capacity triggers potentially expensive O(n) re-sizing operations, but overall these additions amortize to O(1) time.

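The cost difference is visible in Vec’s semantics: insert(0, e) must shift every existing element one slot to the right before writing, while push simply writes past the last element:

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    // O(1) amortized: appends at the end, no shifting required
    v.push(4);
    assert_eq!(v, vec![1, 2, 3, 4]);

    // O(n): every existing element shifts right to open up index 0
    v.insert(0, 0);
    assert_eq!(v, vec![0, 1, 2, 3, 4]);
}
```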
Arbitrarily positioned insertions and deletions on both indexed and linked structures, outside of head and tail nodes, require something more like O(n) time. Technically these operations only require O(1) work on a linked list, but unless you’re caching position data you also need to find the node, which requires O(n) list traversal. Linked lists may still be more efficient for insertion/deletion of very large elements, but dynamic arrays are generally more efficient overall, especially for smaller elements and operations in the middle of the set.

Accessing data (get operations) in both indexed and linked list structures takes only O(1) time, but for linked lists you’ll also need to find the node first, which may take up to O(n) time.

Conclusion

Due to reasons mostly out of scope of this text, including the frequency and cost of per-node memory allocations, linked lists aren’t used much in the real world. This is partly because linked lists became popular before some major advancements in computer architecture. Modern computers make efficient use of caching that works best with contiguous (or at least nearby) regions of memory, as opposed to the scattered memory access patterns of the non-contiguous structures present in linked lists. The re-sizing logic of dynamic arrays also amortizes end insertions down to the same constant time as linked list insertions. Indeed most of the theoretical benefits of linked lists are minimized by properly designed dynamic array structures. According to the author(s) of Learn Rust With Entirely Too Many Linked Lists, the Vec and VecDeque in Rust are,

...blatantly superior data structures for most workloads due to less frequent allocation, lower memory overhead, true random access, and cache locality.

The book goes on to name some cases where a linked list may be superior to a dynamic array. It should be noted that these use cases are rare, potentially solve difficult/specialized problems, or don’t actually relate to the language choice itself.

  • Intense amounts of splitting or merging in (very) large lists
  • Lock-free concurrency
  • Resource-constrained environments (kernels and embedded) that warrant intrusive lists
  • Pure functional languages with limited semantics and absence of mutation

Both array and linked list structures are available in most high-level languages’ standard libraries, so this is mostly an academic exercise anyway; you will likely not be implementing either of these structures on a regular basis. This text uses linked lists primarily as an introduction to data structures and algorithms as a concept, a lead-in to more complex data structures, and a way to encourage language mastery.