Nodes connected by pointers. O(1) insert/delete, O(n) access.
Arrays feel like books with numbered pages: jump straight to page 57. Linked lists feel like a treasure hunt: each clue points to the next clue. That “next clue” idea is simple, but it unlocks many core CS structures like stacks, queues, and graph traversal.
A linked list is a chain of nodes. Each node stores (1) a value and (2) a pointer/reference called next to the next node. The list starts at head and ends when next is null. You can insert/delete near a known node in O(1), but accessing the k-th element requires walking from head, which is O(n).
A linked list is a linear data structure made of nodes, where each node knows how to reach the next node.
A singly linked list node contains:
- value: the data the node stores
- next: a pointer/reference to the following node
A list also has a special reference:
- head: points to the first node (head = null means the list is empty)
And it uses a special terminator:
- null: the last node's next is null, marking the end of the list
So the shape is:
head → [value | next] → [value | next] → [value | null]
Arrays are great when you need fast random access: element i is reachable in O(1). But arrays typically assume elements are stored in a contiguous block of memory, which makes some operations expensive:
- inserting or deleting at the front (or middle) requires shifting elements: O(n)
- growing beyond the current capacity requires allocating a bigger block and copying everything over
Linked lists trade away random access to gain a different power: local rewiring.
If you have a pointer to a node, you can often insert or delete next to it by changing a small number of pointers—no shifting required.
Even if you’ve never used C/C++ pointers, you can understand linked lists with these minimal ideas:
1) Reference / pointer: a variable that “points to” an object/node.
2) null: a special value meaning “no object here.”
3) Big-O: a rough way to describe how runtime grows with input size n.
4) Basic pseudocode: we'll write steps like `curr = head` and `curr = curr.next`.
That’s it.
To reason safely, keep these mental rules:
- Assigning a pointer (curr = head) does not copy or move any node; it just makes another name refer to it.
- Changing a node's next rewires the list; changing curr only moves your viewpoint.
- Never follow .next (or read .value) from null; check for null first.
Suppose we store 3 → 7 → 9.
If you want the value 9, you must walk: head → node(3) → node(7) → node(9).
That “walk” is the central linked-list operation.
Let n be the number of nodes. Informally:
- reaching an arbitrary node costs up to n steps ⇒ O(n)
- rewiring around a node you already hold costs a constant number of pointer changes ⇒ O(1)
We'll make these precise in the next sections.
With arrays, “go to index i” is direct. With linked lists, the default operation is: start at head and follow next pointers.
Traversal is how you:
- print or collect every value
- search for a value
- count the nodes
- reach the k-th element
Because traversal is common, it’s also the main source of O(n) cost.
We use a moving pointer/reference, often called curr.
Pseudocode:
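Here is a minimal sketch in Python, assuming a simple `Node` class with `value` and `next` fields (the class itself is an illustrative helper, not part of the lesson's pseudocode):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next   # None plays the role of the null terminator

def to_list(head):
    """Walk the chain from head, collecting values until next is null (None)."""
    values = []
    curr = head
    while curr is not None:    # the null terminator ends the loop
        values.append(curr.value)
        curr = curr.next       # follow the next pointer
    return values

head = Node(3, Node(7, Node(9)))   # builds 3 -> 7 -> 9
print(to_list(head))               # [3, 7, 9]
```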
This loop ends because eventually curr becomes null (the null terminator).
We keep a counter.
If there are n nodes, the loop runs n times ⇒ O(n).
We scan until we find it or hit null.
Worst case: target is not present (or at the end), so you examine all n nodes ⇒ O(n).
To get element k, you move k steps.
Runtime: in the worst case k ≈ n ⇒ O(n).
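The three O(n) traversal variants above (counting, searching, indexed access) can be sketched in Python; the `Node` class is an assumed minimal helper:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def length(head):
    """Count nodes: one step per node => O(n)."""
    count = 0
    curr = head
    while curr is not None:
        count += 1
        curr = curr.next
    return count

def contains(head, target):
    """Scan until we find target or fall off the end => O(n) worst case."""
    curr = head
    while curr is not None:
        if curr.value == target:
            return True
        curr = curr.next
    return False

def get_kth(head, k):
    """Move k steps from head (0-indexed) => O(k), O(n) worst case."""
    curr = head
    for _ in range(k):
        curr = curr.next   # assumes k is a valid index
    return curr.value

head = Node(3, Node(7, Node(9)))
print(length(head), contains(head, 9), get_kth(head, 2))  # 3 True 9
```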
Many operations (especially deletion) require knowing not just curr, but also the node before it.
Pattern: walk with two pointers, prev (starting at null) and curr (starting at head); on each step, set prev = curr before advancing curr = curr.next.
After the loop: curr is the node you stopped at (or null if you fell off the end), and prev is the node just before it (or null if you stopped at head).
This “prev/curr” pair shows up everywhere.
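A Python sketch of the pattern (the `Node` class is an assumed helper):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def find_with_prev(head, target):
    """Return (prev, curr) where curr is the first node with value == target.
    prev is None when the target is at head; curr is None if not found."""
    prev = None
    curr = head
    while curr is not None and curr.value != target:
        prev = curr          # prev trails one node behind curr
        curr = curr.next
    return prev, curr

head = Node(3, Node(7, Node(9)))
prev, curr = find_with_prev(head, 7)
print(prev.value, curr.value)   # 3 7
```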
Traversal is the price you pay for the benefits you’ll see next: simple pointer rewiring.
In an array, inserting at the front requires shifting all n elements (O(n)).
In a linked list, inserting at the front is usually just:
1) create a new node
2) point it at the old head
3) move head to the new node
That’s a constant number of operations ⇒ O(1).
The key insight: you don’t move the nodes; you change the links.
Assume we have: head → A → B → null.
We want to insert X at the front.
Steps:
1) X.next = head
2) head = X
Result: head → X → A → B → null.
This is O(1).
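The same two steps, sketched in Python (the `Node` class is an assumed helper; because Python passes references, the function returns the new head for the caller to store):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def push_front(head, x):
    """Insert x at the front: constant pointer work => O(1)."""
    node = Node(x)
    node.next = head   # step 1: X.next = head
    return node        # step 2: caller does head = push_front(head, x)

head = Node(7, Node(9))             # 7 -> 9
head = push_front(head, 3)          # 3 -> 7 -> 9
print(head.value, head.next.value)  # 3 7
```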
Suppose you already have a reference to node A, and you want to insert X right after it.
Before: … → A → B → …
Steps:
1) X.next = A.next (so X points to B)
2) A.next = X (so A points to X)
After: … → A → X → B → …
Also O(1) (because it’s a constant number of pointer changes).
Order matters: if you do A.next = X first, you lose the original link to B unless you saved it.
Correct safe order: first X.next = A.next, then A.next = X.
If you have node A, and you want to delete the node right after it (call that node B), you can bypass B.
Before: … → A → B → C → …
Steps:
1) A.next = B.next (A points directly to C)
2) (optionally) free/delete B in languages that require it
After: … → A → C → … (B is bypassed)
Again O(1).
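Both O(1) rewirings, sketched in Python (the `Node` class is an assumed helper):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insert_after(a, x):
    """Insert a new node holding x right after node a: two pointer writes => O(1)."""
    node = Node(x)
    node.next = a.next   # 1) X.next = A.next (save the link to B first!)
    a.next = node        # 2) A.next = X

def delete_after(a):
    """Bypass the node after a: one pointer write => O(1)."""
    if a.next is not None:
        a.next = a.next.next   # A now points directly past B

head = Node(1, Node(2, Node(4)))   # 1 -> 2 -> 4
insert_after(head.next, 3)         # 1 -> 2 -> 3 -> 4
delete_after(head)                 # 1 -> 3 -> 4 (node 2 is bypassed)
```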
Deleting the first node is special because it changes head.
Before: head → A → B → …
Steps:
1) head = head.next
2) (optionally) free/delete old A
After: head → B → … (the old A is no longer reachable from head)
O(1).
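In Python this is a single pointer move (the `Node` class is an assumed helper; the old node is reclaimed by the garbage collector, so there is no explicit free step):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def pop_front(head):
    """Delete the head node by moving head one step forward. O(1)."""
    if head is None:
        return None        # empty list: nothing to delete
    return head.next       # caller does head = pop_front(head)

head = Node(3, Node(7, Node(9)))
head = pop_front(head)    # list is now 7 -> 9
print(head.value)         # 7
```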
Now combine traversal with rewiring.
Goal: delete the first node whose value == target.
We need the prev/curr pattern: advance curr while keeping prev pointing at the node just before it.
Cases:
1) target is at head: update head
2) target is in middle/end: set prev.next = curr.next
3) target not found: do nothing
This is O(n) because you may scan the list.
Pseudocode:
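A Python sketch covering all three cases (the `Node` class is an assumed helper; the function returns the possibly-new head):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def delete_value(head, target):
    """Delete the first node whose value == target; O(n) due to the scan."""
    if head is None:
        return None
    if head.value == target:       # case 1: target at head => update head
        return head.next
    prev, curr = head, head.next
    while curr is not None:
        if curr.value == target:   # case 2: bypass curr
            prev.next = curr.next
            break
        prev, curr = curr, curr.next
    return head                    # case 3: not found => list unchanged

head = Node(3, Node(7, Node(7, Node(9))))   # 3 -> 7 -> 7 -> 9
head = delete_value(head, 7)                # 3 -> 7 -> 9 (first 7 removed)
```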
| Operation | What you need | Time | Why |
|---|---|---|---|
| Traverse/print | head | O(n) | must follow next links |
| Search by value | head | O(n) | may inspect all nodes |
| Access k-th | head, k | O(n) | move step-by-step |
| Insert at head | head | O(1) | constant rewiring |
| Delete at head | head | O(1) | constant rewiring |
| Insert after node A | pointer to A | O(1) | constant rewiring |
| Delete after node A | pointer to A | O(1) | constant rewiring |
Nodes typically live separately in memory. Each node stores extra “overhead” (the next pointer). That overhead is part of the trade-off:
- arrays: compact and cache-friendly, but rigid about insertion and deletion
- linked lists: flexible O(1) rewiring, but extra memory per node and worse cache locality
At this level, the essential skill is: draw the pointers and perform pointer rewiring carefully.
Linked lists are a “building block” data structure. Once you can:
- traverse with a moving curr pointer
- insert and delete by rewiring next pointers
- handle the head and null special cases
…you can implement several important abstractions.
A stack supports:
- push(x): add x to the top
- pop(): remove and return the top element
Using a linked list:
- push = insert at head ⇒ O(1)
- pop = delete at head ⇒ O(1)
Why it fits: the top of the stack can be the head of the list.
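A Python sketch of a linked-list stack (the `Node` class is an assumed helper; `pop` assumes a non-empty stack for brevity):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class Stack:
    """LIFO stack where the head of a linked list is the top."""
    def __init__(self):
        self.head = None

    def push(self, x):
        """Insert at head => O(1)."""
        self.head = Node(x, self.head)

    def pop(self):
        """Delete at head => O(1). Assumes the stack is non-empty."""
        value = self.head.value
        self.head = self.head.next
        return value

s = Stack()
s.push(1); s.push(2); s.push(3)
print(s.pop(), s.pop())   # 3 2
```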
A queue supports:
- enqueue(x): add x at the back
- dequeue(): remove and return the front element
Linked list approach:
- dequeue = delete at head
- enqueue = append at the tail
If you maintain both head and tail pointers, both operations are O(1).
Without a tail pointer, enqueue requires walking to the end ⇒ O(n). This is a nice example of how one extra reference can change performance.
Graphs are “nodes connected by references” too. Linked lists are the simplest version: each node has only one outgoing edge (next). When you later learn BFS/DFS, you’ll still be following pointers/references—just with branching.
Use a linked list when:
- you frequently insert/delete at the front, or next to nodes you already hold
- you don't need random access by index
- you're building pointer-based structures (stacks, queues, adjacency lists)
Prefer an array/dynamic array when:
- you need fast indexed access
- you mostly read, or append at the end
- memory compactness and cache performance matter
Linked list skills are mostly about maintaining correct structure:
- update pointers in the right order
- handle the head (and empty-list) special cases
- never dereference null
If you can do that, you’re ready to build stacks, queues, and start thinking in terms of pointer-based structures like trees and graphs.
You have a list: head → 10 → 20 → 40 → null. You have a reference to the node holding 20. Insert a new node holding 30 right after 20, producing: head → 10 → 20 → 30 → 40 → null.
Name the known node A = node(20). Let B = A.next (currently node(40)).
We want: A → X → B, where X is node(30).
Create the new node X with X.value = 30.
Set X.next = A.next.
This makes X.next point to B (node(40)).
Now the chain conceptually is: A → B and X → B.
Set A.next = X.
Now A points to X, and X points to B.
The list becomes: head → 10 → 20 → 30 → 40 → null.
Verify you did not lose node(40): it is still reachable from head by following next pointers.
Insight: The safe insertion pattern is always: (1) new.next = old_next, then (2) old.next = new. If you reverse the order without saving old_next, you can disconnect the rest of the list.
Delete the first node whose value is 7 from: head → 3 → 7 → 7 → 9 → null. Only the first 7 should be removed.
Initialize two pointers:
prev = null
curr = head (node with value 3)
Check curr.value: it is 3, not 7, so we keep going.
Update:
prev = curr (now prev is 3)
curr = curr.next (now curr is the first 7)
Check curr.value again: it is 7, the target, so we stop here.
Since prev != null, this is not deleting the head.
Rewire:
prev.next = curr.next
That is: node(3).next now points to the second 7.
Optionally free/delete curr depending on the language.
Now the list is: head → 3 → 7 → 9 → null (where this 7 is the original second 7).
Verify by traversal from head: 3, then 7, then 9, then null.
Insight: Deletion is usually “bypass the node.” To bypass safely, you often need the node before it (prev). The head case is special because there is no previous node.
Given: head → 5 → 8 → 2 → 1 → null. Find the element at index k = 3 (0-indexed).
Start:
curr = head (value 5)
i = 0
We need to move forward while i < 3.
Step 1: i = 0 < 3 ⇒ curr = curr.next (value 8), i = 1
Step 2: i = 1 < 3 ⇒ curr = curr.next (value 2), i = 2
Step 3: i = 2 < 3 ⇒ curr = curr.next (value 1), i = 3
Now i == 3, stop. curr points to the node at index 3.
Return curr.value = 1.
Cost explanation: in the worst case, k is near n-1, so you take about n steps. That’s why indexed access in a linked list is O(n).
Insight: Linked lists don’t store an “index → address” map. The only way to reach deeper nodes is to follow next repeatedly from head.
A singly linked list is a chain of nodes; each node stores a value and a next pointer/reference.
head is the entry point; an empty list has head = null.
The list ends when a node’s next is null (the null terminator).
Traversal (following next) is fundamental and costs O(n) in the number of nodes visited.
Insert/delete at the head is O(1) because it changes only a constant number of pointers.
Insert/delete after a known node is O(1); deleting/searching by value is usually O(n) because you must find the spot first.
Pointer update order matters: set new.next before redirecting old.next to avoid losing the remainder of the list.
Linked lists naturally implement stacks (head as top) and queues (head/tail as ends).
Forgetting the head special case when deleting: if the target is at head, you must update head (there is no prev).
Updating pointers in the wrong order during insertion, accidentally disconnecting the rest of the list.
Null dereference: accessing curr.next when curr is null (always check termination conditions).
Assuming linked lists have O(1) indexing like arrays; reaching the k-th element requires O(k) traversal.
Write pseudocode to insert a new value x at the front of a singly linked list. Use head and next. What is the time complexity?
Hint: Create a node X. Point X.next to the current head, then move head to X.
Pseudocode:
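One possible answer, sketched in Python (the `Node` class is an assumed helper; the function returns the new head for the caller to store):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insert_front(head, x):
    """1) create node X  2) X.next = head  3) head = X."""
    node = Node(x)
    node.next = head
    return node   # the new head

head = None
head = insert_front(head, 2)   # 2
head = insert_front(head, 1)   # 1 -> 2
print(head.value)              # 1
```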
Time complexity: O(1) because it does a constant amount of work.
Given head → 1 → 2 → 3 → 4 → null, delete the node with value 3 (assume it exists). Show the key pointer update(s).
Hint: You must traverse until curr is 3, keeping prev as the node before curr.
Traversal: start with prev = null and curr = head; advance both until curr.value == 3.
At the moment curr is node(3), prev is node(2).
Delete by bypassing: prev.next = curr.next.
So node(2).next now points to node(4).
Result: head → 1 → 2 → 4 → null.
Runtime: O(n) due to traversal.
You maintain both head and tail pointers for a singly linked list. Describe how to enqueue (append) a value x in O(1) time, including how tail changes in the empty-list case.
Hint: If the list is empty, head and tail should both point to the new node. Otherwise, link tail.next to the new node and move tail.
Let X = new Node(x), with X.next = null.
Case 1: empty list (head == null): set head = X and tail = X.
Case 2: non-empty: set tail.next = X, then tail = X.
This is O(1) because it updates a constant number of pointers.
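The two cases can be sketched as a Python queue class (the `Node` class is an assumed helper; `dequeue` assumes a non-empty queue for brevity):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class Queue:
    """FIFO queue: dequeue at head, enqueue at tail; both O(1)."""
    def __init__(self):
        self.head = None
        self.tail = None

    def enqueue(self, x):
        node = Node(x)                  # X.next starts as null
        if self.head is None:           # case 1: empty list
            self.head = self.tail = node
        else:                           # case 2: non-empty
            self.tail.next = node       # link old tail to X
            self.tail = node            # move tail

    def dequeue(self):
        """Assumes the queue is non-empty."""
        value = self.head.value
        self.head = self.head.next
        if self.head is None:           # list became empty: reset tail too
            self.tail = None
        return value

q = Queue()
q.enqueue(1); q.enqueue(2); q.enqueue(3)
print(q.dequeue(), q.dequeue())   # 1 2
```

Note the small subtlety in `dequeue`: when the last node is removed, tail must be reset to null as well, or it would keep pointing at a node no longer in the list.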