FIFO structure with enqueue and dequeue operations.
Self-serve tutorial: low prerequisites, straightforward concepts.
When tasks must be handled in the exact order they arrive—customers in a line, print jobs, network packets, or events in a UI—queues give you a simple rule that prevents “cutting”: first in, first out.
A queue is a FIFO data structure: you enqueue(Q, x) at the rear and dequeue(Q) from the front. With the right implementation (linked list with head+tail, or a circular array), both operations can be O(1).
Many problems are not about finding the best element—they’re about preserving arrival order. If items must be processed in the same order they were received, you want a structure that makes “the oldest item goes next” the default behavior.
That rule is First-In, First-Out (FIFO):
A queue models a real-world line: you join the back, and you leave from the front.
A queue is an abstract data type (ADT) supporting (at minimum) these operations:
- enqueue(Q, x): insert x at the rear
- dequeue(Q): remove and return the element at the front
Common additional operations:
- peek(Q) (or front(Q)): return the front element without removing it
- isEmpty(Q): report whether the queue holds no elements
- size(Q): report how many elements are stored
If a queue currently stores elements in arrival order:
front → [a, b, c] → rear
then after enqueue(Q, x):
front → [a, b, c, x] → rear
This “order preservation” is the whole point.
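The enqueue/dequeue behavior above can be sketched with Python's `collections.deque`, which supports O(1) appends at the rear and removals from the front:

```python
from collections import deque

q = deque()                # empty queue
for item in ["a", "b", "c"]:
    q.append(item)         # enqueue at the rear

q.append("x")              # enqueue(Q, x): x joins behind c
print(list(q))             # front -> ['a', 'b', 'c', 'x'] <- rear

oldest = q.popleft()       # dequeue(Q): removes and returns the front
print(oldest)              # 'a', the oldest item
```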
A queue is the behavior, not a particular memory layout. You can implement that behavior using:
- a linked list with head and tail pointers
- a circular array (ring buffer)
The same FIFO rules apply regardless.
Queues are often used in performance-sensitive pipelines (scheduling, buffering, BFS). If every dequeue requires shifting many elements, the queue becomes a bottleneck. So we’ll focus on implementations where enqueue and dequeue are O(1) in the usual case.
Data structures are powerful because they restrict what you can do. That restriction creates guarantees.
For a queue, the restriction is:
- you may insert only at the rear and remove only from the front.
This gives a guarantee:
- elements leave in exactly the order they arrived (no reordering, no skipping).
A useful mental model is timestamps: imagine stamping each item with its arrival time as it is enqueued; dequeue always removes the item with the smallest timestamp.
So if you enqueue in order x₁, x₂, x₃, then you must dequeue in the same order.
Assume we enqueue items x₁, x₂, …, xₙ.
At any time, the queue stores a contiguous suffix of that sequence (some earliest items may have been dequeued already).
Each dequeue removes the earliest item still stored, so the dequeue order is x₁, x₂, …, xₙ.
Let Q be a queue.
If Q is empty, dequeue(Q) has no element to return.
Different APIs handle this differently:
- raise an exception or signal an error
- return a sentinel value such as null/None
- return an optional value the caller must check
Regardless of API, the key idea is: dequeue requires Q to be non-empty.
Queues often get confused with stacks and deques. It helps to compare them directly:
| Structure | Insert | Remove | Order rule | Typical use |
|---|---|---|---|---|
| Stack | push at top | pop at top | LIFO | undo, recursion |
| Queue | enqueue at rear | dequeue at front | FIFO | scheduling, buffering |
| Deque | both ends | both ends | flexible | sliding windows |
The FIFO guarantee is what makes a queue ideal for fairness (first come, first served).
The ADT says “insert at rear, remove from front,” but you still have to store the elements somewhere.
A common beginner mistake is implementing a queue with a plain array and doing: remove A[0] on dequeue, then shift every remaining element one slot to the left.
That shift is O(n) per dequeue. We want O(1).
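A minimal sketch of the contrast, using a plain Python list for the naive version and `collections.deque` for the O(1) version:

```python
from collections import deque

# Naive: a plain list, dequeuing by popping index 0.
# list.pop(0) shifts every remaining element left, so each dequeue is O(n).
naive = [10, 20, 30]
naive_front = naive.pop(0)    # returns 10 after an O(n) shift

# Better: deque removes from the front without shifting anything.
fast = deque([10, 20, 30])
fast_front = fast.popleft()   # returns 10 in O(1)
```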
We’ll look at two standard O(1) implementations.
You already know linked lists support O(1) insert/delete if you have a pointer to the right place.
Maintain:
- a head pointer to the front node
- a tail pointer to the rear node
Each node stores (value, next).
Front is head, rear is tail.
| Aspect | Linked-list queue |
|---|---|
| Worst-case time per op | O(1) |
| Memory overhead | higher (pointers) |
| Cache friendliness | worse (non-contiguous) |
| Simplicity | very straightforward |
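A minimal linked-list queue sketch in Python, following the head/tail scheme above (class and method names are illustrative):

```python
class _Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedQueue:
    """FIFO queue with head (front) and tail (rear) pointers; O(1) ops."""
    def __init__(self):
        self.head = None
        self.tail = None
        self.size = 0

    def enqueue(self, x):
        n = _Node(x)
        if self.tail is None:       # empty queue: n is both front and rear
            self.head = n
        else:
            self.tail.next = n      # link behind the current rear
        self.tail = n
        self.size += 1

    def dequeue(self):
        if self.head is None:
            raise IndexError("dequeue from empty queue")
        x = self.head.value
        self.head = self.head.next
        if self.head is None:       # removed the last node
            self.tail = None        # keep tail consistent with empty state
        self.size -= 1
        return x
```

Note the two special cases: enqueue into an empty queue must set both pointers, and dequeue of the last node must reset both to None.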
Arrays are contiguous and cache-friendly, but we must avoid shifting.
Instead of physically moving elements, keep two indices:
- f, the front index (the next position to read)
- r, the rear index (the next position to write)
Indices “wrap around” using modulo arithmetic.
Let A be an array of capacity C.
Maintain:
- front index f (next read position)
- rear index r (next write position)
- size s (number of stored elements)
Invariants:
- the stored elements occupy positions f, (f + 1) mod C, …, (f + s − 1) mod C
- r = (f + s) mod C
To enqueue(Q, x):
1) If s == C, the buffer is full (either error or resize)
2) Write A[r] = x
3) Update r = (r + 1) mod C
4) Update s = s + 1
To dequeue(Q):
1) If s == 0, underflow
2) Read x = A[f]
3) (Optional) clear A[f]
4) Update f = (f + 1) mod C
5) Update s = s − 1
6) Return x
When r reaches C − 1, adding 1 yields C. Taking mod C maps it back to 0:
(r + 1) mod C = 0
So indices cycle through 0, 1, …, C − 1, 0, 1, …
If you only store f and r, then f == r could mean:
- the queue is empty (s = 0), or
- the queue is completely full (s = C)
That’s why we track size s (or keep one slot empty as a convention).
| Aspect | Circular-array queue |
|---|---|
| Worst-case time per op | O(1) (amortized O(1) if resizing) |
| Memory overhead | low |
| Cache friendliness | good |
| Implementation pitfalls | off-by-one, full/empty logic |
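The enqueue/dequeue steps above can be sketched as a small Python class (the names f, r, s mirror the text; an illustrative sketch, not a production implementation):

```python
class RingQueue:
    """Bounded FIFO queue on a circular array; O(1) enqueue/dequeue."""
    def __init__(self, capacity):
        self.A = [None] * capacity
        self.C = capacity
        self.f = 0    # front index: next position to read
        self.r = 0    # rear index: next position to write
        self.s = 0    # size: disambiguates f == r (empty vs full)

    def enqueue(self, x):
        if self.s == self.C:
            raise IndexError("queue is full")   # or resize instead
        self.A[self.r] = x
        self.r = (self.r + 1) % self.C          # wrap around with modulo
        self.s += 1

    def dequeue(self):
        if self.s == 0:
            raise IndexError("dequeue from empty queue")
        x = self.A[self.f]
        self.A[self.f] = None                   # optional: clear the slot
        self.f = (self.f + 1) % self.C
        self.s -= 1
        return x
```

Replaying the worked example later in this tutorial (capacity 5; enqueue 1, 2, 3; dequeue twice; enqueue 4, 5, 6) ends with f = 2, r = 1, s = 4, matching the trace.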
If you need:
- a bounded queue with predictable memory and cache-friendly access, choose the circular array
- unbounded growth without resizing or capacity planning, choose the linked list
In real systems, you’ll often see both depending on constraints.
Queues appear whenever you have a mismatch between:
- the rate at which work arrives (production)
- the rate at which work is handled (consumption)
A queue buffers the difference, and FIFO provides a fairness policy that users and systems expect.
Imagine a single worker processing jobs: requests can arrive faster than the worker finishes them, so pending jobs wait in a queue.
A FIFO queue ensures earlier jobs are not starved by later jobs.
Examples:
Network packets arrive; your application consumes them.
Similar buffering exists in:
- keyboard and mouse event queues
- disk and network I/O buffers
- message queues between services
BFS explores a graph “layer by layer.”
The reason a queue is the right tool: nodes discovered earlier are dequeued earlier, so all nodes at distance k are processed before any node at distance k + 1.
Even if you haven’t studied graphs yet, the pattern is:
1) enqueue the start
2) repeatedly dequeue the next node to process
3) enqueue newly discovered neighbors
FIFO is what keeps the exploration in layers.
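The three-step pattern can be sketched in Python with `collections.deque`, assuming the graph is represented as an adjacency dict (node → list of neighbors):

```python
from collections import deque

def bfs_order(graph, start):
    """Visit nodes layer by layer; return them in BFS order."""
    visited = {start}
    order = []
    q = deque([start])              # 1) enqueue the start
    while q:
        node = q.popleft()          # 2) dequeue the next node to process
        order.append(node)
        for nb in graph.get(node, []):
            if nb not in visited:   # 3) enqueue newly discovered neighbors
                visited.add(nb)
                q.append(nb)
    return order

# Hypothetical graph: B and C (layer 1) are visited before D (layer 2).
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs_order(graph, "A"))   # ['A', 'B', 'C', 'D']
```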
A classic architecture is producer-consumer:
- one or more producers enqueue work items
- one or more consumers dequeue and process them
A queue decouples them so producer and consumer can run at different speeds.
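A minimal sketch of that decoupling using Python's thread-safe `queue.Queue` (the sentinel-based shutdown is one common convention, not the only one):

```python
import queue
import threading

buf = queue.Queue()        # thread-safe FIFO between producer and consumer

def producer():
    for job in range(5):
        buf.put(job)       # enqueue; producer never waits on the consumer
    buf.put(None)          # sentinel: no more jobs

def consumer(results):
    while True:
        job = buf.get()    # dequeue; blocks until a job is available
        if job is None:
            break
        results.append(job)

results = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(results,))
t1.start(); t2.start()
t1.join(); t2.join()
# FIFO guarantees jobs are consumed in production order: [0, 1, 2, 3, 4]
```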
When used in real code, queue operations must define what happens when:
- dequeue is called on an empty queue (underflow)
- enqueue is called on a full, bounded queue (overflow)
This matters for correctness, not just performance.
Queues are a “building block” structure: once you trust FIFO ordering and O(1) operations, you can build larger systems (schedulers, simulations, graph algorithms) with simple, predictable behavior.
Start with an empty queue Q. Perform:
1) enqueue(Q, 10)
2) enqueue(Q, 20)
3) enqueue(Q, 30)
4) dequeue(Q)
5) enqueue(Q, 40)
6) dequeue(Q)
7) dequeue(Q)
What values are returned by the dequeues, and what remains in Q at the end?
Initial state: Q = [ ] (empty)
Front = Rear (conceptually), size = 0
1) enqueue(Q, 10)
Q = [10]
Front element is 10, rear element is 10
2) enqueue(Q, 20)
Q = [10, 20]
Front is 10 (oldest), rear is 20 (newest)
3) enqueue(Q, 30)
Q = [10, 20, 30]
Front is 10, rear is 30
4) dequeue(Q)
By FIFO, return the oldest element: 10
Now Q = [20, 30]
5) enqueue(Q, 40)
Insert at rear
Q = [20, 30, 40]
6) dequeue(Q)
Return front: 20
Now Q = [30, 40]
7) dequeue(Q)
Return front: 30
Now Q = [40]
Final: dequeued values were 10, then 20, then 30
Remaining queue contents: [40] (front and rear are both 40)
Insight: FIFO is easiest to verify by labeling items as “older” vs “newer.” Enqueue never changes the order of older items; dequeue always removes the oldest remaining item.
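The trace above can be double-checked mechanically with `collections.deque`:

```python
from collections import deque

q = deque()
q.append(10); q.append(20); q.append(30)   # steps 1-3
returned = [q.popleft()]                   # step 4: dequeues 10
q.append(40)                               # step 5
returned.append(q.popleft())               # step 6: dequeues 20
returned.append(q.popleft())               # step 7: dequeues 30

print(returned)   # [10, 20, 30]
print(list(q))    # [40]
```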
Implement Q using an array A of capacity C = 5 with front index f, rear index r, and size s. Start empty:
f = 0, r = 0, s = 0.
Perform:
enqueue 1, enqueue 2, enqueue 3, dequeue, dequeue, enqueue 4, enqueue 5, enqueue 6.
Track (f, r, s) and the logical queue contents after each step.
Start:
A = [_, _, _, _, _]
f = 0, r = 0, s = 0
Logical Q = [ ]
enqueue 1:
Write A[r]=A[0]=1
r = (0+1) mod 5 = 1
s = 1
A = [1, _, _, _, _]
Logical Q = [1] (front at f=0)
enqueue 2:
A[1]=2
r = (1+1) mod 5 = 2
s = 2
A = [1, 2, _, _, _]
Logical Q = [1, 2]
enqueue 3:
A[2]=3
r = 3
s = 3
A = [1, 2, 3, _, _]
Logical Q = [1, 2, 3]
dequeue:
Read A[f]=A[0]=1 (return 1)
f = (0+1) mod 5 = 1
s = 2
A = [1, 2, 3, _, _] (optional clear A[0])
Logical Q = [2, 3]
dequeue:
Read A[1]=2 (return 2)
f = (1+1) mod 5 = 2
s = 1
Logical Q = [3]
enqueue 4:
Write A[r]=A[3]=4
r = (3+1) mod 5 = 4
s = 2
A = [1, 2, 3, 4, _]
Logical Q = [3, 4] (front at index 2)
enqueue 5:
Write A[4]=5
r = (4+1) mod 5 = 0 (wrap around)
s = 3
A = [1, 2, 3, 4, 5]
Logical Q = [3, 4, 5]
enqueue 6:
Write A[0]=6 (this slot is free logically because f=2 and s=3)
r = (0+1) mod 5 = 1
s = 4
A = [6, 2, 3, 4, 5]
Logical Q reading from f=2 for s=4 items:
Indices: 2,3,4,0 → values [3,4,5,6]
Insight: A ring buffer reuses array slots by moving indices, not elements. The physical array may look “out of order,” but the logical order is defined by (f, s) and modular indexing.
You implement a queue using a singly linked list with head and tail pointers. Show what happens to head and tail when you:
1) enqueue(Q, 'A') into an empty queue
2) dequeue(Q)
Focus on the pointer updates and the special empty-case handling.
Initial state:
head = null
tail = null
Q is empty
1) enqueue(Q, 'A'):
Create node n with n.value='A', n.next=null
Because tail is null (empty queue), set:
head = n
tail = n
Now head and tail both point to the same single node
Queue state:
head → ['A'] → null
↑
tail
2) dequeue(Q):
Check head != null (ok)
Return value = head.value = 'A'
Move head = head.next = null
Now the queue is empty again, so also set tail = null
Final state:
head = null
tail = null
Returned 'A'
Insight: In a linked-list queue, the only tricky case is transitioning to or from empty. When the last element is removed, you must update both head and tail to null.
A queue is an ADT defined by FIFO ordering: the earliest enqueued element is the earliest dequeued element.
enqueue(Q, x) inserts at the rear; dequeue(Q) removes and returns the front.
A naive array queue that shifts elements on dequeue costs O(n) per dequeue and should be avoided.
A linked list with both head and tail pointers supports enqueue and dequeue in O(1).
A circular array (ring buffer) supports enqueue and dequeue in O(1) using indices and modulo arithmetic.
To avoid ambiguity between empty and full in an array queue, track size s (or use a “one empty slot” convention).
Queues naturally model fairness and buffering in real systems: schedulers, I/O, networking, and BFS.
Implementing dequeue on an array by removing index 0 and shifting everything left (O(n) per operation).
Forgetting the empty-case updates: after removing the last node in a linked-list queue, not setting tail = null.
In a circular array, confusing the meaning of front and rear indices (e.g., rear pointing at last element instead of next write position).
Not defining behavior for underflow (dequeue on empty) or overflow (enqueue on full) in a bounded queue.
You have Q initially empty. Perform: enqueue 5, enqueue 7, dequeue, enqueue 9, dequeue, enqueue 11. What are the dequeued values and what is left in Q (front→rear)?
Hint: Write the queue contents after each operation. Dequeue always removes the current front.
Start Q=[]
enq 5 → [5]
enq 7 → [5,7]
deq → returns 5, Q=[7]
enq 9 → [7,9]
deq → returns 7, Q=[9]
enq 11 → [9,11]
Dequeued values: 5, 7
Remaining Q (front→rear): [9, 11]
Circular array queue with capacity C=4. Start f=0, r=0, s=0. Do: enqueue A, enqueue B, enqueue C, dequeue, enqueue D, enqueue E. Give final (f, r, s) and the logical queue order. Assume enqueue on full is not allowed—check whether it becomes full.
Hint: Update r on enqueue, f on dequeue, and s on both. Wrap with mod 4. Full means s=C.
Start f=0,r=0,s=0
Enq A: write at r=0, r=1, s=1
Enq B: write at r=1, r=2, s=2
Enq C: write at r=2, r=3, s=3
Deq: return at f=0, f=1, s=2
Enq D: write at r=3, r=(3+1) mod 4=0, s=3
Enq E: write at r=0, r=1, s=4 (now full)
Final: f=1, r=1, s=4
Logical order from f=1 for 4 items: indices 1,2,3,0 → [B, C, D, E]
Design question: You need a queue for a high-throughput system where memory allocations are expensive and you know an upper bound of 1,000,000 elements. Would you choose a linked-list queue or a circular-array queue? Justify in terms of time and space.
Hint: Think about pointer overhead and per-operation allocations vs a fixed contiguous buffer.
A circular-array queue is a strong choice here. Because you know a hard upper bound, you can allocate an array of capacity 1,000,000 once, avoiding per-enqueue node allocations. Enqueue/dequeue remain O(1) via index updates, and memory overhead is low (no next pointers). A linked-list queue would also have O(1) operations, but it requires allocating a node per element (allocation overhead) and adds pointer memory overhead plus worse cache locality.