Bubble sort, insertion sort, selection sort: the basic O(n²) algorithms.
Sorting is where “algorithm thinking” becomes tangible: you can watch order emerge from repeated, tiny decisions—compare, swap, shift—until the whole array becomes predictable to scan, search, and reason about.
Basic O(n²) sorting algorithms repeatedly compare elements and move them into place. Bubble sort grows a sorted region by swapping adjacent out-of-order pairs; selection sort grows it by repeatedly selecting the minimum; insertion sort grows it by inserting the next element into an already-sorted prefix via shifts. All rely on clear loop invariants and simple in-place operations like swap(i,j).
Sorting is the simplest setting where you practice three core algorithm skills: stating a loop invariant, moving elements in place (swaps and shifts), and counting the operations that dominate cost.
Even if you later use faster algorithms (merge sort, quicksort, heapsort), these basics give you intuition for reasoning with invariants, weighing comparisons against writes, and recognizing when a “slow” algorithm is the right tool (small or nearly sorted inputs).
You have an array A of length n with elements that can be compared. A sorting algorithm rearranges the elements so that:
A[0] ≤ A[1] ≤ … ≤ A[n−1]
We’ll assume ascending order and a comparison operator “≤”. The learner already knows arrays and Big-O; here we focus on mechanics.
We’ll use the key primitive swap(i, j), which exchanges A[i] and A[j].
This is the basic “relocate in-place” tool in bubble and selection sort.
Insertion sort typically uses shifts (copying an element one position right) and then writes one saved element into a gap. Shifting is still in-place, but it’s not symmetric like swap.
All three algorithms do “many” comparisons. In the average/worst case they do on the order of:
∑ from k=1 to n−1 of k = (n−1)n/2 ≈ n²/2
So time grows roughly with n²: doubling n makes the work about 4×.
If elements have equal keys (say, two records with the same score), a stable sort preserves their original relative order.
Stability matters when sorting by multiple keys (e.g., sort by first name, then stable-sort by last name: the result is ordered by last name, with ties broken by first name).
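This multi-key pattern can be sketched in Python (the name records below are made up for illustration; Python’s built-in sorted is stable):

```python
people = [("Smith", "Alice"), ("Jones", "Carol"), ("Smith", "Bob")]

# Step 1: sort by the secondary key (first name).
by_first = sorted(people, key=lambda p: p[1])
# Step 2: stable-sort by the primary key (last name); equal last names
# keep the first-name order established in step 1.
by_last_then_first = sorted(by_first, key=lambda p: p[0])

print(by_last_then_first)
# [('Jones', 'Carol'), ('Smith', 'Alice'), ('Smith', 'Bob')]
```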
Each algorithm maintains a loop invariant describing a region that is already sorted (for bubble and selection sort, that region is also in its final position; insertion sort’s sorted prefix may still shift).
That invariant is your correctness anchor: if the region is correct and it grows until it covers the whole array, the algorithm works.
Sorting is not only about comparing; it’s about how you move elements once you know something is out of place.
Two common strategies: swap a pair of elements, or shift a run of elements one position.
Given indices i and j, a swap goes through a temporary: tmp = A[i]; A[i] = A[j]; A[j] = tmp.
That’s 3 assignments (writes) for one swap.
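As a Python sketch (using an explicit temporary rather than tuple assignment, to make the three writes visible):

```python
def swap(A, i, j):
    """Exchange A[i] and A[j] in place using a temporary: three writes."""
    tmp = A[i]   # write 1 (to the temporary)
    A[i] = A[j]  # write 2
    A[j] = tmp   # write 3

A = [5, 1, 4]
swap(A, 0, 1)
print(A)  # [1, 5, 4]
```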
Suppose you want to insert a key into a sorted prefix A[0..i−1]. You: save key = A[i]; shift each prefix element greater than key one position right; then write key into the gap that opens.
This can move key far left with many shifts, but avoids repeated swaps (each shift is one write, versus three for a swap).
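The shift-and-insert step might look like this in Python (the helper name insert_into_sorted_prefix is ours, for illustration):

```python
def insert_into_sorted_prefix(A, i):
    """Insert A[i] into the sorted prefix A[0..i-1] by shifting right."""
    key = A[i]               # save first: the shifts below overwrite A[i]
    j = i - 1
    while j >= 0 and A[j] > key:
        A[j + 1] = A[j]      # shift one position right (a single write)
        j -= 1
    A[j + 1] = key           # drop the saved element into the gap

A = [2, 4, 5, 7, 6]
insert_into_sorted_prefix(A, 4)
print(A)  # [2, 4, 5, 6, 7]
```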
When analyzing O(n²) sorts, it helps to separate comparisons (how often you ask whether A[x] ≤ A[y]) from writes/moves (how often you change the array).
Why? Two algorithms can both be O(n²) in comparisons yet behave very differently in practice.
| Algorithm | Main movement | Comparisons (worst) | Writes (typical) | Stable? |
|---|---|---|---|---|
| Bubble | adjacent swaps | ≈ n²/2 | high (many swaps) | yes |
| Selection | swap min forward | ≈ n²/2 | low (≤ n swaps) | no |
| Insertion | shifts then insert | ≈ n²/2 | moderate; good on nearly sorted | yes |
All three can be proven correct by a growing sorted region. But performance depends on how much disruption the remaining unsorted region causes.
A sorting algorithm is repetitive. Without a clear invariant, it’s easy to get lost in indices and off-by-one errors.
A loop invariant is a statement that is true:
1) before an iteration,
2) preserved by the iteration,
3) and strong enough that when the loop ends, it implies the array is sorted.
We’ll state a clean invariant for each algorithm.
Repeatedly compare adjacent elements and swap if out of order. This “bubbles” large elements to the right.
After pass p (0-based), the last p elements are in sorted order and are the p largest elements of the array.
So the sorted region is a suffix that grows from the right.
For i from 0 to n−2: if A[i] > A[i+1], swap(i, i+1).
Repeat for shorter and shorter ranges: each pass can stop one position earlier, because the suffix is already final.
During a single pass, the maximum element in the unsorted prefix moves right one step at a time until it reaches the end of that prefix. So after one full pass, the maximum is in its final place at the end. Repeat.
Track whether any swap happened during a pass.
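Putting the pass structure and the early-exit flag together, a minimal Python sketch:

```python
def bubble_sort(A):
    """Bubble sort with an early-exit flag: O(n) best case on sorted input."""
    n = len(A)
    for end in range(n - 1, 0, -1):   # unsorted region is A[0..end]
        swapped = False
        for i in range(end):          # compare adjacent pairs
            if A[i] > A[i + 1]:       # strict > keeps equal keys stable
                A[i], A[i + 1] = A[i + 1], A[i]
                swapped = True
        if not swapped:               # a clean pass means we're done
            break
    return A

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```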
Repeatedly select the minimum element from the unsorted region and swap it into the next position in the front.
After iteration k, the first k elements are the k smallest elements of the array, in sorted order.
So the sorted region is a prefix that grows from the left.
For pos from 0 to n−2: find minIdx, the index of the smallest element in A[pos..n−1], then swap(pos, minIdx).
At each step, you place the smallest remaining element into position pos, which is exactly where it belongs in the fully sorted array.
If there are equal keys, selecting a minimum far to the right and swapping forward can leapfrog equal elements.
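A minimal Python sketch of the selection loop described above:

```python
def selection_sort(A):
    """Selection sort: at most n-1 swaps, but not stable in general."""
    n = len(A)
    for pos in range(n - 1):
        min_idx = pos
        for j in range(pos + 1, n):   # scan the unsorted region A[pos..n-1]
            if A[j] < A[min_idx]:     # strict < picks the first minimum
                min_idx = j
        if min_idx != pos:
            A[pos], A[min_idx] = A[min_idx], A[pos]
    return A

print(selection_sort([4, 1, 3, 1]))  # [1, 1, 3, 4]
```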
Grow a sorted prefix one element at a time by inserting the next element into the correct position in the prefix.
Before processing index i, the prefix A[0..i−1] is sorted.
Then you take A[i] (call it key) and insert it so that A[0..i] becomes sorted.
For i from 1 to n−1: save key = A[i], shift each element of the sorted prefix that is greater than key one position right, then write key into the gap.
Because A[0..i−1] is sorted, you scan right to left; at the first position j with A[j] ≤ key, everything you passed has already been shifted by 1, and key is placed at j+1.
If the array is already sorted, the inner while condition fails immediately each time: each element costs one comparison and zero shifts, so the whole run is O(n).
That is why insertion sort is fast on nearly sorted data.
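The full loop, sketched in Python:

```python
def insertion_sort(A):
    """Insertion sort: grows a sorted prefix A[0..i-1] one element at a time."""
    for i in range(1, len(A)):
        key = A[i]                    # save before shifting overwrites it
        j = i - 1
        while j >= 0 and A[j] > key:  # strict > keeps equal keys stable
            A[j + 1] = A[j]           # shift right by one
            j -= 1
        A[j + 1] = key                # write key into the gap
    return A

print(insertion_sort([7, 3, 5, 2]))  # [2, 3, 5, 7]
```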
All three do nested loops. In the worst case:
Comparisons = ∑ from m=1 to n−1 of m
= 1 + 2 + … + (n−1)
= (n−1)n/2
≈ n²/2
So worst-case time is O(n²).
Writes differ:
Bubble sort can perform ≈ n²/2 swaps (3 writes each) in the worst case.
Selection sort performs at most n−1 swaps, regardless of input.
Insertion sort performs up to ≈ n²/2 shifts (1 write each), but very few on nearly sorted input.
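The write counts can be checked empirically by instrumenting the sorts with counters; a rough sketch, counting a swap as 3 writes and a shift as 1 (the counting helpers are illustrative):

```python
def count_writes_selection(data):
    """Element writes done by selection sort (count each swap as 3 writes)."""
    A, writes = list(data), 0
    n = len(A)
    for pos in range(n - 1):
        min_idx = min(range(pos, n), key=lambda j: A[j])
        if min_idx != pos:
            A[pos], A[min_idx] = A[min_idx], A[pos]
            writes += 3
    return writes

def count_writes_insertion(data):
    """Element writes done by insertion sort (each shift is 1 write)."""
    A, writes = list(data), 0
    for i in range(1, len(A)):
        key, j = A[i], i - 1
        while j >= 0 and A[j] > key:
            A[j + 1] = A[j]
            writes += 1
            j -= 1
        A[j + 1] = key
        writes += 1
    return writes

reversed_input = [5, 4, 3, 2, 1]   # worst case for insertion sort
print(count_writes_selection(reversed_input))  # 6  (two swaps)
print(count_writes_insertion(reversed_input))  # 14 (10 shifts + 4 inserts)
```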
Even though O(n²) sounds “slow,” these algorithms show up in practice because they have tiny constant factors, no recursion overhead, and good cache behavior on small arrays, and because insertion sort is near-linear on nearly sorted input.
A common pattern: a fast O(n log n) algorithm partitions the problem, and for small subarrays, it switches to insertion sort because constant factors dominate.
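That hybrid pattern might be sketched like this, assuming a simple Hoare-style partition and an arbitrary illustrative cutoff of 16 (real libraries tune this constant empirically):

```python
CUTOFF = 16  # illustrative threshold; not a universal constant

def insertion_sort_range(A, lo, hi):
    """Insertion sort restricted to the slice A[lo..hi] (inclusive)."""
    for i in range(lo + 1, hi + 1):
        key, j = A[i], i - 1
        while j >= lo and A[j] > key:
            A[j + 1] = A[j]
            j -= 1
        A[j + 1] = key

def hybrid_sort(A, lo=0, hi=None):
    """Quicksort-style recursion that hands small ranges to insertion sort."""
    if hi is None:
        hi = len(A) - 1
    if hi - lo + 1 <= CUTOFF:          # small subarray: constants dominate
        insertion_sort_range(A, lo, hi)
        return A
    pivot = A[(lo + hi) // 2]
    i, j = lo, hi
    while i <= j:                      # simple Hoare-style partition
        while A[i] < pivot:
            i += 1
        while A[j] > pivot:
            j -= 1
        if i <= j:
            A[i], A[j] = A[j], A[i]
            i += 1
            j -= 1
    hybrid_sort(A, lo, j)
    hybrid_sort(A, i, hi)
    return A

print(hybrid_sort(list(range(30, 0, -1))) == list(range(1, 31)))  # True
```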
| Situation | Best pick | Why |
|---|---|---|
| Very small arrays | Insertion | low overhead, good cache behavior |
| Nearly sorted input | Insertion | few shifts; close to O(n) |
| Want minimal writes (flash / expensive writes) | Selection | O(n) swaps |
| Need stable sorting by multiple keys | Insertion or Bubble | stability preserved |
| Teaching / visualizing adjacent swaps | Bubble | easiest to “see” |
When your sort fails, invariants tell you where to look.
You can literally assert these conditions in code while learning.
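For example, a learning version of bubble sort might assert the suffix invariant after every pass (the asserts are for study, not production code):

```python
def bubble_sort_checked(A):
    """Bubble sort that asserts the suffix invariant after every pass."""
    n = len(A)
    for done in range(n):              # `done` passes completed so far
        for i in range(n - 1 - done):
            if A[i] > A[i + 1]:
                A[i], A[i + 1] = A[i + 1], A[i]
        # Invariant part 1: the last done+1 elements are sorted...
        suffix = A[n - 1 - done:]
        assert suffix == sorted(suffix)
        # ...and part 2: every remaining element is no larger than the suffix.
        assert all(x <= suffix[0] for x in A[:n - 1 - done])
    return A

print(bubble_sort_checked([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```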
These algorithms build intuition for loop invariants, in-place movement, the comparisons-versus-writes distinction, and stability: ideas that carry over to faster sorts.
If you understand how a sorted region grows and why each move is safe, you’re ready to learn faster sorts without treating them as magic.
Sort A = [5, 1, 4, 2, 8] in ascending order using bubble sort. Show each pass and every swap(i, i+1).
Pass 1 (compare adjacent pairs through indices 0..3):
compare (0,1): 5 > 1 ⇒ swap ⇒ A = [1, 5, 4, 2, 8]
compare (1,2): 5 > 4 ⇒ swap ⇒ A = [1, 4, 5, 2, 8]
compare (2,3): 5 > 2 ⇒ swap ⇒ A = [1, 4, 2, 5, 8]
compare (3,4): 5 ≤ 8 ⇒ no swap
End pass 1: last element is the maximum of the array → suffix [8] is sorted and final.
Pass 2 (compare adjacent pairs through indices 0..2):
compare (0,1): 1 ≤ 4 ⇒ no swap
compare (1,2): 4 > 2 ⇒ swap ⇒ A = [1, 2, 4, 5, 8]
compare (2,3): 4 ≤ 5 ⇒ no swap
End pass 2: suffix [5, 8] is sorted and final.
Pass 3 (compare adjacent pairs through indices 0..1):
compare (0,1): 1 ≤ 2 ⇒ no swap
compare (1,2): 2 ≤ 4 ⇒ no swap
No swaps occurred in this pass ⇒ array is sorted; we can stop early.
Insight: Bubble sort’s core correctness story is: one full pass guarantees the current maximum reaches its final position at the end, so the sorted suffix grows by 1 each pass. The “no swaps” optimization turns best case into O(n).
Sort A = [7, 3, 5, 2] using insertion sort. Show shifts and the loop invariant that A[0..i−1] is sorted.
Start: A = [7, 3, 5, 2]
Invariant before i=1: prefix A[0..0] = [7] is sorted.
i = 1, key = 3, j = 0
Check A[j] > key? A[0] = 7 > 3, yes ⇒ shift: A = [7, 7, 5, 2]; j becomes −1
Insert key at j+1 = 0: A = [3, 7, 5, 2]
Now A[0..1] = [3, 7] is sorted.
i = 2, key = 5, j = 1
A[1] = 7 > 5, yes ⇒ shift: A = [3, 7, 7, 2]; j becomes 0
A[0] = 3 > 5? no ⇒ stop shifting
Insert key at j+1 = 1: A = [3, 5, 7, 2]
Now A[0..2] = [3, 5, 7] is sorted.
i = 3, key = 2, j = 2
A[2]=7 > 2 ⇒ shift → A = [3, 5, 7, 7]
A[1]=5 > 2 ⇒ shift → A = [3, 5, 5, 7]
A[0]=3 > 2 ⇒ shift → A = [3, 3, 5, 7]
j becomes -1
Insert key at 0:
A = [2, 3, 5, 7]
Now entire array sorted.
Insight: Insertion sort is “sorting by maintaining a promise”: before each insertion, the left prefix is sorted. Shifts preserve that order, and inserting key in the first valid position restores the invariant for a larger prefix.
Sort A = [4, 1, 3, 1] using selection sort. Show each min selection and swap(pos, minIdx).
pos = 0: scan A[0..3] = [4,1,3,1]
Minimum is 1 at minIdx = 1 (or 3; standard implementation picks first min found).
swap(0,1) ⇒ A = [1,4,3,1]
Invariant: A[0] is the smallest element and fixed.
pos = 1: scan A[1..3] = [4,3,1]
Minimum is 1 at minIdx = 3
swap(1,3) ⇒ A = [1,1,3,4]
Invariant: A[0..1] are the 2 smallest elements sorted.
pos = 2: scan A[2..3] = [3,4]
Minimum is 3 at minIdx = 2
swap(2,2) does nothing ⇒ A unchanged.
The single remaining element A[3] = 4 is automatically the largest, so the array is sorted.
Insight: Selection sort minimizes swaps (≤ n), but it still performs a full min-scan each step. Also notice how swapping can move an element far—this is why selection sort is usually not stable.
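The leapfrog can be made visible by tagging equal keys (the letter tags below are made up for illustration):

```python
records = [(4, "a"), (1, "b"), (4, "c"), (1, "d")]  # equal keys, tagged

def selection_sort_by_key(A):
    """Selection sort that compares only the numeric key (first field)."""
    n = len(A)
    for pos in range(n - 1):
        min_idx = pos
        for j in range(pos + 1, n):
            if A[j][0] < A[min_idx][0]:
                min_idx = j
        A[pos], A[min_idx] = A[min_idx], A[pos]
    return A

print(selection_sort_by_key(records))
# [(1, 'b'), (1, 'd'), (4, 'c'), (4, 'a')]: the 4s swapped order, so not stable
```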
Sorting means rearranging array elements so A[0] ≤ A[1] ≤ … ≤ A[n−1].
All three basic sorts are built from local comparisons plus in-place relocation via swap(i,j) or shifts.
A loop invariant (a growing sorted prefix/suffix) is the cleanest way to reason about correctness.
Bubble sort grows a sorted suffix by repeated adjacent swaps; an early-exit flag gives O(n) best case.
Selection sort grows a sorted prefix by selecting the minimum from the unsorted region; it uses few swaps but is typically not stable.
Insertion sort grows a sorted prefix by inserting the next element into position using shifts; it is fast on nearly sorted data and stable.
Worst-case comparisons for all three are about (n−1)n/2 ≈ n²/2 ⇒ O(n²).
Algorithm choice can depend on stability needs, write cost, and how sorted the input already is.
Off-by-one loop bounds (e.g., bubble inner loop going to n−1 instead of stopping at the unsorted boundary).
Breaking stability accidentally (e.g., in bubble sort swapping when A[i] ≥ A[i+1] instead of strictly A[i] > A[i+1]).
In insertion sort, forgetting to save key before shifting, which overwrites the value you’re trying to insert.
In selection sort, scanning the wrong range (should start at pos, not 0), which can undo earlier work.
Run one full pass of bubble sort on A = [3, 2, 2, 1]. Show the array after the pass and state what you know is true about the last element.
Hint: Bubble pass compares (0,1), (1,2), (2,3). Swap only when left > right.
Start: [3,2,2,1]
Compare 3 and 2 → swap ⇒ [2,3,2,1]
Compare 3 and 2 → swap ⇒ [2,2,3,1]
Compare 3 and 1 → swap ⇒ [2,2,1,3]
After one full pass, the last element (3) is the maximum and is in its final position.
Use insertion sort logic to insert key = 6 into the sorted prefix [2, 4, 5, 7] (think of it as A[0..3] sorted and key is A[4]). Show shifts and the final array.
Hint: Shift elements that are > 6 one position right until you find an element ≤ 6.
Prefix sorted: [2,4,5,7], key=6
Start j at index of 7. Since 7 > 6, shift it right:
[2,4,5,7,7]
Now compare 5 to 6: 5 > 6? no, stop.
Insert key at position after 5:
[2,4,5,6,7]
Selection sort does (n−1) + (n−2) + … + 1 comparisons in the worst case. Derive a closed form and show it is O(n²).
Hint: Use the arithmetic series sum 1 + 2 + … + (n−1) = (n−1)n/2.
Worst-case comparisons:
C(n) = (n−1) + (n−2) + … + 1
Rewrite in increasing order:
C(n) = 1 + 2 + … + (n−1)
Use arithmetic series formula:
C(n) = (n−1)n/2
Asymptotically, (n−1)n/2 = (n² − n)/2 ≈ n²/2, so C(n) ∈ O(n²).
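A quick empirical check of this count, instrumenting the inner scan with a counter (note the count is the same for every input of size n):

```python
def selection_sort_comparisons(n):
    """Comparisons made by selection sort on n elements (input-independent)."""
    A = list(range(n, 0, -1))
    count = 0
    for pos in range(n - 1):
        min_idx = pos
        for j in range(pos + 1, n):
            count += 1                 # one comparison per inner-loop step
            if A[j] < A[min_idx]:
                min_idx = j
        A[pos], A[min_idx] = A[min_idx], A[pos]
    return count

for n in (5, 10, 100):
    assert selection_sort_comparisons(n) == (n - 1) * n // 2
print("formula matches")
```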