Contiguous memory storing elements by index. O(1) access, O(n) search.
Self-serve tutorial - low prerequisites, straightforward concepts.
You’re writing a tiny image editor. The user clicks at pixel (x=120, y=45) and expects the color to change instantly—no waiting, no “searching” through the whole image. The reason this can feel instantaneous is that the pixels are typically stored in an array (or something array-like), so the program can jump straight to “pixel #k” in O(1) time.
Curiosity challenge (keep this in mind as you read): If arrays are so fast, why is inserting a new element at the very front of an array usually slow?
An array stores elements in a contiguous block of memory and uses an integer index i to access A[i] in constant time by computing its address. This makes reading/writing by index O(1), but operations that require shifting many elements (like inserting at the front) take O(n).
An array is a data structure that stores a fixed number of elements next to each other in memory (contiguously), and lets you refer to each element using an integer index.
If we name an array A, then A[0] is its first element, A[1] its second, and in general A[i] is the element at index i.
That simple indexing idea—A[i]—is the signature move of arrays.
Arrays are foundational because they give you something incredibly powerful: constant-time, O(1), access to any element by its index.
Many other data structures either build on arrays (stacks, heaps) or are compared against arrays (linked lists, hash tables).
Imagine a hallway of lockers, each locker the same size, in a single line. Locker #0, #1, #2, … Each locker holds exactly one value.
If you want locker #120, you don’t start at #0 and walk 120 steps checking each locker. You go straight there because the lockers are in order.
That is the feeling arrays provide: jumping directly to a position.
“Contiguous” means the elements are stored back-to-back in memory addresses:
- base
- base + elementSize
- base + 2*elementSize
- …

This layout is the entire reason arrays can do O(1) indexing.
Indices are integers. Common conventions:

- 0-based indexing (most languages, e.g., C, Java, Python): the first element is A[0]
- 1-based indexing (e.g., MATLAB, Lua): the first element is A[1]
Important: indices must be in range.
If the array length is n, valid indices are typically 0…n−1.
Accessing outside that range is called out-of-bounds. Some languages catch it with an error; lower-level languages might do something worse (like reading random memory).
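To make this concrete, here is a minimal sketch in Python, one of the languages that catches out-of-bounds access with an error (an `IndexError`) rather than reading random memory:

```python
# Python raises IndexError on out-of-bounds access, instead of
# reading arbitrary memory like some lower-level languages might.
a = [10, 20, 30]  # length n = 3, so valid indices are 0..2

print(a[2])  # in range: the last valid index is len(a) - 1 = 2

try:
    a[3]  # out of bounds: index 3 is not in 0..2
except IndexError as e:
    print("caught:", e)
```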
In everyday programming, you’ll see:

- fixed-size arrays (e.g., C arrays, Java’s int[])
- dynamic arrays that grow as needed (e.g., Python’s list, Java’s ArrayList, C++’s std::vector)
In this lesson, the core mechanics are the same: contiguous storage + integer index.
When you write A[i], the computer doesn’t “look for” the i-th element.
Instead, it computes the memory address directly.
Let:

- base(A) be the memory address where A[0] starts
- s be the size (in bytes) of one element (e.g., 4 bytes for a 32-bit integer)
- i be the index

Then the address of A[i] is:

addr(A[i]) = base(A) + i·s
That’s just multiplication and addition—constant-time arithmetic—so indexing is O(1).
This formula relies on each element having the same size.
That’s why classic arrays store uniform element sizes.
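Because the formula is plain arithmetic, a few lines of Python can mimic it (the `addr` helper and the base address 1000 are illustrative, not real memory addresses):

```python
def addr(base, i, s):
    """Address of A[i]: the base address plus i elements of s bytes each."""
    return base + i * s

# Example: 32-bit integers (s = 4 bytes) starting at address 1000.
print(addr(1000, 0, 4))  # 1000 -> A[0] starts at the base
print(addr(1000, 7, 4))  # 1028 -> one multiply and one add, regardless of n
```

The cost is the same whether the array holds 10 elements or 10 million: the length n never appears in the computation.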
Once the address is computed, reading or writing the element is a single memory access.
So arrays are excellent for:

- lookup tables
- pixel buffers (like the image editor above)
- audio sample buffers
- grids and matrices
Suppose you have a 2D grid with rows and cols. Many systems store it as a 1D array in row-major order.
Let grid[y][x] be the cell at row y, column x.
Then the 1D index is:

i = y·cols + x

So:

grid[y][x] is stored at data[y·cols + x].
This is the same O(1) indexing idea, just with an extra step to compute i.
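A small sketch of the mapping and its inverse (the helper names `to_1d` and `to_2d` are just for illustration):

```python
def to_1d(y, x, cols):
    """Row-major mapping: row y, column x -> flat 1D index."""
    return y * cols + x

def to_2d(i, cols):
    """Inverse mapping: flat 1D index -> (row, column)."""
    return divmod(i, cols)

rows, cols = 3, 4
data = list(range(rows * cols))  # a 3x4 grid flattened into 12 slots

print(to_1d(2, 1, cols))         # 9
print(to_2d(9, cols))            # (2, 1)
print(data[to_1d(2, 1, cols)])   # 9 -- the value stored at grid[2][1] here
```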
Because arrays are contiguous, iterating through them tends to be fast in practice.
Modern CPUs fetch memory in chunks (cache lines). If you access A[0], A[1], A[2], … sequentially, the CPU often already has the next values nearby.
So arrays are not just theoretically efficient—they’re often practically fast.
Arrays are fast at one thing: index-based access.
But many tasks aren’t “give me A[i]”. They’re:

- “does the value x appear anywhere?” (search)
- “add a new element here” (insert)
- “remove this element” (delete)
This is where the tradeoffs show up.
If the array is unsorted and you want to find a particular value x, you usually must scan: check A[0], then A[1], then A[2], and so on until you find x or run out of elements.
In the worst case, x is at the end or not present, so you check n elements.
That’s O(n).
If the array is sorted, you can use binary search, which is O(log n).
But sorting has its own costs (and maintaining sorted order makes insertions harder). This lesson’s baseline claim “O(n) search” assumes no additional structure.
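Both approaches can be sketched in Python; `linear_search` is hand-rolled, while the sorted case leans on the standard library’s `bisect` module:

```python
from bisect import bisect_left

def linear_search(a, x):
    """O(n): scan left to right; return the index of x, or -1 if absent."""
    for i, v in enumerate(a):
        if v == x:
            return i
    return -1

def binary_search(a, x):
    """O(log n) -- but only valid when a is sorted."""
    i = bisect_left(a, x)
    return i if i < len(a) and a[i] == x else -1

a = [10, 20, 30, 40, 50]        # happens to be sorted
print(linear_search(a, 40))     # 3
print(binary_search(a, 40))     # 3
print(linear_search(a, 99))     # -1 (worst case: scanned all n elements)
```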
Recall the question: Why is inserting at the front slow?
Consider inserting a new element at index 0.
If you need to keep the existing order, everything must shift right: A[n−1] moves to index n, A[n−2] to index n−1, and so on, until index 0 is free for the new element.
That’s potentially moving n elements, so insertion at the front is O(n).
If you insert at index k, then elements k…n−1 shift right.
Worst case k=0 → shift all n.
So insertion is O(n) in the worst case.
If you insert at the end (append), you might not need to shift anything.
We’ll keep the basic mental model: order-preserving insertions generally require shifting → O(n).
Deleting A[k] while preserving order means: shifting each of A[k+1] … A[n−1] one position to the left to close the gap.
Worst case: delete near the front → shift almost all elements.
So deletion is typically O(n).
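Both shift-based operations can be sketched with a Python list standing in for a fixed-capacity array (the trailing `None` models spare capacity; `insert_at` and `delete_at` are illustrative helpers, not a standard API):

```python
def insert_at(a, n, k, value):
    """Insert value at index k in a[0:n], shifting a[k:n] right.
    Assumes the backing list has a spare slot at a[n]."""
    for j in range(n, k, -1):    # work from the back to avoid overwriting
        a[j] = a[j - 1]
    a[k] = value
    return n + 1                 # new logical length

def delete_at(a, n, k):
    """Delete a[k] from a[0:n], shifting a[k+1:n] left to close the gap."""
    for j in range(k, n - 1):
        a[j] = a[j + 1]
    return n - 1

a = [10, 20, 30, 40, 50, None]   # capacity 6, logical length 5
n = insert_at(a, 5, 0, 5)
print(a[:n])  # [5, 10, 20, 30, 40, 50] -- 5 shifts, proportional to n

n = delete_at(a, n, 0)
print(a[:n])  # [10, 20, 30, 40, 50] -- shifts everything left, again O(n)
```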
| Operation (unsorted array) | Typical Time | Why |
|---|---|---|
| Read A[i] | O(1) | direct address computation |
| Write A[i] | O(1) | direct address computation |
| Search for value x | O(n) | may need full scan |
| Insert at front/middle | O(n) | shift elements to make room |
| Delete at front/middle | O(n) | shift elements to close gap |
| Iterate through all elements | O(n) | visit each element once |
Arrays store elements with little overhead:

- no per-element pointers or node objects
- just the elements themselves, packed back to back
This compactness is why arrays are often used as the underlying storage for many other structures.
Arrays show up everywhere because they provide:
1) fast indexing
2) cache-friendly iteration
3) simple memory layout
Here are key connections to what you’ll learn next.
A stack (LIFO) often uses a dynamic array underneath:

- push appends to the end
- pop removes from the end
Most operations become O(1) because you only modify the end.
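A minimal array-backed stack, using a Python list as the dynamic array (the class is a sketch, not a library API):

```python
class Stack:
    """LIFO stack on a dynamic array: all work happens at the end,
    so push and pop never shift elements."""

    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)      # append at the end: O(1) amortized

    def pop(self):
        return self._items.pop()   # remove from the end: O(1)

    def peek(self):
        return self._items[-1]     # read the end: O(1) indexing

s = Stack()
s.push(1); s.push(2); s.push(3)
print(s.pop())   # 3 -- last in, first out
print(s.peek())  # 2
```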
A queue (FIFO) can also be built on an array using a circular buffer: head and tail indices wrap around with modulo arithmetic, so nothing ever shifts.
This is an important trick: arrays are bad at removing from the front if you shift, but great if you redesign the indexing.
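A minimal circular-buffer sketch (the class and method names are illustrative):

```python
class CircularQueue:
    """FIFO queue over a fixed-size array. Instead of shifting on
    dequeue, the head index wraps around with modulo arithmetic."""

    def __init__(self, capacity):
        self._buf = [None] * capacity
        self._head = 0      # index of the oldest element
        self._size = 0

    def enqueue(self, x):
        if self._size == len(self._buf):
            raise OverflowError("queue full")
        tail = (self._head + self._size) % len(self._buf)
        self._buf[tail] = x
        self._size += 1

    def dequeue(self):
        if self._size == 0:
            raise IndexError("queue empty")
        x = self._buf[self._head]
        self._head = (self._head + 1) % len(self._buf)  # wrap, don't shift
        self._size -= 1
        return x

q = CircularQueue(3)
q.enqueue("a"); q.enqueue("b"); q.enqueue("c")
print(q.dequeue())  # a -- O(1), no elements moved
q.enqueue("d")      # reuses the slot "a" vacated, thanks to wraparound
print(q.dequeue())  # b
```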
A binary heap is typically stored in an array.
For 0-based indexing:

- the children of node i are at indices 2i+1 and 2i+2
- the parent of node i is at index ⌊(i−1)/2⌋
This is a beautiful example of arrays enabling a tree-like structure without pointers.
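The parent/child index formulas in code, with a tiny min-heap laid out flat:

```python
def parent(i): return (i - 1) // 2
def left(i):   return 2 * i + 1
def right(i):  return 2 * i + 2

# A min-heap stored flat in an array -- no pointers needed:
#        1
#      /   \
#     3     5
#    / \
#   7   9
heap = [1, 3, 5, 7, 9]

print(left(0), right(0))  # 1 2 -> the root's children are heap[1], heap[2]
print(parent(3))          # 1   -> heap[3] (=7) hangs under heap[1] (=3)
```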
Many hash table designs use arrays internally: the table itself is an array of buckets (or slots).
The hash function maps a key to an integer index.
That index is used to jump to a location quickly (again, array indexing).
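A toy sketch of that mapping, with Python’s built-in `hash()` standing in for a real hash function and a plain list of buckets as the backing array:

```python
def bucket_index(key, num_buckets):
    """Map a key to a slot in the backing array via hashing + modulo."""
    return hash(key) % num_buckets

buckets = [[] for _ in range(8)]  # an array of 8 buckets

i = bucket_index("apple", len(buckets))
buckets[i].append(("apple", 3))   # store the entry at the computed slot

# Lookup jumps straight to the same slot -- array indexing again:
j = bucket_index("apple", len(buckets))
print(buckets[j])  # [('apple', 3)]
```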
Graphs can be represented with an adjacency matrix, which is basically a 2D array: matrix[u][v] records whether there is an edge from vertex u to vertex v.
Arrays make that matrix representation possible and efficient to access.
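A small sketch with an assumed edge list:

```python
n = 4  # vertices 0..3
# adjacency matrix: matrix[u][v] == 1 iff there is an edge u -> v
matrix = [[0] * n for _ in range(n)]

edges = [(0, 1), (0, 2), (1, 3)]  # illustrative example graph
for u, v in edges:
    matrix[u][v] = 1

print(matrix[0][1])  # 1 -- edge exists; checked in O(1) by indexing
print(matrix[2][3])  # 0 -- no edge, same O(1) check
```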
Arrays struggle when you need lots of:

- insertions or deletions at the front or middle
- growing and shrinking to unpredictable sizes
In those cases, other structures (linked lists, balanced trees) can help—at the cost of losing O(1) random access.
Arrays are not “best.” They are specific.
And that specificity—fast direct access—makes them the foundation for much of practical computing.
Suppose an integer array A starts at base address 1000 (in bytes). Each integer takes s = 4 bytes. Compute the address of A[7], and explain why this does not depend on the array length n.
Write the indexing address formula: addr(A[i]) = base(A) + i·s.
Substitute the known values base(A)=1000, i=7, s=4: addr(A[7]) = 1000 + 7·4.
Compute the multiplication: 7·4 = 28.
Add to the base: 1000 + 28 = 1028, so A[7] lives at address 1028.
Reason about runtime: this required a constant number of arithmetic operations (one multiply, one add), regardless of whether the array has 10 elements or 10 million.
Insight: Array indexing is fast because it’s address computation, not searching. The computer “jumps” directly to the right memory location.
You have an array A of length 5: A = [10, 20, 30, 40, 50]. You want to insert 5 at index 0 and keep the order of existing elements.
Identify what must happen to preserve order: every existing element must move one position to the right.
Show the shift from the back to avoid overwriting values: A[5]=A[4] (50 moves right), A[4]=A[3] (40), A[3]=A[2] (30), A[2]=A[1] (20), A[1]=A[0] (10).
Now index 0 is free. Write the new value: A[0] = 5.
Resulting array:
A = [5, 10, 20, 30, 40, 50]
Count the work: you performed 5 shifts (proportional to n). In general, inserting at the front may require shifting n elements → O(n).
Insight: The array’s contiguity is both the superpower (O(1) indexing) and the limitation (order-preserving inserts/deletes force shifting many elements).
You have a grid with rows = 3 and cols = 4. It’s stored as a 1D array data[] of length 12 in row-major order. Find the 1D index for the cell at (y=2, x=1), and describe the access.
Use the row-major mapping from (y, x) to 1D index i: i = y·cols + x.
Substitute y=2, cols=4, x=1: i = 2·4 + 1.
Compute: i = 8 + 1 = 9.
So grid[2][1] is stored at data[9]. Access is O(1) because it becomes a single array index operation.
Insight: Many “2D arrays” are implemented as 1D arrays with an indexing formula. Arrays plus arithmetic can represent rich structures efficiently.
An array stores elements contiguously in memory, enabling fast sequential and random access patterns.
The core operation A[i] is O(1) because the address is computed as base(A) + i·elementSize.
Arrays excel when you frequently read/write by index (tables, pixel buffers, sample arrays, grids).
Unsorted search is typically O(n) because you may need to scan every element.
Order-preserving insertion/deletion in the front or middle is O(n) due to shifting elements.
Appending to a dynamic array is often amortized O(1), but occasional resizing can be costly.
Arrays are commonly used as the underlying storage for stacks, heaps, hash tables, and adjacency matrices.
Assuming arrays are fast for every operation: O(1) access does not mean O(1) insertion or deletion in the middle.
Off-by-one indexing errors (especially mixing 0-based and 1-based thinking).
Out-of-bounds access: using an index < 0 or ≥ n (can crash or corrupt memory in some languages).
Confusing “search by value” with “access by index”: A[i] is O(1), but finding which i contains x is often O(n).
An array B starts at base address 2000. Each element is 8 bytes. What is the address of B[13]?
Hint: Use addr(B[i]) = base(B) + i·s.
Using addr(B[i]) = base(B) + i·s with base=2000, i=13, s=8: addr(B[13]) = 2000 + 13·8 = 2000 + 104 = 2104.
You have an array of length n. What is the worst-case time complexity to delete the element at index 0 while preserving order? Explain in one or two sentences.
Hint: Think about what happens to all remaining elements after the deletion.
O(n). After removing index 0, every element A[1]…A[n−1] must shift one position left to fill the gap, which is proportional to n.
A 5×6 grid is stored in a 1D array in row-major order. What index i corresponds to (y=3, x=4)? Also, what is the range of valid indices?
Hint: Row-major uses i = y·cols + x. The total length is rows·cols.
cols=6, so i = 3·6 + 4 = 22. The total length is 5·6=30, so valid indices are 0…29.