Scalar value from square matrix. Zero iff singular.
A square matrix doesn’t just move vectors around—it can stretch space, flip orientation, or collapse an entire dimension. The determinant is the single number that summarizes all of that.
det(A) is a scalar attached to a square matrix A that measures signed volume scaling. It is 0 exactly when A collapses space (is singular). It is multiplicative: det(AB) = det(A)det(B), and changes predictably under row operations.
When you treat a matrix A as a linear transformation, it takes shapes in ℝⁿ and maps them somewhere else. A crucial question is: by how much does A scale volumes, and does it preserve or reverse orientation?
The determinant packages these geometric behaviors into a single scalar.
For an n×n matrix A, the determinant det(A) is a real number with three core behaviors (these can be taken as axioms that uniquely characterize it):
1) Multilinear in rows (or columns): det is linear in each row separately, with the other rows held fixed.
2) Alternating: swapping two rows flips the sign (so det = 0 whenever two rows are equal).
3) Normalization: det(I) = 1.
These properties are not random: they are exactly what you’d want from a “signed volume scaling factor.”
Interpret A as acting on vectors in ℝⁿ. Consider the unit square (in 2D) or unit cube (in 3D), or in general the unit n-dimensional parallelepiped.
A matrix A is singular (non-invertible) exactly when det(A) = 0.
Intuition: if A collapses at least one dimension, the n-dimensional volume must collapse to 0.
Formally we’ll connect this to row dependence: det(A) = 0 exactly when the rows (equivalently, the columns) of A are linearly dependent.
For A = [[a, b], [c, d]],
det(A) = ad − bc.
This number is the signed area scaling factor in 2D.
Takeaway: determinants are not merely a computational trick—they are the algebraic encoding of how linear maps scale and orient space.
In practice, you often compute determinants by simplifying a matrix via row operations (the same moves used in Gaussian elimination). The determinant is designed to react predictably to these moves.
This gives you a powerful workflow:
1) Reduce A to an upper triangular form U.
2) Adjust for the row operations you used.
3) Use det(U) = product of diagonal entries.
Below is the rule set you should memorize—because it turns determinant computation into bookkeeping.
| Row operation on A | Effect on det(A) | Why (intuition) |
|---|---|---|
| Swap two rows | det changes sign | Alternating: swapping flips orientation |
| Multiply a row by k | det multiplies by k | Multilinear: scaling one row scales volume |
| Add k·(row i) to row j | det unchanged | Shear: doesn’t change volume |
A subtle but important note: “Add a multiple of one row to another” keeps determinant unchanged, but it can drastically change entries—this is the move that makes elimination so useful.
For an upper triangular matrix U (everything below diagonal is 0),
det(U) = ∏ᵢ uᵢᵢ.
Same for lower triangular.
Why this is true (idea): a triangular matrix scales each basis direction by its diagonal entry and only shears in the remaining directions; shears don’t change volume, so the total volume scaling is the product of the diagonal entries.
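The whole workflow — eliminate, track the bookkeeping, multiply the diagonal — can be sketched directly. This is a minimal illustration rather than a production routine; `det_by_elimination` is a hypothetical helper name, and exact `Fraction` arithmetic is used to sidestep floating-point rounding:

```python
from fractions import Fraction

def det_by_elimination(rows):
    """Determinant via Gaussian elimination with bookkeeping.

    Row swaps flip the sign; adding a multiple of one row to another
    leaves det unchanged; at the end det = sign * product of U's diagonal.
    """
    a = [[Fraction(x) for x in row] for row in rows]  # exact arithmetic
    n = len(a)
    sign = 1
    for col in range(n):
        # Find a nonzero pivot in this column.
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)              # no pivot => singular => det 0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign                    # a swap flips the sign
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            # Row-addition: does not change the determinant.
            a[r] = [a[r][c] - factor * a[col][c] for c in range(n)]
    prod = Fraction(1)
    for i in range(n):
        prod *= a[i][i]                     # triangular: product of diagonal
    return sign * prod
```

For example, `det_by_elimination([[2, 1, 0], [4, 3, 1], [-2, 0, 5]])` reproduces the worked example’s answer of 8.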
To see why some rules make sense, consider two quick consequences of the axioms:
(1) If two rows are equal, det = 0
Swap the two equal rows: the matrix is unchanged, but the alternating property flips the sign of the determinant.
So:
det(A) = −det(A)
⇒ 2 det(A) = 0
⇒ det(A) = 0.
(2) Adding a multiple of one row to another doesn’t change det
Let rows be r₁, …, rⱼ, …, rᵢ, …, rₙ. Replace rⱼ with rⱼ + k rᵢ.
By multilinearity in row j:
det(r₁,…, rⱼ + k rᵢ, …, rₙ)
= det(r₁,…, rⱼ, …, rₙ) + k det(r₁,…, rᵢ, …, rₙ)
But the second determinant has rᵢ appearing twice (once as row i and once in row j’s slot). Two equal rows ⇒ determinant 0. Hence:
det(new) = det(old) + k·0 = det(old).
This is the algebraic reason elimination works so cleanly.
A common, reliable method: eliminate below each pivot using row-addition operations, recording a sign flip for every row swap (and a factor k for every row scaling). At the end, you reach triangular U; det(A) is the accumulated sign and factors times the product of U’s diagonal.
Gaussian elimination tells you whether A has a full set of n nonzero pivots (equivalently, whether A is invertible). The determinant encodes the same information in one number: det(A) ≠ 0 exactly when every pivot is nonzero. So: elimination gives you a procedural test; the determinant gives you a scalar certificate.
The axioms define det(A), but to compute it directly from entries, we need formulas.
For 2×2 and 3×3, explicit closed forms are convenient. For larger n, direct formulas exist (cofactor expansion / Leibniz formula), but computationally they become expensive; elimination is typically preferred.
For A = [[a, b], [c, d]]:
det(A) = ad − bc.
Geometric sanity checks: det(I) = 1 (nothing changes), the diagonal matrix diag(3, 2) has det 6 (areas scale by 3·2), a shear has det 1 (areas preserved), and swapping the two axes gives det −1 (orientation flips while areas are preserved).
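These sanity checks can be run mechanically; `det2` is a throwaway helper name for the 2×2 formula:

```python
def det2(A):
    """2x2 determinant ad - bc: the signed area scaling factor."""
    (a, b), (c, d) = A
    return a * d - b * c

assert det2([[1, 0], [0, 1]]) == 1    # identity: areas unchanged
assert det2([[3, 0], [0, 2]]) == 6    # axis scaling: areas scale by 3 * 2
assert det2([[1, 7], [0, 1]]) == 1    # shear: areas preserved
assert det2([[0, 1], [1, 0]]) == -1   # reflection: orientation flips
```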
Let
A =
[ [a, b, c],
[d, e, f],
[g, h, i] ].
One common expression is:
det(A) = a(ei − fh) − b(di − fg) + c(dh − eg).
Notice the alternating + − + pattern, which comes from the “alternating” nature of det.
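As a quick check, the expansion can be transcribed literally (`det3` is a hypothetical helper name):

```python
def det3(A):
    """3x3 determinant via a(ei - fh) - b(di - fg) + c(dh - eg)."""
    (a, b, c), (d, e, f), (g, h, i) = A
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

assert det3([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) == 1  # identity sanity check
```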
To expand det(A) along row r (or column c), you use minors and cofactors. The minor Mᵣc is the (n−1)×(n−1) matrix obtained by deleting row r and column c from A, and the cofactor is
Cᵣc = (−1)^(r+c) det(Mᵣc).
Then cofactor expansion along row r is:
det(A) = ∑_{c=1 to n} aᵣc Cᵣc.
Similarly, expansion along column c:
det(A) = ∑_{r=1 to n} aᵣc Cᵣc.
Swapping rows/columns changes sign. When you “move” an element aᵣc to the top-left conceptually, you perform (r−1) row swaps and (c−1) column swaps, totaling (r+c−2) swaps. Parity of swaps controls the sign, producing the checkerboard pattern.
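Cofactor expansion translates directly into a recursive sketch; `det_cofactor` is a hypothetical name, expanding along the first row and skipping zero entries:

```python
def det_cofactor(A):
    """Determinant by cofactor expansion along the first row.

    Exponential time in general; fine for small or sparse matrices,
    not for large dense ones.
    """
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for c in range(n):
        if A[0][c] == 0:
            continue  # zero entries contribute nothing
        # Minor: delete row 0 and column c.
        minor = [row[:c] + row[c + 1:] for row in A[1:]]
        # Checkerboard sign (-1)^(0 + c) times entry times minor's det.
        total += (-1) ** c * A[0][c] * det_cofactor(minor)
    return total
```

On the sparse matrix from the third worked example, `det_cofactor([[3, 0, 2], [0, 1, 0], [4, 0, 5]])` gives 7.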
Cofactor expansion is rarely the fastest general method, but it shines when a row or column contains many zeros, or when the matrix is small (2×2 or 3×3). A practical rule: expand along the row or column with the most zeros.
There is also the Leibniz formula:
det(A) = ∑_{σ ∈ Sₙ} sgn(σ) ∏_{i=1 to n} a_{i, σ(i)}.
This makes the “alternating” behavior explicit: each permutation contributes with a sign depending on whether it’s an even or odd permutation.
Why mention it? It makes the sign structure of det fully explicit and serves as the standard theoretical definition against which the axioms can be verified. But computing with it directly costs n! terms—prohibitively expensive beyond small n.
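For tiny matrices the formula can still be transcribed literally, which makes the sign structure concrete (`det_leibniz` and `sgn` are hypothetical names; the sign is computed by counting inversions):

```python
from itertools import permutations

def sgn(perm):
    """Sign of a permutation tuple: +1 if even, -1 if odd."""
    sign = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                sign = -sign  # each inversion flips the sign
    return sign

def det_leibniz(A):
    """Leibniz formula: sum over all n! permutations of sgn * product."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        term = sgn(perm)
        for i in range(n):
            term *= A[i][perm[i]]  # pick entry (i, sigma(i)) from each row
        total += term
    return total
```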
A major theorem:
det(AB) = det(A) det(B).
Interpretation: applying B and then A scales volumes first by det(B), then by det(A); composed scalings multiply.
A powerful corollary: if A is invertible, then det(A) ≠ 0 and det(A⁻¹) = 1/det(A). Why? Because if A is invertible, AA⁻¹ = I.
Take determinants:
det(A) det(A⁻¹) = det(I) = 1
⇒ det(A) ≠ 0.
And conversely, if det(A) ≠ 0, A must be invertible (can be shown via elimination / rank).
Another helpful identity:
det(Aᵀ) = det(A).
So you can work with rows or columns interchangeably.
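Both identities are easy to spot-check numerically with NumPy on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# det(AB) = det(A) det(B): volume scalings compose multiplicatively.
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# det(A^T) = det(A): rows and columns are interchangeable.
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))
```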
In applied linear algebra, det(A) answers fast questions: Is A invertible? Does the map preserve or flip orientation? By what factor do volumes scale?
Important nuance: “det close to 0” is not always a reliable numerical indicator of ill-conditioning by itself (scaling matters), but conceptually it’s a key signal.
If λ₁, …, λₙ are the eigenvalues of A (counted with algebraic multiplicity), then:
det(A) = ∏ᵢ λᵢ.
Why this matters: each eigenvalue is a scaling factor along an eigendirection, so the total volume scaling is their product; in particular, det(A) = 0 exactly when some eigenvalue is 0.
A related identity uses the characteristic polynomial:
p(λ) = det(A − λI).
Roots of p(λ) are eigenvalues. Determinants are literally the engine behind the eigenvalue equation.
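A quick numerical check of det(A) = ∏ᵢ λᵢ on a small sample matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [4.0, 3.0, 1.0],
              [-2.0, 0.0, 5.0]])

eigvals = np.linalg.eigvals(A)  # eigenvalues may be complex in general
# The product of the eigenvalues equals the determinant.
assert np.isclose(np.prod(eigvals), np.linalg.det(A))
```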
Many matrix decompositions expose the determinant cheaply.
LU decomposition (with possible permutation P):
PA = LU
where L is lower triangular (typically with 1s on the diagonal) and U is upper triangular.
Taking det:
det(PA) = det(L) det(U)
det(P) det(A) = det(L) det(U)
So:
det(A) = det(P)⁻¹ det(L) det(U).
Since det(L) is usually 1 (if L has unit diagonal), and det(U) is product of U’s diagonal entries, determinants become almost free once LU is computed.
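To make this concrete without pulling in a factorization routine (in practice a library such as SciPy’s `scipy.linalg.lu` computes P, L, U for you), here are hand-written LU factors of a small matrix, with no row swaps so P = I; the determinant falls out of U’s diagonal:

```python
import numpy as np

# Hand-written unit-lower-triangular L and upper-triangular U.
L = np.array([[1.0, 0.0, 0.0],
              [2.0, 1.0, 0.0],
              [-1.0, 1.0, 1.0]])   # unit diagonal => det(L) = 1
U = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 4.0]])
A = L @ U                           # reconstructs [[2,1,0],[4,3,1],[-2,0,5]]

# det(A) = det(L) * det(U) = 1 * (product of U's diagonal).
det_from_lu = np.prod(np.diag(U))
assert np.isclose(det_from_lu, np.linalg.det(A))
```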
QR decomposition:
A = QR
where Q is orthonormal (QᵀQ = I) and R is upper triangular.
Then:
det(A) = det(Q) det(R).
Key facts: QᵀQ = I forces det(Q)² = 1, so det(Q) = ±1; and det(R) is the product of R’s diagonal entries.
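These facts are easy to verify with NumPy’s built-in QR:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
Q, R = np.linalg.qr(A)

# Q is orthogonal, so det(Q) is +1 or -1.
assert np.isclose(abs(np.linalg.det(Q)), 1.0)

# det(A) = det(Q) * det(R), with det(R) = product of R's diagonal.
assert np.isclose(np.linalg.det(Q) * np.prod(np.diag(R)), np.linalg.det(A))
```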
If a differentiable map has Jacobian matrix J at a point, then |det(J)| gives the local volume scaling factor. This is why determinants appear in integrals under substitution.
Even if you haven’t studied Jacobians yet, keep the mental model: the determinant is the local volume scaling factor of a map.
There is a formula for solving Ax = b using determinants:
xᵢ = det(Aᵢ) / det(A)
where Aᵢ is A with column i replaced by b.
This is elegant theoretically, but computationally inefficient for large systems. Still, it reinforces the central point: det(A) ≠ 0 is exactly the condition for Ax = b to have a unique solution.
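Cramer’s rule is short enough to sketch directly (`cramer_solve` is a hypothetical helper name; it performs n + 1 determinant evaluations, so it is only sensible for small systems):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b via Cramer's rule: x_i = det(A_i) / det(A),

    where A_i is A with column i replaced by b.
    """
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("det(A) = 0: no unique solution")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                       # replace column i by b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])
assert np.allclose(cramer_solve(A, b), np.linalg.solve(A, b))  # x = [1, 3]
```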
Determinants sit at a crossroads: geometry (signed volume and orientation), algebra (invertibility, rank, eigenvalues), and computation (elimination and matrix decompositions).
Once you can compute and reason about det(A), the next nodes—eigenvalues/eigenvectors and decompositions—feel much more motivated and connected.
Compute det(A) for A = [[2, 1, 0], [4, 3, 1], [−2, 0, 5]]. Use row operations to reach an upper triangular matrix.
Start with
A =
[ [ 2, 1, 0 ],
[ 4, 3, 1 ],
[ −2, 0, 5 ] ]
Use row-addition operations (these do not change the determinant):
R₂ ← R₂ − 2R₁
R₃ ← R₃ + R₁
Compute the new rows:
R₂ = [4,3,1] − 2[2,1,0] = [0, 1, 1]
R₃ = [−2,0,5] + [2,1,0] = [0, 1, 5]
Now the matrix is
[ [2, 1, 0],
[0, 1, 1],
[0, 1, 5] ]
Eliminate below the pivot in column 2 using row-addition:
R₃ ← R₃ − R₂
Compute:
R₃ = [0,1,5] − [0,1,1] = [0,0,4]
Now we have an upper triangular matrix U:
U =
[ [2, 1, 0],
[0, 1, 1],
[0, 0, 4] ]
Since we used only row-addition operations, det(U) = det(A). For triangular matrices:
det(U) = 2 · 1 · 4 = 8
Therefore det(A) = 8.
Insight: Row-addition operations preserve det, so elimination can compute det(A) with minimal bookkeeping. Once triangular, the determinant is just the product of the diagonal.
Let B = [[1, 2, 3], [2, 4, 6], [0, 1, 1]]. Decide if B is invertible by computing det(B) efficiently.
Observe the first two rows:
R₂ = 2R₁.
This is immediate linear dependence.
Because the determinant is alternating and multilinear, if two rows are linearly dependent then det(B) = 0.
(Reason: subtracting 2R₁ from R₂ produces a zero row, and a zero row forces det = 0 by multilinearity; geometrically, the volume collapses.)
Conclude det(B) = 0 without further computation.
Therefore B is singular (non-invertible).
Insight: You often don’t need full computation: spotting dependent rows/columns gives det = 0 immediately, which is exactly the “collapse of volume” idea.
Compute det(C) for C = [[3, 0, 2], [0, 1, 0], [4, 0, 5]] using cofactor expansion.
Choose the second row for expansion because it has many zeros: [0, 1, 0].
Cofactor expansion along row 2:
det(C) = ∑_{c=1..3} c₂c C₂c
Only the middle term survives because c₂1 = 0 and c₂3 = 0:
det(C) = 1 · C₂2
Compute the cofactor:
C₂2 = (−1)^(2+2) det(M₂2) = (+1) det(M₂2)
Form the minor M₂2 by deleting row 2 and column 2:
M₂2 = [[3, 2], [4, 5]]
Compute its determinant:
det(M₂2) = 3·5 − 2·4 = 15 − 8 = 7
Therefore det(C) = 7.
Insight: Cofactor expansion is best used strategically: pick a row/column with many zeros to reduce the amount of work.
det(A) is a scalar for square matrices that measures signed volume scaling of the linear map x ↦ Ax.
det(A) = 0 ⇔ A is singular ⇔ rows/columns are linearly dependent ⇔ the transformation collapses dimension.
Row operations have predictable effects: swap → sign flip, scale a row by k → det scales by k, add multiple of another row → det unchanged.
For triangular matrices, det is the product of diagonal entries: det(U) = ∏ᵢ uᵢᵢ.
Cofactor expansion computes det(A) via minors and the checkerboard sign (−1)^(r+c); it’s most useful for sparse matrices.
Multiplicativity det(AB) = det(A)det(B) matches the idea that volume scalings compose by multiplication.
det(Aᵀ) = det(A), so you may reason with rows or columns interchangeably.
Forgetting determinant bookkeeping during elimination (especially row swaps, which multiply det by −1).
Using cofactor expansion on a dense 5×5 (or larger) matrix—this becomes computationally explosive compared to elimination/LU.
Thinking det(A) gives the exact condition number or stability; det near 0 suggests collapse, but numerical conditioning also depends on scaling and singular values.
Mixing up sign patterns in cofactor expansion: the (−1)^(r+c) checkerboard is easy to misapply.
Compute det(A) for A = [[1, 2], [5, 7]]. Interpret the sign.
Hint: Use det([[a,b],[c,d]]) = ad − bc.
det(A) = 1·7 − 2·5 = 7 − 10 = −3. Magnitude 3 means areas scale by 3; negative sign means orientation flips.
Compute det(B) for B = [[1, 1, 1], [2, 3, 4], [0, 1, 2]] using elimination (track any row swaps).
Hint: Use row-addition operations to make it upper triangular; then multiply the diagonal.
Start
B = [[1,1,1],[2,3,4],[0,1,2]].
R₂ ← R₂ − 2R₁ gives R₂ = [0,1,2].
Now rows 2 and 3 are equal: R₂ = [0,1,2], R₃ = [0,1,2]. Two equal rows ⇒ det(B) = 0. So B is singular.
Let A be invertible and 3×3. If det(A) = −2, compute det(5A) and det(A⁻¹).
Hint: Scaling: det(kA) = kⁿ det(A) for n×n. Inverse: det(A)det(A⁻¹) = 1.
Since A is 3×3, det(5A) = 5³ det(A) = 125 · (−2) = −250. Also det(A⁻¹) = 1/det(A) = −1/2.