A set of vectors in which no vector is a linear combination of the others.
Self-serve tutorial - low prerequisites, straightforward concepts.
When you collect vectors, you’re often asking: “Do these vectors actually give me new directions, or are some of them redundant?” Linear independence is the precise way to measure redundancy.
A set of vectors {v₁,…,vₙ} is linearly independent if the only way to make the zero vector from them is the trivial linear combination: c₁v₁ + … + cₙvₙ = 0 implies c₁ = … = cₙ = 0. If there is a nontrivial solution, the set is dependent (some vector is a linear combination of the others).
In a vector space, vectors represent “directions” or “features.” But sets of vectors can contain repetition in disguise: one vector might be obtainable from the others.
Linear independence answers a basic structural question: does each vector add a genuinely new direction, or is some vector redundant?
This matters immediately for: building bases, defining dimension, getting unique coordinate representations, and detecting exactly redundant features in data.
Given vectors v₁,…,vₙ in a vector space V and scalars c₁,…,cₙ, a linear combination is
c₁v₁ + c₂v₂ + … + cₙvₙ.
The special case where all coefficients are zero,
0·v₁ + 0·v₂ + … + 0·vₙ = 0,
is called the trivial linear combination. It always exists, for every set of vectors.
A set of vectors {v₁,…,vₙ} is linearly independent if the only solution to
c₁v₁ + c₂v₂ + … + cₙvₙ = 0
is
c₁ = c₂ = … = cₙ = 0.
If there exists a solution where at least one coefficient is nonzero, then the set is linearly dependent.
Think of c₁v₁ + … + cₙvₙ as trying to “cancel” vectors to land exactly on 0.
A very useful equivalent statement (we’ll justify it carefully later):
{v₁,…,vₙ} is linearly dependent ⇔ at least one vector is a linear combination of the others.
This is the “redundancy detector” version: dependency means one vector can be removed without losing the span.
In ℝ²: (1, 0) and (0, 1) are independent (neither is a multiple of the other), while (1, 2) and (2, 4) are dependent because (2, 4) = 2·(1, 2).
In ℝ³: (1, 0, 0), (0, 1, 0), (0, 0, 1) are independent, while (1, 0, 1), (2, 1, 0), (3, 1, 1) are dependent because the third is the sum of the first two.
The definition of linear independence centers on a single vector equation:
c₁v₁ + … + cₙvₙ = 0.
So the most direct way to test independence is:
1) Write the equation component-wise (or as a matrix equation).
2) Solve for the scalars c₁,…,cₙ.
3) Check whether the only solution is the trivial one.
This method is universal: it works in any vector space where you can express vectors with respect to some coordinates (or where you can otherwise solve the relation).
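If you like to check this computationally, here is a minimal sketch of the direct test using Python with SymPy (the library choice and the example vectors are illustrative, not part of the definition):

```python
import sympy as sp

# Sketch of the direct test: write c1*v1 + c2*v2 = 0 component-wise
# and solve for the scalars. The vectors here are an illustrative pair.
c1, c2 = sp.symbols("c1 c2")
v1 = sp.Matrix([1, 0])
v2 = sp.Matrix([1, 1])

relation = c1 * v1 + c2 * v2                     # one entry per component
solutions = sp.solve(list(relation), [c1, c2], dict=True)

# Only the trivial solution c1 = c2 = 0 exists, so {v1, v2} is independent.
trivial_only = solutions == [{c1: 0, c2: 0}]
```

Swapping in (1, 2) and (3, 6) instead would make `solve` return a one-parameter family of solutions, signaling dependence.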
Suppose v₁,…,vₙ are in ℝᵐ. Put them as columns of a matrix A:
A = [ v₁ v₂ … vₙ ] (an m×n matrix)
Then
c₁v₁ + … + cₙvₙ = 0
is the same as
Ac = 0,
where c = (c₁,…,cₙ) is the coefficient vector.
So: {v₁,…,vₙ} is linearly independent exactly when Ac = 0 has only the trivial solution c = 0.
Row reducing A does not change the solution set of Ac = 0 (it produces an equivalent system), so independence becomes a rank/pivot question: the columns of A are independent if and only if every column contains a pivot, i.e., rank A = n.
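The rank/pivot criterion is easy to sketch in code; here is one way with NumPy (the helper name and the example vectors are illustrative):

```python
import numpy as np

# Sketch of the rank/pivot test: the columns of A are independent exactly
# when rank(A) equals the number of columns (a pivot in every column).
def columns_independent(vectors):
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[1]

independent_pair = columns_independent([np.array([1.0, 0.0]),
                                        np.array([1.0, 1.0])])
dependent_pair = columns_independent([np.array([1.0, 2.0]),
                                      np.array([3.0, 6.0])])   # v2 = 3*v1
```

Note that numerical rank uses a floating-point tolerance; for exact answers on integer or symbolic entries, row reduce with a computer algebra system instead.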
Many learners hear “linear independence” and look for a quick geometric shortcut even when one isn’t available. The safest mental model is:
Independence is about uniqueness of coefficients in the zero relation.
If the zero vector can be produced in more than one way (i.e., with a nonzero coefficient vector), then some nontrivial cancellation exists.
These don’t replace the test, but help you predict outcomes.
1) If one vector is the zero vector
If some vᵢ = 0, then the set is dependent because
1·vᵢ + 0·(all other vectors) = 0
is a nontrivial combination (the coefficient on vᵢ is nonzero).
2) Too many vectors for the ambient dimension
In ℝᵐ, any set of more than m vectors is dependent.
Reason (informal for now, formal later with dimension): you cannot have more than m independent directions in m-dimensional space.
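You can see this shortcut in action: any three vectors in ℝ² leave a free variable in Ac = 0, so the nullspace is nonzero. A sketch with SymPy (the specific columns are an illustrative choice):

```python
import sympy as sp

# Three vectors in R^2 must be dependent: the 2x3 system Ac = 0 has a
# free variable, so its nullspace contains a nonzero coefficient vector.
A = sp.Matrix([[1, 0, 2],
               [0, 1, 3]])          # columns: (1,0), (0,1), (2,3)
null_basis = A.nullspace()          # nonempty => nontrivial zero-relation
c = null_basis[0]                   # one nontrivial coefficient vector
```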
3) Obvious multiples
If v₂ = kv₁ for some scalar k, then
kv₁ − 1·v₂ = 0
is nontrivial ⇒ dependent.
Orthogonal nonzero vectors are always independent, but independence does not require orthogonality.
| Concept | What it constrains | Typical test |
|---|---|---|
| Linear independence | No nontrivial combination equals 0 | Solve Ac=0, pivots |
| Orthogonality | Dot products are 0 between distinct vectors | vᵢ·vⱼ = 0 |
You can have independent vectors that are not orthogonal (common in real data and features).
The definition uses all vectors simultaneously:
c₁v₁ + … + cₙvₙ = 0.
But in practice you often want a more “local” redundancy statement:
That’s exactly what the equivalence gives.
A set {v₁,…,vₙ} is linearly dependent iff at least one vector can be written as a linear combination of the others.
We’ll prove both directions with careful algebra.
Assume the set is dependent.
Then there exist scalars c₁,…,cₙ, not all zero, such that
c₁v₁ + c₂v₂ + … + cₙvₙ = 0.
Because not all coefficients are zero, pick an index k with cₖ ≠ 0.
Now isolate vₖ:
c₁v₁ + … + cₖvₖ + … + cₙvₙ = 0
cₖvₖ = −(c₁v₁ + … + cₖ₋₁vₖ₋₁ + cₖ₊₁vₖ₊₁ + … + cₙvₙ)
vₖ = −(c₁/cₖ)v₁ − … − (cₖ₋₁/cₖ)vₖ₋₁ − (cₖ₊₁/cₖ)vₖ₊₁ − … − (cₙ/cₖ)vₙ.
So vₖ is a linear combination of the other vectors. Redundancy found.
Conversely, assume some vector is a combination of the others. Say
vₖ = a₁v₁ + … + aₖ₋₁vₖ₋₁ + aₖ₊₁vₖ₊₁ + … + aₙvₙ.
Bring everything to one side:
vₖ − a₁v₁ − … − aₖ₋₁vₖ₋₁ − aₖ₊₁vₖ₊₁ − … − aₙvₙ = 0.
This is a linear combination equaling 0 with coefficient 1 on vₖ, so it’s nontrivial. Therefore the set is dependent.
1) If the set is dependent, you can remove at least one vector without changing the span.
Because a dependent vector is already “covered” by the rest.
2) Independence implies uniqueness of representation (relative to that set).
If a vector x can be written as
x = c₁v₁ + … + cₙvₙ
and also as
x = d₁v₁ + … + dₙvₙ,
subtract:
0 = (c₁−d₁)v₁ + … + (cₙ−dₙ)vₙ.
If {vᵢ} are independent, then cᵢ−dᵢ = 0 for all i ⇒ cᵢ = dᵢ.
So the coefficients are unique.
This is a key bridge to coordinates and bases: if a set is a basis, you want every vector to have exactly one coordinate representation.
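The uniqueness argument can be checked numerically. In this NumPy sketch (the vectors and the coefficients (2, −3) are illustrative), recovering the coefficients of x returns exactly the ones we built it with:

```python
import numpy as np

# Independent columns give each vector in their span exactly one
# coefficient representation, so solving for the coefficients of x
# recovers the ones used to construct it.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
A = np.column_stack([v1, v2])

x = 2.0 * v1 - 3.0 * v2                         # built with coefficients (2, -3)
coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)  # full column rank: unique answer
```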
A basis of a vector space is meant to be two things at once: big enough to span the whole space, and small enough to contain no redundancy.
Independence supplies the “no extra” part.
Suppose S = {v₁,…,vₙ} spans a space (or subspace) W. If S is dependent, some vector is a linear combination of the others and can be removed without shrinking the span; repeating this eventually leaves an independent set that still spans W.
So the workflow to find a basis often looks like:
1) Start with some spanning set.
2) Remove dependent vectors (using row reduction / pivot columns).
3) What remains is independent and still spans.
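The steps above can be sketched with SymPy's row reduction, which reports the pivot columns directly (the spanning set here is an illustrative choice with v₃ = v₁ + v₂):

```python
import sympy as sp

# Basis workflow: row-reduce the spanning set (as columns), then keep
# only the pivot columns; they are independent and still span.
A = sp.Matrix([[1, 0, 1],
               [1, 1, 2],
               [0, 1, 1]])               # columns: (1,1,0), (0,1,1), (1,2,1)
_, pivot_cols = A.rref()                 # pivots land in columns 0 and 1
basis = [A.col(j) for j in pivot_cols]   # an independent spanning subset
```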
The independence test Ac = 0 is a homogeneous linear system.
This directly parallels solving Ax = b: the same row-reduction machinery applies, but the right-hand side is 0, so the trivial solution always exists and the real question is whether free variables produce additional solutions.
In data matrices, columns often represent features.
Independence is the clean mathematical form of “no feature is exactly redundant.”
A deep fact (formalized in the next node) is: every basis of a given vector space contains the same number of vectors.
So independence is the counting principle that makes “dimension” meaningful.
If you internalize one guiding sentence:
Independence means “every vector added increases the number of available directions.”
Basis and dimension make that sentence precise.
Let v₁ = (1, 2) and v₂ = (3, 6). Determine whether {v₁, v₂} is linearly independent.
Start from the definition:
c₁v₁ + c₂v₂ = 0.
Write in coordinates:
c₁(1,2) + c₂(3,6) = (0,0).
Expand and combine component-wise:
(c₁ + 3c₂, 2c₁ + 6c₂) = (0,0).
Equate components to get a system:
c₁ + 3c₂ = 0
2c₁ + 6c₂ = 0
Notice the second equation is just 2× the first, so there are infinitely many solutions.
Solve the first:
c₁ = −3c₂.
Pick a nonzero value, e.g. c₂ = 1 ⇒ c₁ = −3.
Then:
(−3)v₁ + 1·v₂ = 0
so the combination is nontrivial.
Insight: The set is dependent because v₂ = 3v₁. In ℝ², two vectors are independent exactly when they are not scalar multiples.
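As a numerical sanity check of this example (a sketch with NumPy), the combination (−3)·v₁ + 1·v₂ lands on zero and the matrix [v₁ v₂] has rank 1, i.e., fewer pivots than columns:

```python
import numpy as np

# Check: the nontrivial combination hits the zero vector, and the
# 2x2 matrix [v1 v2] has rank 1 < 2 columns (dependent).
v1 = np.array([1.0, 2.0])
v2 = np.array([3.0, 6.0])
combo = -3.0 * v1 + 1.0 * v2
rank = np.linalg.matrix_rank(np.column_stack([v1, v2]))
```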
Let v₁ = (1,0,1), v₂ = (2,1,0), v₃ = (3,1,1). Test whether {v₁, v₂, v₃} is linearly independent.
Form the matrix with these as columns:
A = [ v₁ v₂ v₃ ] =
⎡1 2 3⎤
⎢0 1 1⎥
⎣1 0 1⎦
We test Ac = 0. Row-reduce A.
Start with:
⎡1 2 3⎤
⎢0 1 1⎥
⎣1 0 1⎦
Eliminate the 1 under the first pivot (Row3 ← Row3 − Row1):
Row3: (1,0,1) − (1,2,3) = (0, −2, −2)
So we have:
⎡1 2 3⎤
⎢0 1 1⎥
⎣0 −2 −2⎦
Make Row3 simpler (Row3 ← (−1/2)Row3):
Row3 becomes (0,1,1)
⎡1 2 3⎤
⎢0 1 1⎥
⎣0 1 1⎦
Now subtract Row2 from Row3 (Row3 ← Row3 − Row2):
Row3 becomes (0,0,0)
⎡1 2 3⎤
⎢0 1 1⎥
⎣0 0 0⎦
A row of zeros means we have fewer than 3 pivots, so at least one free variable in Ac=0 ⇒ nontrivial solutions exist ⇒ dependent.
Insight: Row3 becoming zero means one column is a linear combination of the others. In fact v₃ = v₁ + v₂ because (1,0,1) + (2,1,0) = (3,1,1).
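Both conclusions of this example can be verified in a few lines of NumPy: the rank is 2 (fewer than 3 columns), and the specific relation v₃ = v₁ + v₂ holds.

```python
import numpy as np

# Check: rank 2 < 3 columns confirms dependence, and v3 = v1 + v2.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([2.0, 1.0, 0.0])
v3 = np.array([3.0, 1.0, 1.0])
rank = np.linalg.matrix_rank(np.column_stack([v1, v2, v3]))
```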
Assume {v₁, v₂, v₃} is linearly independent. Suppose a vector x has two representations:
x = c₁v₁ + c₂v₂ + c₃v₃
x = d₁v₁ + d₂v₂ + d₃v₃.
Show that cᵢ = dᵢ for all i.
Subtract the two equations:
x − x = (c₁v₁ + c₂v₂ + c₃v₃) − (d₁v₁ + d₂v₂ + d₃v₃).
Simplify left side:
0 = (c₁−d₁)v₁ + (c₂−d₂)v₂ + (c₃−d₃)v₃.
This is a linear combination equaling 0.
Because the set is independent, the only solution is the trivial one:
c₁−d₁ = 0
c₂−d₂ = 0
c₃−d₃ = 0.
Therefore:
c₁ = d₁, c₂ = d₂, c₃ = d₃.
Insight: Independence is exactly the condition needed for “coordinates” in a set of vectors to be well-defined (unique). This is why bases must be independent.
Linear independence means: c₁v₁ + … + cₙvₙ = 0 ⇒ c₁ = … = cₙ = 0.
Linear dependence means there exists a nontrivial coefficient choice making the zero vector: some cancellation is possible.
Dependency is equivalent to redundancy: at least one vector can be written as a linear combination of the others.
To test independence in ℝᵐ, put vectors as columns of A and solve Ac=0 (row reduce; check for pivots in every column).
If a set contains 0, it is automatically dependent.
In ℝᵐ, any set of more than m vectors must be dependent (m-dimensional space has at most m independent directions).
If vectors are independent, representations in terms of them are unique (subtract two representations to get a zero-combination).
Thinking “independent” means “orthogonal.” Orthogonality implies independence (if vectors are nonzero), but independence does not require right angles.
Checking only one equation/component and concluding dependence or independence; you must satisfy the full vector equation (all components).
Assuming that because vectors ‘look different’ they must be independent; dependence can be subtle (e.g., v₃ = v₁ + v₂).
Forgetting that the trivial combination always exists, and mistakenly calling a set dependent because c₁=…=cₙ=0 solves the equation.
Decide whether {v₁, v₂} is linearly independent in ℝ², where v₁ = (2, −1) and v₂ = (−4, 2).
Hint: Check whether one vector is a scalar multiple of the other. If v₂ = kv₁ for some k, the set is dependent.
v₂ = (−4, 2) = (−2)(2, −1) = (−2)v₁. So (−2)v₁ − 1·v₂ = 0 is nontrivial ⇒ the set is linearly dependent.
Test whether v₁ = (1,1,0), v₂ = (0,1,1), v₃ = (1,2,1) are linearly independent in ℝ³.
Hint: Place them as columns of A and row-reduce, or try to see if v₃ = v₁ + v₂.
Observe v₁ + v₂ = (1,1,0) + (0,1,1) = (1,2,1) = v₃. Therefore v₃ − v₁ − v₂ = 0 is a nontrivial linear combination ⇒ the set is linearly dependent.
Let v₁, v₂, v₃ be vectors in a vector space V. Suppose v₁ and v₂ are linearly independent, and v₃ = 5v₁ − 2v₂. Is {v₁, v₂, v₃} linearly independent?
Hint: Use the redundancy equivalence: if one vector is a linear combination of the others, the set is dependent.
Because v₃ is explicitly a linear combination of v₁ and v₂, the set {v₁, v₂, v₃} is linearly dependent. A nontrivial zero relation is v₃ − 5v₁ + 2v₂ = 0.
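To make the conclusion concrete, here is a check with a specific independent pair in ℝ² (the choice of v₁, v₂ is illustrative; the argument itself works in any vector space):

```python
import numpy as np

# With v3 = 5*v1 - 2*v2, the relation v3 - 5*v1 + 2*v2 = 0 holds,
# and the three vectors together have rank 2 < 3 (dependent).
v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])
v3 = 5.0 * v1 - 2.0 * v2
relation = v3 - 5.0 * v1 + 2.0 * v2
rank = np.linalg.matrix_rank(np.column_stack([v1, v2, v3]))
```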
Next up: Basis and Dimension
Related nodes you may want (if available in your tech tree):