Minimal spanning set. Number of vectors in a basis.
Deep-dive lesson - accessible entry point but dense material. Use worked examples and spaced repetition.
If linear independence tells you “no redundancy,” then a basis answers the next question: “What’s the smallest set of directions I need to build everything in this space?” Dimension is the count of those directions.
A set S spans a vector space V if every vector in V can be written as a linear combination of vectors in S. A basis is a set that is both spanning and linearly independent (equivalently: a minimal spanning set or a maximal independent set). In a finite-dimensional space, every basis has the same number of vectors; that number is dim(V).
When you work with vectors, you often want a coordinate system: a way to describe any vector using a list of numbers. A basis is exactly the information needed to create such a coordinate system—no more, no less.
You want two things at once: enough vectors to reach every vector in the space (spanning), and no redundant vectors (linear independence). These two desires correspond to the two conditions in the definition below.
A basis is the sweet spot that satisfies both.
Let S = {v₁, v₂, …, vₖ} be a set of vectors in a vector space V.
The span of S, written span(S), is the set of all linear combinations:
span(S) = { a₁v₁ + a₂v₂ + … + aₖvₖ : a₁, …, aₖ are scalars }
If span(S) = V, then S is a spanning set of V.
Intuition: you can “mix” the vectors in S (by scaling and adding) to reach any vector in V.
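The "mixing" above can be sketched in a few lines of Python (an illustrative helper; the function name and data layout are my own, not from the text):

```python
def linear_combination(coeffs, vectors):
    """Return a1*v1 + ... + ak*vk for equal-length vectors given as tuples."""
    n = len(vectors[0])
    return [sum(a * v[i] for a, v in zip(coeffs, vectors)) for i in range(n)]

# Mixing (1, 0) and (0, 1) by scaling and adding reaches any point of R^2:
print(linear_combination([3, -2], [(1, 0), (0, 1)]))  # [3, -2]
```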
A set B ⊂ V is a basis of V if:
1) B spans V, and
2) B is linearly independent.
There are two equivalent ways to think about a basis that are often more practical:
1) A basis is a minimal spanning set: it spans V, but removing any vector breaks spanning.
2) A basis is a maximal linearly independent set: it is independent, but adding any further vector from V creates dependence.
These equivalences are not just "nice facts": they explain why a basis is the "just right" set.
If V has a finite basis, then V is finite-dimensional.
The dimension of V, written dim(V), is the number of vectors in any basis of V.
A crucial theorem (we’ll use it as a guiding rule):
In a finite-dimensional vector space, every basis has the same number of vectors.
So dim(V) is well-defined.
Examples you already know intuitively: dim(ℝ²) = 2 (standard basis {(1, 0), (0, 1)}), dim(ℝ³) = 3 (standard basis {(1, 0, 0), (0, 1, 0), (0, 0, 1)}), and in general dim(ℝⁿ) = n.
But bases are not unique: ℝ² has infinitely many different bases, all with exactly 2 vectors.
Linear independence tells you whether vectors are redundant. But you can have a set with no redundancy that still doesn’t reach the whole space.
For example, in ℝ², the single vector (1, 0) is linearly independent (a single nonzero vector always is), but it cannot reach (0, 1). So it does not span ℝ².
Spanning is about coverage.
Let V be a vector space and S = {v₁, …, vₖ} ⊂ V.
To say w ∈ span(S) means:
∃ scalars a₁, …, aₖ such that
w = a₁v₁ + a₂v₂ + … + aₖvₖ
So proving span(S) = V usually means: take an arbitrary w ∈ V, write down the coefficient equations, and show they can always be solved.
In ℝ²: one nonzero vector spans only a line; two vectors that are not scalar multiples of each other span the whole plane.
In ℝ³: two independent vectors span only a plane through the origin; a third vector outside that plane is needed to span all of ℝ³.
This suggests a key theme:
Adding independent vectors tends to increase the “reach” of the span.
Suppose S = {v₁, …, vₖ} in ℝⁿ. Put them as columns of a matrix:
A = [ v₁ v₂ … vₖ ]
Then asking whether a vector w is in span(S) is the same as asking whether the linear system has a solution:
Ax = w
And asking whether S spans ℝⁿ is the same as asking:
For every w ∈ ℝⁿ, does Ax = w have a solution?
In ℝⁿ, this is equivalent to A having a pivot in every row (i.e., rank(A) = n). You don’t need full rank theory yet, but it’s useful to recognize the workflow: spanning is a “can we solve for coefficients?” question.
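This "can we solve for coefficients?" workflow can be sketched with a small exact row reduction (a hedged illustration in plain Python; `rank` and the matrix layout are my own names, and a library routine such as NumPy's `numpy.linalg.matrix_rank` would do the same job):

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce a matrix (given as a list of rows) and count its pivots."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots = 0
    for col in range(len(m[0])):
        # find a row at or below `pivots` with a nonzero entry in this column
        pivot = next((r for r in range(pivots, len(m)) if m[r][col] != 0), None)
        if pivot is None:
            continue
        m[pivots], m[pivot] = m[pivot], m[pivots]
        for r in range(len(m)):
            if r != pivots and m[r][col] != 0:
                factor = m[r][col] / m[pivots][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[pivots])]
        pivots += 1
    return pivots

# Put v1 = (1,0), v2 = (0,1), v3 = (1,1) as the columns of A (rows shown here):
A = [[1, 0, 1],
     [0, 1, 1]]
print(rank(A))  # 2: a pivot in every row, so the columns span R^2
```

With rank 2 = n, the system Ax = w is solvable for every w ∈ ℝ², which is exactly the spanning condition.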
If a spanning set has extra vectors, some are unnecessary. For example, in ℝ²:
S = {(1, 0), (0, 1), (1, 1)}
This spans ℝ², but it is not minimal: (1, 1) = (1, 0) + (0, 1), so removing it doesn’t break spanning.
This naturally motivates the definition of a basis as a minimal spanning set.
A spanning set can be too large.
A linearly independent set can be too small.
A basis avoids both problems.
Let B = {b₁, …, bₖ}.
Suppose B is a basis of V and
v = a₁b₁ + … + aₖbₖ
and also
v = c₁b₁ + … + cₖbₖ
Subtract the two equations:
0 = (a₁ − c₁)b₁ + … + (aₖ − cₖ)bₖ
Because B is linearly independent, the only linear combination giving 0 is the trivial one:
(a₁ − c₁) = 0, …, (aₖ − cₖ) = 0
So aᵢ = cᵢ for all i.
Conclusion: the representation of v in a basis is unique.
This is the practical reason bases matter: they give a stable coordinate system.
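For a concrete 2×2 case, the unique coordinates can be computed by Cramer's rule (an illustrative sketch; the function name and the example basis are my own choices):

```python
from fractions import Fraction

def coords_in_basis_2d(v, b1, b2):
    """Solve a*b1 + b*b2 = v by Cramer's rule (assumes b1, b2 independent)."""
    det = Fraction(b1[0] * b2[1] - b1[1] * b2[0])  # nonzero iff independent
    a = (v[0] * b2[1] - v[1] * b2[0]) / det
    b = (b1[0] * v[1] - b1[1] * v[0]) / det
    return a, b

# Coordinates of (3, 1) in the (hypothetical) basis {(1, 2), (0, 1)}:
a, b = coords_in_basis_2d((3, 1), (1, 2), (0, 1))
print(a, b)  # 3 -5, and this pair is the only one that works
```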
Assume B spans V and is linearly independent.
Take any bⱼ ∈ B. If you remove it and still span V, then bⱼ would be a linear combination of the remaining vectors (because the remaining vectors could build every vector, including bⱼ). That would contradict independence.
So removing any vector breaks spanning.
Hence:
Spanning + independence ⇒ minimal spanning.
Conversely, if a set spans V and is minimal (removing anything breaks spanning), then it must be independent. Otherwise one vector would be a linear combination of the others, and removing it would not change the span—contradiction.
So:
Basis ⇔ minimal spanning set.
Dimension is defined as:
dim(V) = number of vectors in any basis of V.
But why does this not depend on which basis you pick?
The key fact is:
In a finite-dimensional vector space, all bases have the same number of vectors.
A useful intuition (not a full proof): each new independent vector uses up one degree of freedom, so a space that can be spanned by k vectors cannot contain more than k independent ones.
More generally, there is an important relationship:
In a finite-dimensional vector space V, every linearly independent set has size ≤ any spanning set.
So if B and C are both bases, then: B is independent and C spans, so |B| ≤ |C|; likewise C is independent and B spans, so |C| ≤ |B|.
Therefore |B| = |C|.
That shared size is dim(V).
We’ll use these facts in the examples and exercises below.
Once you pick a basis B = {b₁, …, bₖ}, every vector v has unique coordinates (a₁, …, aₖ) such that:
v = ∑ᵢ aᵢ bᵢ
Those coordinates depend on the basis, but the vector v does not.
This viewpoint becomes essential when you change basis, represent linear maps as matrices, or carry out any computation in coordinates.
Dimension controls what is possible: fewer than dim(V) vectors can never span V, and exactly dim(V) independent vectors automatically form a basis.
It also tells you when redundancy must exist: any set with more than dim(V) vectors must be linearly dependent.
This becomes the backbone of many later tools.
An orthonormal basis is a basis whose vectors are mutually perpendicular (orthogonal) and have length 1.
Why is that special? In an orthonormal basis, the coordinate aᵢ of a vector v is simply the dot product v · bᵢ, and lengths and angles can be read directly from coordinates.
But orthogonality only makes sense once you understand what a basis is and why dimension fixes how many basis vectors you need.
So this node sets up the question for the next one:
“Can we choose a basis with extra geometric structure (perpendicular, unit length) to make computations simpler?”
That is exactly what Orthogonality addresses.
Let S = { v₁, v₂ } with v₁ = (1, 2) and v₂ = (2, 4) in ℝ². Determine span(S). Is S a basis of ℝ²?
Observe that v₂ is a multiple of v₁:
v₂ = (2, 4) = 2(1, 2) = 2v₁.
So any linear combination of v₁ and v₂ looks like:
av₁ + bv₂ = av₁ + b(2v₁)
= (a + 2b)v₁.
Therefore span(S) = span({v₁}). Geometrically, this is a line through the origin in direction (1, 2).
Because S is linearly dependent (one vector is a multiple of the other), it cannot be a basis.
Also, span(S) is only a line, not all of ℝ², so S does not span ℝ² either.
Insight: Two vectors in ℝ² only form a basis if they are not scalar multiples. Dependence collapses the span to a lower-dimensional subspace.
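The scalar-multiple check used in this example is a one-line determinant test in code (a small sketch; the function name is mine):

```python
def are_dependent_2d(v, w):
    """Two vectors in R^2 are linearly dependent iff their 2x2 determinant is 0."""
    return v[0] * w[1] - v[1] * w[0] == 0

print(are_dependent_2d((1, 2), (2, 4)))  # True  -> span collapses to a line
print(are_dependent_2d((1, 2), (2, 5)))  # False -> the pair spans R^2
```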
Let B = { b₁, b₂, b₃ } in ℝ³ where b₁ = (1, 0, 1), b₂ = (0, 1, 1), b₃ = (1, 1, 0). Show B spans ℝ³ by expressing an arbitrary w = (x, y, z) as a linear combination of B.
We want scalars a, b, c such that:
ab₁ + bb₂ + cb₃ = (x, y, z).
Write the linear combination component-wise:
a(1,0,1) + b(0,1,1) + c(1,1,0)
= (a + c, b + c, a + b).
Set equal to (x, y, z) to get the system:
a + c = x
b + c = y
a + b = z
Solve step-by-step.
From a + c = x ⇒ a = x − c.
From b + c = y ⇒ b = y − c.
Plug into a + b = z:
(x − c) + (y − c) = z
x + y − 2c = z
−2c = z − x − y
c = (x + y − z)/2.
Back-substitute:
a = x − (x + y − z)/2 = (2x − x − y + z)/2 = (x − y + z)/2
b = y − (x + y − z)/2 = (2y − x − y + z)/2 = (−x + y + z)/2.
We found a, b, c for an arbitrary (x, y, z). Therefore every vector in ℝ³ lies in span(B), so span(B) = ℝ³.
Insight: To prove a set spans, take an arbitrary vector and solve for coefficients. If you can always solve (with no restrictions on x, y, z), the set spans the whole space.
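The derived coefficient formulas can be double-checked mechanically over a grid of targets (a verification sketch; variable names are mine):

```python
from fractions import Fraction
from itertools import product

b1, b2, b3 = (1, 0, 1), (0, 1, 1), (1, 1, 0)

def combo(a, b, c):
    """Return a*b1 + b*b2 + c*b3 componentwise."""
    return tuple(a * u + b * v + c * w for u, v, w in zip(b1, b2, b3))

# Check a = (x-y+z)/2, b = (-x+y+z)/2, c = (x+y-z)/2 on many (x, y, z):
for x, y, z in product(range(-2, 3), repeat=3):
    a = Fraction(x - y + z, 2)
    b = Fraction(-x + y + z, 2)
    c = Fraction(x + y - z, 2)
    assert combo(a, b, c) == (x, y, z)
print("coefficient formulas verified")
```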
Let V = { (x, y, z) ∈ ℝ³ : x + y + z = 0 }. Consider u₁ = (1, −1, 0) and u₂ = (1, 0, −1). Show {u₁, u₂} is a basis of V and find dim(V).
First check u₁ and u₂ lie in V:
For u₁: 1 + (−1) + 0 = 0 ⇒ u₁ ∈ V.
For u₂: 1 + 0 + (−1) = 0 ⇒ u₂ ∈ V.
Check linear independence:
Suppose au₁ + bu₂ = 0.
Then a(1, −1, 0) + b(1, 0, −1) = (0,0,0).
Compute components:
(a + b, −a, −b) = (0, 0, 0).
So −a = 0 ⇒ a = 0, and −b = 0 ⇒ b = 0. Therefore {u₁, u₂} is linearly independent.
Now show they span V:
Take an arbitrary (x, y, z) ∈ V, so x + y + z = 0.
We want a, b such that au₁ + bu₂ = (x, y, z).
Solve:
(a + b, −a, −b) = (x, y, z).
From −a = y ⇒ a = −y.
From −b = z ⇒ b = −z.
Then a + b = −y − z.
But since x + y + z = 0, we have x = −y − z.
So a + b = x is automatically satisfied.
Thus every vector in V can be expressed as a combination of u₁ and u₂, so they span V.
Therefore {u₁, u₂} is a basis of V, and dim(V) = 2.
Insight: Subspaces defined by one linear equation in ℝ³ often have dimension 2 (a plane through the origin). A basis gives you a concrete coordinate system on that plane.
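The spanning argument (a = −y, b = −z) can likewise be checked over a grid of points on the plane (a quick verification sketch; names are mine):

```python
from itertools import product

u1, u2 = (1, -1, 0), (1, 0, -1)

def combo(a, b):
    """Return a*u1 + b*u2 componentwise."""
    return tuple(a * p + b * q for p, q in zip(u1, u2))

# For every point (x, y, z) on the plane x + y + z = 0, a = -y and b = -z work:
for x, y in product(range(-3, 4), repeat=2):
    z = -x - y                      # forces x + y + z = 0
    assert combo(-y, -z) == (x, y, z)
print("basis of the plane verified")
```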
span(S) is the set of all linear combinations of vectors in S; S spans V when span(S) = V.
A basis is a set that is both spanning and linearly independent.
Basis ⇔ minimal spanning set ⇔ maximal linearly independent set (in finite-dimensional spaces).
In a basis, every vector has a unique coordinate representation.
dim(V) is the number of vectors in any basis of V; it is well-defined because all bases have the same size.
In a finite-dimensional space, any set with more than dim(V) vectors is linearly dependent.
In a k-dimensional space, a set of k vectors is a basis iff it is linearly independent (equivalently iff it spans).
Thinking “spanning” means you can reach some vectors—spanning means you can reach every vector in the space.
Assuming any spanning set is a basis; spanning sets can have redundancy (dependence).
Forgetting that basis vectors must belong to the space V (especially for subspaces defined by constraints).
Believing different bases can have different sizes; in finite-dimensional spaces, all bases have the same number of vectors.
In ℝ², let S = { (1, 1), (1, −1) }. (a) Does S span ℝ²? (b) Is S a basis of ℝ²?
Hint: Try to solve a(1,1) + b(1,−1) = (x,y) for arbitrary x,y. Or check whether the vectors are scalar multiples.
Solve a(1,1) + b(1,−1) = (x,y).
Component-wise: (a + b, a − b) = (x, y).
Add equations: (a + b) + (a − b) = x + y ⇒ 2a = x + y ⇒ a = (x + y)/2.
Subtract: (a + b) − (a − b) = x − y ⇒ 2b = x − y ⇒ b = (x − y)/2.
A solution exists for all x,y, so S spans ℝ². The two vectors are not multiples, so they are independent. Therefore S is a basis.
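Those formulas are easy to sanity-check (a tiny verification sketch; the function name is mine):

```python
from fractions import Fraction

def coords(x, y):
    """Coordinates of (x, y) in the basis {(1, 1), (1, -1)}."""
    return Fraction(x + y, 2), Fraction(x - y, 2)

a, b = coords(5, 3)
assert (a + b, a - b) == (5, 3)   # a*(1,1) + b*(1,-1) reconstructs (5, 3)
print(a, b)  # 4 1
```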
Let V be the subspace of ℝ³ given by V = { (x, y, z) : z = 0 }. Find a basis for V and determine dim(V).
Hint: Vectors in V look like (x, y, 0). Try to express (x, y, 0) using two simple vectors.
Any (x, y, 0) can be written as x(1,0,0) + y(0,1,0). So { (1,0,0), (0,1,0) } spans V. These two vectors are linearly independent, hence they form a basis. Therefore dim(V) = 2.
Let S = { (1,0,0), (0,1,0), (1,1,0) } in ℝ³. (a) Find span(S). (b) Is S a basis of span(S)? (c) Find a basis of span(S) with fewer vectors.
Hint: All vectors have z = 0. Also, check whether (1,1,0) is a linear combination of the first two vectors.
(a) Every linear combination of S has the form a(1,0,0) + b(0,1,0) + c(1,1,0) = (a + c, b + c, 0). This is any vector (x,y,0), so span(S) = { (x,y,0) } (the z=0 plane).
(b) S is not a basis of span(S) because it is linearly dependent: (1,1,0) = (1,0,0) + (0,1,0).
(c) A basis with fewer vectors is { (1,0,0), (0,1,0) }. It spans the same set and is independent.
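Part (c)'s idea, discard any vector that is already a combination of the ones kept, can be sketched as a greedy pass with exact arithmetic (an illustrative sketch; `prune_to_basis` is my own name, not a standard routine):

```python
from fractions import Fraction

def prune_to_basis(vectors):
    """Keep each vector only if it is independent of the ones kept so far."""
    echelon, kept = [], []              # echelon: reduced copies of kept vectors
    for v in vectors:
        w = [Fraction(x) for x in v]
        for row in echelon:             # reduce w against the echelon rows
            lead = next(i for i, x in enumerate(row) if x != 0)
            if w[lead] != 0:
                f = w[lead] / row[lead]
                w = [a - f * b for a, b in zip(w, row)]
        if any(x != 0 for x in w):      # w is independent of everything kept
            echelon.append(w)
            kept.append(v)
    return kept

S = [(1, 0, 0), (0, 1, 0), (1, 1, 0)]
print(prune_to_basis(S))  # [(1, 0, 0), (0, 1, 0)]: (1, 1, 0) is redundant
```

The kept vectors span the same space as the input and are independent, so they form a basis of span(S).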
Next, you’ll use bases with special geometric structure: Orthogonality. Related foundations include linear independence (prerequisite) and upcoming ideas like change of basis and matrix representations of linear maps (future nodes).