Sets with addition and scalar multiplication satisfying axioms.
Self-serve tutorial - low prerequisites, straightforward concepts.
You already know how to add arrows in the plane and scale them. A vector space is the upgrade that says: “Actually, those rules can apply to lots of things that aren’t arrows”—like polynomials, signals, images, and functions—so long as addition and scaling behave consistently.
A vector space over a field F is a set V with (1) vector addition making V an abelian group and (2) scalar multiplication by elements of F, linked by distributive/associative axioms. The payoff is that “linear combinations” make sense, unlocking linear independence, bases, and linear maps.
In geometry, vectors look like arrows. But in linear algebra, “vector” really means: an object you can add and scale in a way that behaves like ordinary arithmetic.
That’s powerful because it lets us treat many different domains with one toolkit:
The key is not the shape of objects—it’s the rules.
A vector space is a set together with:
1) Vector addition: a function + : V × V → V
2) Scalar multiplication: a function · : F × V → V (often written just as av)
where F is a field (the scalars). Typical choices are ℝ or ℂ. The field matters because we need to add/multiply scalars and have identities/inverses in the scalar world.
We say: “V is a vector space over F.”
You can memorize 8–10 axioms, but it’s better to group them into two clusters plus “compatibility.”
For all u, v, w ∈ V:
This is exactly: “(V, +) is an abelian group.”
For all a, b ∈ F and v ∈ V:
For all a ∈ F and u, v ∈ V:
If you can add vectors and scale them by field elements, with these rules holding, then linear algebra becomes available.
In this lesson, we’ll keep returning to the same habit: test closure + distributivity early—those are frequent failure points, especially in non-geometric examples.
Scalar multiplication is only meaningful if “adding vectors” already forms a stable arithmetic world. The abelian-group requirements guarantee you can:
These are what make expressions like
u + v − w
well-defined.
Closure under addition is often the first axiom that fails.
Take u = (1, 2), v = (3, −5). Then
(1, 2) + (3, −5) = (4, −3) ∈ ℝ².
So closure holds.
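The same check can be mirrored in a few lines of Python (the tuple representation of ℝ² here is just an illustration):

```python
# Componentwise addition in R^2, mirroring u = (1, 2), v = (3, -5) above.
def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

u = (1, 2)
v = (3, -5)
s = add(u, v)
print(s)  # (4, -3): still a pair of reals, so the sum stays in R^2
```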
Let p(x) = a₂x² + a₁x + a₀ and q(x) = b₂x² + b₁x + b₀ be polynomials of degree ≤ 2. Then
(p+q)(x) = (a₂+b₂)x² + (a₁+b₁)x + (a₀+b₀).
This is still a polynomial of degree ≤ 2. So closure holds.
Let p(x) = x² and q(x) = −x², both of degree exactly 2. Then
(p+q)(x) = 0.
But 0 is not degree exactly 2, so the set is not closed under addition.
The lesson: degree ≤ n works; degree exactly n usually fails.
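A quick sketch of this failure in Python, representing a polynomial by its coefficient list `[c0, c1, c2]` (a representation chosen here purely for illustration):

```python
# Add polynomials coefficientwise; report the degree of the result.
def poly_add(p, q):
    return [a + b for a, b in zip(p, q)]

def degree(p):
    """Index of the highest nonzero coefficient; -1 for the zero polynomial."""
    for i in reversed(range(len(p))):
        if p[i] != 0:
            return i
    return -1

p = [0, 0, 1]    # x^2   (degree exactly 2)
q = [0, 0, -1]   # -x^2  (degree exactly 2)
s = poly_add(p, q)
print(degree(p), degree(q), degree(s))  # 2 2 -1: the sum left the set
```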
The additive identity depends on what the objects are.
You should think: “the zero object under addition.”
If v is in the set, then −v must also be in the set.
If p(x) = a₂x² + a₁x + a₀, then (−p)(x) = −a₂x² − a₁x − a₀, which is still a polynomial (and still degree ≤ 2 if we’re in that set).
If f is a function, then −f (defined pointwise by (−f)(x) = −f(x)) is still a function of the same type.
But beware: if your set is restricted (e.g., “functions that are always nonnegative”), additive inverses typically fail.
Think of functions as “curves.” Adding functions adds their y-values pointwise.
Take two functions f and g. Their sum is:
(f + g)(x) = f(x) + g(x).
Here’s a simple ASCII snapshot at a few x-values:
| x | f(x) | g(x) | (f+g)(x) |
|---|---|---|---|
| 0 | 1 | 2 | 3 |
| 1 | 0 | 4 | 4 |
| 2 | -1 | 1 | 0 |
If your set is “all real-valued functions,” the sum is still real-valued at every x, so closure holds.
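The table can be reproduced with pointwise addition on sampled values (the dict-of-samples representation is just for illustration; the values are taken directly from the table):

```python
# Sample values of f and g at x = 0, 1, 2, from the table above.
f = {0: 1, 1: 0, 2: -1}
g = {0: 2, 1: 4, 2: 1}

# Pointwise addition: (f+g)(x) = f(x) + g(x) at each sampled x.
f_plus_g = {x: f[x] + g[x] for x in f}
for x in sorted(f_plus_g):
    print(x, f[x], g[x], f_plus_g[x])
```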
Interactive canvas idea (guided): Plot two curves. Add a toggle “show f+g.” Let learners drag a point x₀ and observe the vertical addition of y-values. The closure message becomes: “the result is still a function in the same set.”
Scalar multiplication is what lets you talk about size and direction (or more abstractly: intensity, amplitude, coefficient scaling). It’s also what makes linear combinations possible:
a₁v₁ + a₂v₂ + ⋯ + aₙvₙ.
But scalar multiplication must be consistent with addition; otherwise linear combinations become ambiguous and algebra breaks.
For every a ∈ F and v ∈ V, we need av ∈ V.
Let V = P₂ and F = ℝ. If a ∈ ℝ and p(x) = a₂x² + a₁x + a₀, then
(ap)(x) = a·a₂x² + a·a₁x + a·a₀.
Still degree ≤ 2. Closure holds.
If you try to treat P₂ as a vector space over ℤ, you run into a deeper issue: ℤ is not a field (no multiplicative inverses for most integers). Many theorems break, and it’s not a vector space by definition.
So the “F (field)” requirement is not decoration—it’s structural.
There are two distributive laws and they fail in many “almost vector spaces.”
Function visualization (pointwise):
Let a ∈ ℝ and let f, g be functions. Then
(a(f + g))(x) = a(f(x) + g(x)) = af(x) + ag(x) = (af + ag)(x).
Same function.
Polynomial visualization (coefficient scaling):
Let p(x) = a₂x² + a₁x + a₀ and a, b ∈ ℝ.
Compute the left side:
((a+b)p)(x) = (a+b)a₂x² + (a+b)a₁x + (a+b)a₀.
Compute the right side:
(ap)(x) + (bp)(x) = (a·a₂ + b·a₂)x² + (a·a₁ + b·a₁)x + (a·a₀ + b·a₀).
They match.
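This coefficientwise check can be sketched in Python, again representing a polynomial by its coefficient list (an illustrative choice, not a library convention):

```python
# Verify (a + b)p = ap + bp on coefficient lists [c0, c1, c2].
def scale(a, p):
    return [a * c for c in p]

def poly_add(p, q):
    return [x + y for x, y in zip(p, q)]

a, b = 2.0, 3.0
p = [1.0, -4.0, 7.0]   # an arbitrary element of P2: 1 - 4x + 7x^2

left = scale(a + b, p)                      # (a+b)p
right = poly_add(scale(a, p), scale(b, p))  # ap + bp
print(left == right)  # True: the two sides agree coefficient by coefficient
```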
This says: it doesn’t matter whether you scale by b and then by a, or scale once by ab.
In ℝⁿ this is obvious; in function spaces it is still pointwise obvious:
If a, b ∈ ℝ and f is a function, then
((ab)f)(x) = (ab)·f(x) = a·(b·f(x)) = (a(bf))(x).
This is the “do nothing” scale factor.
In polynomials: 1·p = p.
In matrices: 1·A = A.
When testing “is this a vector space?”, use a consistent order:
1) What is V? (the set)
2) What is F? (the scalars)
3) How are + and scalar multiplication defined?
4) Closure tests (fast failure): is u + v still in V? is av still in V?
5) Zero and inverse (often fails in restricted sets): is the zero object in V? is −v in V?
6) Distributivity + identity (consistency)
Interactive canvas idea: Provide toggles for each axiom. For a candidate set (e.g., “nonnegative functions”), clicking “additive inverse” highlights a counterexample: pick f(x) = 1, then show (−f)(x) = −1 is not in the set.
This gives learners a reliable mental procedure instead of a memorized list.
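The checklist can be turned into a code sketch: probe a candidate structure on a few sample elements and scalars. Passing these spot checks does not prove the axioms, but a single failure disproves them. The function name and interface here are illustrative, not from any library.

```python
import itertools

def spot_check(samples, scalars, add, smul, contains):
    """Run closure and distributivity spot checks on sample data."""
    for u, v in itertools.product(samples, repeat=2):
        if not contains(add(u, v)):
            return f"closure under + fails for {u}, {v}"
    for a in scalars:
        for v in samples:
            if not contains(smul(a, v)):
                return f"scalar closure fails for {a}, {v}"
    for a in scalars:
        for u, v in itertools.product(samples, repeat=2):
            if smul(a, add(u, v)) != add(smul(a, u), smul(a, v)):
                return f"distributivity fails for {a}, {u}, {v}"
    return "all spot checks passed"

# Candidate: nonnegative reals with usual + and *, scalars in R.
result = spot_check(
    samples=[0.0, 1.0, 2.5],
    scalars=[-1.0, 0.0, 2.0],
    add=lambda u, v: u + v,
    smul=lambda a, v: a * v,
    contains=lambda v: v >= 0,
)
print(result)  # scalar closure fails for -1.0, 1.0
```

As the checklist predicts, the negative scalar is what breaks the “nonnegative” candidate.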
Most linear algebra concepts are really about linear combinations:
a₁v₁ + a₂v₂ + ⋯ + aₙvₙ.
To even talk about this expression, you need: scalar multiplication (to form each aᵢvᵢ) and vector addition (to sum the results).
That’s exactly what the vector space axioms guarantee.
Think of v₁, v₂ as ingredients and scalars as amounts.
If V is a vector space, then every recipe output is still a valid element of V.
Let V = P₂ (polynomials of degree ≤ 2 over ℝ). Take
v₁ = 1, v₂ = x, v₃ = x².
A linear combination is
a₁·1 + a₂·x + a₃·x²,
which is exactly “an arbitrary quadratic.”
So the vector space structure explains why {1, x, x²} spans P₂.
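A minimal sketch of this in code, with each basis polynomial written as a coefficient list `[c0, c1, c2]` (a representation chosen here for illustration):

```python
v1 = [1, 0, 0]   # the polynomial 1
v2 = [0, 1, 0]   # the polynomial x
v3 = [0, 0, 1]   # the polynomial x^2

def combine(a1, a2, a3):
    """Return the coefficients of a1*1 + a2*x + a3*x^2."""
    return [a1 * u + a2 * v + a3 * w for u, v, w in zip(v1, v2, v3)]

print(combine(5, -2, 3))  # [5, -2, 3], i.e. the quadratic 5 - 2x + 3x^2
```

Any coefficient triple is reachable, which is the code-level version of “{1, x, x²} spans P₂.”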
The next concept—linear independence—asks whether a set of vectors contains redundancy.
A set is linearly independent if the only way to get the zero vector from a linear combination is the trivial way.
Formally (over field F): if a₁v₁ + a₂v₂ + ⋯ + aₙvₙ = 0, then a₁ = a₂ = ⋯ = aₙ = 0.
This definition relies on: scalar multiplication, vector addition, and a well-defined zero vector.
So: vector spaces are the rules of the game; linear independence is one of the key strategies.
Let V = C[0,1], the set of continuous functions on [0,1], with scalars F = ℝ.
Vector-space thinking leads directly to Fourier series, least squares, and projections.
Let V be the set of all real m×n matrices (for fixed m and n), with scalars F = ℝ.
This becomes central in machine learning: datasets, parameter matrices, and gradients live in vector spaces.
Whenever you meet a new “vector-like” object (polynomials, functions, sequences, matrices), ask: what is the set, what are the scalars, how are addition and scaling defined, and do the closure and distributivity axioms hold?
That habit will make the next nodes (linear independence, span, basis) feel natural instead of magical.
Let V = { p(x) : p is a real polynomial with degree ≤ 2 }. Let F = ℝ. Define addition and scalar multiplication in the usual way: (p+q)(x)=p(x)+q(x), and (ap)(x)=a·p(x). Decide whether V is a vector space over F.
Step 1: Identify what must be shown.
We must verify:
(1) (V,+) is an abelian group, and
(2) scalar multiplication by ℝ satisfies closure and compatibility axioms.
Step 2: Check closure under addition.
Take arbitrary p,q ∈ V.
Write p(x)=a₂x²+a₁x+a₀ and q(x)=b₂x²+b₁x+b₀.
Then
(p+q)(x)=(a₂+b₂)x²+(a₁+b₁)x+(a₀+b₀).
This is still degree ≤ 2, so p+q ∈ V.
Step 3: Check associativity and commutativity of addition.
For any polynomials p,q,r, we have pointwise:
((p+q)+r)(x)=(p(x)+q(x))+r(x)=p(x)+(q(x)+r(x))=(p+(q+r))(x).
Similarly p(x)+q(x)=q(x)+p(x) implies p+q=q+p.
So associativity and commutativity hold.
Step 4: Find the additive identity.
Let 0(x)=0 for all x (the zero polynomial).
Then (p+0)(x)=p(x)+0=p(x), so p+0=p.
Thus the additive identity exists and is in V.
Step 5: Check additive inverses.
For p(x)=a₂x²+a₁x+a₀, define (−p)(x)=−a₂x²−a₁x−a₀.
Then (p+(−p))(x)=0 for all x, so p+(−p)=0.
Also −p still has degree ≤ 2, so −p ∈ V.
Step 6: Check closure under scalar multiplication.
Take a ∈ ℝ and p(x)=a₂x²+a₁x+a₀.
Then (ap)(x)=a·a₂x²+a·a₁x+a·a₀, still degree ≤ 2.
So ap ∈ V.
Step 7: Check distributivity and scalar rules.
For any a,b ∈ ℝ and p,q ∈ V, pointwise:
(a(p+q))(x)=a(p(x)+q(x))=ap(x)+aq(x)=((ap)+(aq))(x).
((a+b)p)(x)=(a+b)p(x)=ap(x)+bp(x)=((ap)+(bp))(x).
(ab)p(x)=a(bp(x)) and 1·p(x)=p(x).
Therefore all compatibility axioms hold.
Insight: P₂ works because “degree ≤ 2” is stable under addition and scaling. Many near-misses fail only because the set isn’t closed (e.g., degree exactly 2, or monic polynomials).
Let V = { f : [0,1] → ℝ | f is continuous and f(x) ≥ 0 for all x }. Let F = ℝ. Operations are pointwise: (f+g)(x)=f(x)+g(x) and (af)(x)=a·f(x). Determine if V is a vector space over ℝ.
Step 1: Try the fastest failure checks (closure + inverses).
Because V has an inequality restriction (f(x) ≥ 0), additive inverses are suspicious.
Step 2: Check closure under addition.
Take f,g ∈ V.
For each x, f(x) ≥ 0 and g(x) ≥ 0, so f(x)+g(x) ≥ 0.
Also f+g is continuous.
Thus f+g ∈ V. So addition closure passes.
Step 3: Check closure under scalar multiplication.
Take a ∈ ℝ and f ∈ V.
If a ≥ 0, then af(x) ≥ 0, so af ∈ V.
But the axiom requires closure for all scalars a ∈ ℝ, including negative scalars.
Step 4: Produce a counterexample with a negative scalar.
Let f(x)=1 (constant function). Then f ∈ V.
Take a=−1.
Then (af)(x)=−1 for all x, which is not ≥ 0.
So af ∉ V.
Scalar multiplication closure fails.
Step 5: (Optional) Note the additive inverse failure as well.
If f(x)=1 ∈ V, then −f(x)=−1 is not in V.
So additive inverses fail too.
Insight: Sets defined by “≥ 0” constraints usually fail to be vector spaces over ℝ because you can’t multiply by negative scalars and stay inside the set.
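The counterexample from Step 4 can be checked numerically (the sample points in [0,1] are an arbitrary illustrative choice):

```python
# f(x) = 1 is in the set; (-1)*f takes the value -1 < 0, so it is not.
def f(x):
    return 1.0

def smul(a, func):
    """Pointwise scalar multiplication of a function."""
    return lambda x: a * func(x)

g = smul(-1.0, f)
in_set = all(g(x) >= 0 for x in [0.0, 0.25, 0.5, 0.75, 1.0])
print(in_set)  # False: scalar multiplication closure fails
```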
Let V = ℝ² and F = ℝ. Define scalar multiplication as usual: a(x,y)=(ax,ay). But define addition by (x₁,y₁) ⊕ (x₂,y₂) = (x₁+x₂+1, y₁+y₂). Is (V, ⊕, ·) a vector space over ℝ?
Step 1: Check closure under ⊕.
For any (x₁,y₁),(x₂,y₂) ∈ ℝ², we get (x₁+x₂+1, y₁+y₂) ∈ ℝ².
So closure under addition holds.
Step 2: Find the additive identity with respect to ⊕.
We need an element e=(e₁,e₂) such that (x,y) ⊕ (e₁,e₂) = (x,y).
Compute:
(x,y) ⊕ (e₁,e₂) = (x+e₁+1, y+e₂).
Set equal to (x,y):
x+e₁+1=x ⇒ e₁=−1,
y+e₂=y ⇒ e₂=0.
So the identity would be e=(−1,0), which exists in V.
Step 3: Check compatibility with scalar multiplication (likely failure).
A key axiom is distributivity:
a( u ⊕ v ) should equal au ⊕ av.
Let u=(0,0), v=(0,0), a=2.
Compute left side:
u ⊕ v = (0+0+1,0+0)=(1,0).
Then a(u ⊕ v) = 2(1,0)=(2,0).
Compute right side:
au=(0,0), av=(0,0).
Then au ⊕ av = (0+0+1,0+0)=(1,0).
Left ≠ right, since (2,0) ≠ (1,0).
Step 4: Conclude.
Distributivity fails, so this structure is not a vector space.
Insight: You can invent an “addition” that’s closed and even has an identity, but the moment distributivity fails, linear combinations stop behaving predictably. Distributivity is a core integrity check.
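The failing distributivity check from this example can be computed directly:

```python
# The modified addition (x1,y1) ⊕ (x2,y2) = (x1+x2+1, y1+y2) from above.
def oplus(u, v):
    return (u[0] + v[0] + 1, u[1] + v[1])

def smul(a, u):
    """Usual scalar multiplication on R^2."""
    return (a * u[0], a * u[1])

u = v = (0, 0)
a = 2
left = smul(a, oplus(u, v))            # a(u ⊕ v)
right = oplus(smul(a, u), smul(a, v))  # au ⊕ av
print(left, right)  # (2, 0) (1, 0): the two sides disagree
```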
A vector space is a set V with addition and scalar multiplication over a field F that satisfy specific axioms.
Addition must make V an abelian group: closure, associativity, commutativity, zero element, and additive inverses.
Scalar multiplication must be closed and must interact with addition via distributivity and associativity: a(u+v)=au+av, (a+b)v=av+bv, (ab)v=a(bv), 1v=v.
The “zero vector” depends on the space (zero polynomial, zero function, zero matrix), not just (0,0).
Many non-geometry sets are vector spaces (polynomials ≤ n, all functions of a certain type), and many almost-examples fail due to closure or inverses.
Inequality-restricted sets (like nonnegative functions) typically fail because negative scaling breaks closure.
A reliable checklist (define V, F, operations; test closure, zero/inverses, distributivity) beats memorizing axioms in isolation.
Vector spaces are the foundation that makes linear combinations—and thus linear independence—well-defined.
Forgetting to specify the field F (scalars) and assuming it’s always ℝ; the choice of F matters.
Assuming the zero vector is always (0,0) instead of identifying the additive identity in the given space.
Checking a couple axioms (like closure) but skipping distributivity/identity, where many custom operations fail.
Using sets that are not closed (e.g., degree exactly 2 polynomials, positive-length vectors, invertible matrices under usual addition) and missing the closure failure.
Let V be the set of all real 2×2 matrices. With usual matrix addition and scalar multiplication over ℝ, is V a vector space?
Hint: Ask: is the sum of two 2×2 matrices still 2×2? Is scalar multiple still 2×2? Do usual arithmetic properties hold entrywise?
Yes. Closure holds because adding/scaling entrywise keeps you in 2×2 matrices. The zero vector is the zero matrix. Additive inverses are negatives of matrices. Distributivity and scalar associativity hold entrywise, inherited from ℝ.
Let V = { (x,y) ∈ ℝ² : x + y = 1 } with usual addition and scalar multiplication over ℝ. Is V a vector space?
Hint: Test closure under addition using two generic points satisfying x+y=1.
No. Take u=(1,0) and v=(0,1). Both satisfy x+y=1. But u+v=(1,1), and 1+1=2 ≠ 1, so closure under addition fails.
Let V be the set of all polynomials with real coefficients that satisfy p(0)=0. With usual addition and scalar multiplication over ℝ, is V a vector space?
Hint: Check closure by evaluating (p+q)(0) and (ap)(0). Identify the zero element and additive inverses.
Yes. If p(0)=0 and q(0)=0, then (p+q)(0)=p(0)+q(0)=0, so closed under addition. If a∈ℝ, then (ap)(0)=a·p(0)=0, so closed under scalar multiplication. Zero polynomial satisfies p(0)=0 and serves as identity; inverses −p also satisfy (−p)(0)=−p(0)=0. Other axioms follow from usual polynomial arithmetic.
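The closure argument can be spot-checked in code: with coefficient lists `[c0, c1, c2]` (an illustrative representation), the condition p(0) = 0 is exactly c0 = 0, and both operations preserve it.

```python
def poly_add(p, q):
    return [a + b for a, b in zip(p, q)]

def scale(a, p):
    return [a * c for c in p]

p = [0, 2, -1]   # 2x - x^2,  p(0) = 0
q = [0, 5, 4]    # 5x + 4x^2, q(0) = 0

s = poly_add(p, q)
t = scale(-3, p)
print(s[0], t[0])  # 0 0: both results still satisfy p(0) = 0
```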
Next up: Linear Independence
Related future nodes you’ll likely encounter: