Infinite polynomial approximation of functions around a point.
Many functions are complicated globally but simple locally. Taylor series formalize that idea: near a point a, a smooth function behaves like a polynomial whose coefficients are determined entirely by derivatives at a.
A Taylor series expresses (when possible) a function f(x) as an infinite power series around a center a: f(x) = ∑_{n=0}^∞ f⁽ⁿ⁾(a)/n! · (x−a)ⁿ. Truncating after n terms gives the nth-degree Taylor polynomial, a practical approximation near a. The series may converge only within a radius of convergence R, and even where it converges it may or may not equal the original function—so you must check convergence and (when needed) remainder/error behavior.
Polynomials are the “friendly” functions of calculus: easy to evaluate, differentiate, integrate, and manipulate algebraically.
But many important functions are not polynomials: eˣ, sin x, cos x, ln x, 1/(1−x), etc. Taylor series is the bridge: it turns a function (near some center a) into an infinite polynomial-like object.
The core idea is local approximation: if you zoom in near x = a, a smooth function looks more and more like its tangent line; if you zoom in further, a quadratic improves the fit; then a cubic, and so on.
A power series about a is an infinite sum
∑_{n=0}^∞ cₙ (x−a)ⁿ.
A Taylor series is a particular power series for a function f whose coefficients are chosen to match derivatives of f at the center a:
f(x) = ∑_{n=0}^∞ [f⁽ⁿ⁾(a) / n!] (x−a)ⁿ.
The notation f⁽ⁿ⁾ means the nth derivative: f⁽⁰⁾ = f, f⁽¹⁾ = f′, f⁽²⁾ = f″, etc.
When a = 0, this special case is called a Maclaurin series:
f(x) = ∑_{n=0}^∞ [f⁽ⁿ⁾(0) / n!] xⁿ.
A polynomial of degree n can match up to n derivatives at a point. Taylor series pushes this to “match all derivatives,” term by term.
Consider a polynomial approximation Pₙ(x) of degree n around a:
Pₙ(x) = c₀ + c₁(x−a) + c₂(x−a)² + ⋯ + cₙ(x−a)ⁿ.
If we require that Pₙ and f have the same value and the same derivatives at a up to order n, i.e. Pₙ⁽ᵏ⁾(a) = f⁽ᵏ⁾(a) for k = 0, …, n,
then the coefficients are forced to be
cₖ = f⁽ᵏ⁾(a)/k!.
This is the fundamental mechanism: derivatives at the center determine the coefficients.
A Taylor series can exist (all derivatives exist) but still:
1) converge only on a limited interval, and/or
2) converge but not equal the original function.
So the full story has two parts: where the series converges, and whether its sum actually equals f there.
An infinite series is a theoretical object; in computation and estimation, we nearly always use a finite truncation.
The nth-degree Taylor polynomial of f about a is
Tₙ(x) = ∑_{k=0}^n [f⁽ᵏ⁾(a) / k!] (x−a)ᵏ.
This is the best polynomial of degree ≤ n that matches derivatives up to order n at x = a.
Start with the general polynomial form around a:
Tₙ(x) = c₀ + c₁(x−a) + c₂(x−a)² + ⋯ + cₙ(x−a)ⁿ.
Differentiate term-by-term:
Tₙ′(x) = c₁ + 2c₂(x−a) + 3c₃(x−a)² + ⋯ + n cₙ(x−a)ⁿ⁻¹.
Evaluate at x = a (so every (x−a) term becomes 0):
Tₙ′(a) = c₁.
Differentiate again:
Tₙ″(x) = 2c₂ + 3·2 c₃(x−a) + 4·3 c₄(x−a)² + ⋯
Evaluate at x = a:
Tₙ″(a) = 2c₂ ⇒ c₂ = Tₙ″(a)/2!.
Continuing this pattern, the kth derivative at a isolates cₖ multiplied by k!:
Tₙ⁽ᵏ⁾(a) = k! cₖ ⇒ cₖ = Tₙ⁽ᵏ⁾(a)/k!.
If we impose Tₙ⁽ᵏ⁾(a) = f⁽ᵏ⁾(a) for k = 0,…,n, then
cₖ = f⁽ᵏ⁾(a)/k!.
This is where the factorial comes from: repeated differentiation pulls down k·(k−1)·…·1 = k!.
To build Tₙ(x) about a:
1) Compute f(a), f′(a), f″(a), …, f⁽ⁿ⁾(a).
2) Plug into
Tₙ(x) = f(a) + f′(a)(x−a) + f″(a)/2! (x−a)² + ⋯ + f⁽ⁿ⁾(a)/n! (x−a)ⁿ.
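This two-step recipe can be sketched directly in code. A minimal sketch in Python, assuming the derivative values have already been computed by hand; the helper name `taylor_poly` is illustrative, not part of the lesson:

```python
from math import factorial

def taylor_poly(derivs_at_a, a, x):
    """Evaluate T_n(x) = sum_{k=0}^n f^(k)(a)/k! * (x-a)^k,
    given the precomputed derivative values [f(a), f'(a), ..., f^(n)(a)]."""
    return sum(d / factorial(k) * (x - a) ** k
               for k, d in enumerate(derivs_at_a))

# For f(x) = e^x about a = 0, every derivative at 0 equals 1:
approx = taylor_poly([1.0, 1.0, 1.0, 1.0, 1.0], a=0.0, x=0.2)
print(approx)  # close to e^0.2 ≈ 1.2214
```

With only the first two derivative values, the same helper reproduces the tangent-line approximation T₁(x).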
| Object | Notation | What it is | Used for | Caveat |
|---|---|---|---|---|
| Taylor polynomial | Tₙ(x) | Finite sum up to degree n | Approximate f near a | Has error (remainder) |
| Taylor series | ∑_{n=0}^∞ … | Infinite sum | Exact representation when it converges to f | May converge only for \|x−a\| < R; may not equal f |
Define the remainder after degree n as
Rₙ(x) = f(x) − Tₙ(x).
Taylor’s theorem (in one common form) says that if f has n+1 derivatives near a, then
Rₙ(x) = f⁽ⁿ⁺¹⁾(ξ) / (n+1)! · (x−a)ⁿ⁺¹
for some ξ between a and x.
Even if you don’t use this exact form yet, it communicates a key lesson: the error is governed by the next derivative and by the factor (x−a)ⁿ⁺¹, which is tiny when x is close to a.
This explains why Taylor approximations are “local”: the small parameter is (x−a).
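For f(x) = eˣ about 0 with x > 0, the Lagrange form above gives a computable bound, since e^ξ ≤ eˣ for ξ between 0 and x. A quick numerical sketch (the variable names are illustrative):

```python
from math import exp, factorial

# For f = e^x about a = 0 and x > 0, |f^(n+1)(xi)| = e^xi <= e^x,
# so the Lagrange remainder obeys |R_n(x)| <= e^x * x^(n+1) / (n+1)!.
x, n = 0.2, 4
bound = exp(x) * x ** (n + 1) / factorial(n + 1)
T_n = sum(x ** k / factorial(k) for k in range(n + 1))
actual_error = abs(exp(x) - T_n)
print(actual_error, bound)  # the true error stays under the bound
```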
A Taylor series is an infinite sum. Infinite sums are only meaningful if they converge.
For a power series
∑_{n=0}^∞ cₙ (x−a)ⁿ,
typically there exists a number R (0 ≤ R ≤ ∞), called the radius of convergence, such that the series converges for |x−a| < R and diverges for |x−a| > R.
So the “region where the series makes sense” is an interval (a−R, a+R) on the real line, plus possibly its endpoints.
In many calculus settings, R is found via the ratio test. Consider terms
uₙ(x) = cₙ (x−a)ⁿ.
If
lim_{n→∞} |uₙ₊₁(x) / uₙ(x)|
= lim_{n→∞} |cₙ₊₁/cₙ| · |x−a|
= L · |x−a|,
then convergence typically requires L · |x−a| < 1, meaning |x−a| < 1/L. That value is R.
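When the ratio-test limit exists, R can be estimated numerically as |cₙ/cₙ₊₁| for large n. A small sketch under that assumption; `ratio_R_estimate` is an illustrative name:

```python
from math import factorial

def ratio_R_estimate(c, N=50):
    """Estimate R via |c_N / c_{N+1}|, assuming the ratio-test limit exists."""
    return abs(c(N) / c(N + 1))

print(ratio_R_estimate(lambda n: 1.0))                 # geometric series: R = 1
print(ratio_R_estimate(lambda n: 1.0 / factorial(n)))  # e^x: estimate grows with N, reflecting R = infinity
```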
You don’t need every detail of series tests to use Taylor series effectively, but you do need the mindset: every power series comes with a region of validity, and you should know where yours is.
Even within |x−a| < R, the series sum might not equal f(x) unless f is “nice enough.”
A common sufficient condition (not the only one) is: if the remainder Rₙ(x) → 0 as n → ∞ for x in some interval, then
f(x) = lim_{n→∞} Tₙ(x) = ∑_{n=0}^∞ f⁽ⁿ⁾(a)/n! (x−a)ⁿ.
Many standard functions (eˣ, sin x, cos x, ln(1+x) on its interval, rational functions away from poles) behave well.
A powerful intuition: the radius of convergence is often limited by the nearest point where the function “breaks” (e.g., division by zero or non-analytic behavior).
Example intuition: the Maclaurin series of 1/(1+x²) has radius of convergence 1 even though the function is perfectly smooth on the whole real line; the obstruction is the singularity at x = ±i in the complex plane.
You don’t have to master complex analysis here, but this perspective prevents surprises: the convergence radius is about the function’s analytic obstacles, not just real-valued smoothness.
If R is finite, you must check x = a ± R separately. It’s common to see convergence at both endpoints, at exactly one, or at neither.
That endpoint behavior matters when using a series to represent a function on a closed interval.
Taylor series is not just a calculus curiosity. It’s a central tool for numerical approximation, error estimation, evaluating limits, and simplifying hard functions in physics and engineering.
You may already know the tangent-line approximation:
f(x) ≈ f(a) + f′(a)(x−a).
That is exactly T₁(x). Taylor series generalizes this: each higher-degree term adds curvature information that the tangent line misses.
Even in multivariable calculus, a closely related idea appears (Taylor expansion with gradients and Hessians). You’ll later see vectors like x and a, and approximations using ∇f and the Hessian matrix. (In this lesson, we stay 1D, but the conceptual jump is small.)
Certain Maclaurin series appear everywhere:
1) Exponential:
eˣ = ∑_{n=0}^∞ xⁿ/n! = 1 + x + x²/2! + x³/3! + ⋯
2) Sine and cosine:
sin x = ∑_{n=0}^∞ (−1)ⁿ x²ⁿ⁺¹/(2n+1)!
cos x = ∑_{n=0}^∞ (−1)ⁿ x²ⁿ/(2n)!
3) Geometric series (a gateway to many others):
1/(1−x) = ∑_{n=0}^∞ xⁿ for |x| < 1.
From (3), many manipulations become possible: integrate term-by-term, differentiate term-by-term, substitute x → −x, etc.
Suppose you need sin(0.1). A calculator uses algorithms that reduce to polynomial-like approximations internally.
Using Taylor:
sin x ≈ x − x³/3! + x⁵/5!.
At x = 0.1, higher powers shrink rapidly: x³/3! ≈ 1.7 × 10⁻⁴ and x⁵/5! ≈ 8.3 × 10⁻⁸.
So a few terms give high accuracy.
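Those term sizes can be sketched directly (using the sine Maclaurin series introduced above):

```python
from math import factorial, sin

x = 0.1
# Terms of sin x = sum (-1)^k x^(2k+1) / (2k+1)!  for k = 0, 1, 2.
terms = [(-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1) for k in range(3)]
print(terms)               # roughly [0.1, -1.7e-4, 8.3e-8]: each term far smaller
print(sum(terms), sin(x))  # the three-term sum already matches sin(0.1) closely
```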
The center a is not arbitrary; it is a design decision. A good mental model: choose a where the derivatives of f are easy to compute and close to the x-values where you need accuracy. Later you’ll see expansions about nonzero centers chosen for exactly this reason.
Taylor series is one of the main reasons derivatives are so valuable: derivatives are not just slopes—they are information that determines local function behavior to arbitrary order.
We want T₄(x) for f(x) = eˣ at a = 0, then use it to approximate e^0.2.
Compute derivatives:
f(x) = eˣ
f′(x) = eˣ
f″(x) = eˣ
f‴(x) = eˣ
f⁽⁴⁾(x) = eˣ
Evaluate at a = 0:
f(0) = 1
f′(0) = 1
f″(0) = 1
f‴(0) = 1
f⁽⁴⁾(0) = 1
Form the Taylor polynomial:
T₄(x) = ∑_{k=0}^4 f⁽ᵏ⁾(0)/k! · xᵏ
= 1 + x + x²/2! + x³/3! + x⁴/4!
Plug in x = 0.2:
T₄(0.2) = 1 + 0.2 + 0.2²/2 + 0.2³/6 + 0.2⁴/24
= 1 + 0.2 + 0.04/2 + 0.008/6 + 0.0016/24
= 1 + 0.2 + 0.02 + 0.001333… + 0.0000666…
= 1.2214 exactly (the last two terms sum to 0.0014).
Compare with the true value: e^0.2 ≈ 1.221403…
The approximation is already accurate to about five decimal places with only five terms.
Insight: Because eˣ has derivatives that stay the same and factorials grow fast, the terms xⁿ/n! shrink quickly for modest |x|. That’s why truncations of eˣ are especially effective.
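This term decay is easy to watch numerically; a small sketch printing each term xⁿ/n! and the remaining error of the running partial sum:

```python
from math import exp, factorial

# Watch the terms x^n/n! shrink and the partial sums close in on e^0.2.
x = 0.2
partial = 0.0
for n in range(6):
    term = x ** n / factorial(n)
    partial += term
    print(n, term, abs(exp(x) - partial))
```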
We will compute derivatives of sin x at 0, identify the pattern, then write T₅(x) and use it as an approximation near 0.
Start with f(x) = sin x and compute derivatives:
f(x) = sin x
f′(x) = cos x
f″(x) = −sin x
f‴(x) = −cos x
f⁽⁴⁾(x) = sin x
f⁽⁵⁾(x) = cos x
Evaluate at 0:
f(0) = sin 0 = 0
f′(0) = cos 0 = 1
f″(0) = −sin 0 = 0
f‴(0) = −cos 0 = −1
f⁽⁴⁾(0) = sin 0 = 0
f⁽⁵⁾(0) = cos 0 = 1
Write the Taylor polynomial through degree 5:
T₅(x) = f(0) + f′(0)x + f″(0)/2! x² + f‴(0)/3! x³ + f⁽⁴⁾(0)/4! x⁴ + f⁽⁵⁾(0)/5! x⁵
Substitute the values:
T₅(x) = 0 + 1·x + 0·x² + (−1)/3! x³ + 0·x⁴ + 1/5! x⁵
= x − x³/6 + x⁵/120
Interpretation near 0:
sin x ≈ x − x³/6 + x⁵/120
The approximation improves as x gets closer to 0 because higher powers shrink fast.
Insight: Only odd powers appear because sin x is an odd function, and the derivatives at 0 alternate between 0, ±1. Symmetry of the function shows up directly in which Taylor coefficients vanish.
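The locality of this approximation shows up clearly in a numerical comparison (the helper name `T5` is illustrative):

```python
from math import sin

def T5(x):
    """Degree-5 Maclaurin polynomial of sin x."""
    return x - x ** 3 / 6 + x ** 5 / 120

# Accuracy is excellent near 0 and degrades as |x| grows (Taylor is local).
for x in (0.1, 0.5, 1.0, 2.0):
    print(x, T5(x), sin(x), abs(T5(x) - sin(x)))
```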
Consider the geometric series ∑_{n=0}^∞ xⁿ. We’ll see when it converges and why it equals 1/(1−x) there.
Consider partial sums S_N = 1 + x + x² + ⋯ + xᴺ.
Multiply by (1−x):
(1−x)S_N = S_N − xS_N
= (1 + x + x² + ⋯ + xᴺ) − (x + x² + ⋯ + xᴺ + xᴺ⁺¹)
= 1 − xᴺ⁺¹.
So for x ≠ 1:
S_N = (1 − xᴺ⁺¹)/(1−x).
Now take N → ∞. If |x| < 1 then xᴺ⁺¹ → 0, so
lim_{N→∞} S_N = 1/(1−x).
If |x| > 1 then xᴺ⁺¹ does not go to 0, so the series diverges.
At |x| = 1, check endpoints:
x = 1 gives 1 + 1 + 1 + ⋯ diverges.
x = −1 gives 1 − 1 + 1 − 1 + ⋯ does not converge in the usual sense.
Therefore the radius of convergence is R = 1, and the series equals 1/(1−x) for |x| < 1.
Insight: The function 1/(1−x) has a singularity (division by zero) at x = 1, exactly one unit away from the center 0. That nearest breakdown point matches the radius of convergence R = 1.
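The partial-sum formula above can be checked directly; a short sketch comparing S_N with the closed form:

```python
# Partial sums S_N = 1 + x + ... + x^N versus the closed form 1/(1-x).
def S(N, x):
    return sum(x ** n for n in range(N + 1))

x = 0.5
for N in (5, 10, 20):
    print(N, S(N, x), 1 / (1 - x))  # converges inside |x| < 1

print(S(50, 1.5))  # outside the radius, the partial sums blow up
```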
A Taylor series is a power series centered at a: f(x) = ∑_{n=0}^∞ f⁽ⁿ⁾(a)/n! · (x−a)ⁿ (when it converges to f).
The nth-degree Taylor polynomial Tₙ(x) is the truncation up to n; it matches f and its first n derivatives at x = a.
Factorials arise because the kth derivative of (x−a)ᵏ at a equals k!; this forces the coefficient cₖ = f⁽ᵏ⁾(a)/k!.
Taylor approximations are local: accuracy is typically good when |x−a| is small, and improves with higher degree.
Power series have a radius of convergence R: they converge for |x−a| < R and diverge for |x−a| > R; endpoints require separate checks.
Convergence of the Taylor series is not the same as equality to f; to claim representation, you need the remainder Rₙ(x) → 0 (or other analytic guarantees).
Standard expansions (eˣ, sin x, cos x, 1/(1−x), ln(1+x)) are reusable building blocks across calculus and applied math.
Assuming the Taylor series equals the function for all x without checking the radius of convergence (and endpoints).
Forgetting the center a and incorrectly writing powers of x instead of (x−a) when expanding around a ≠ 0.
Dropping the factorial: coefficients are f⁽ⁿ⁾(a)/n!, not just f⁽ⁿ⁾(a).
Using a low-degree polynomial far from a and expecting good accuracy (Taylor is local, not global).
Compute the 3rd-degree Taylor polynomial T₃(x) for f(x) = ln(1+x) centered at a = 0.
Hint: Differentiate ln(1+x) repeatedly and evaluate at x = 0. Watch the alternating signs and factorials.
f(x) = ln(1+x)
Derivatives:
f′(x) = 1/(1+x)
f″(x) = −1/(1+x)²
f‴(x) = 2/(1+x)³
Evaluate at 0:
f(0) = 0
f′(0) = 1
f″(0) = −1
f‴(0) = 2
Taylor polynomial:
T₃(x) = f(0) + f′(0)x + f″(0)/2! x² + f‴(0)/3! x³
= 0 + x + (−1)/2 x² + 2/6 x³
= x − x²/2 + x³/3.
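The result can be sanity-checked numerically against Python's `math.log1p`, which computes ln(1+x); the helper name `T3` is illustrative:

```python
from math import log1p

def T3(x):
    """Degree-3 Maclaurin polynomial of ln(1+x)."""
    return x - x ** 2 / 2 + x ** 3 / 3

# The cubic already tracks ln(1+x) well for small x.
for x in (0.1, 0.3):
    print(x, T3(x), log1p(x), abs(T3(x) - log1p(x)))
```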
Find the radius of convergence of the power series ∑_{n=1}^∞ n(x−2)ⁿ.
Hint: Use the ratio test on uₙ = n(x−2)ⁿ. Simplify |uₙ₊₁/uₙ|.
Let uₙ = n(x−2)ⁿ.
Compute the ratio:
|uₙ₊₁ / uₙ| = |(n+1)(x−2)ⁿ⁺¹| / |n(x−2)ⁿ|
= (n+1)/n · |x−2|
= (1 + 1/n) |x−2|.
Take n → ∞:
lim_{n→∞} |uₙ₊₁/uₙ| = 1 · |x−2| = |x−2|.
Ratio test gives convergence when |x−2| < 1.
So the radius of convergence is R = 1 (center a = 2).
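The conclusion can be illustrated by computing partial sums on either side of the radius (the helper name `partial` is illustrative):

```python
# sum_{n>=1} n*(x-2)^n converges inside |x-2| < 1 and diverges outside.
def partial(N, x):
    return sum(n * (x - 2) ** n for n in range(1, N + 1))

print(partial(100, 2.5))  # inside (|x-2| = 0.5): settles near 2
print(partial(100, 3.5))  # outside (|x-2| = 1.5): blows up
```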
Use the Maclaurin polynomial for sin x up to x⁵ to approximate sin(0.3). Give the numerical value of x − x³/6 + x⁵/120 at x = 0.3.
Hint: Compute 0.3³ and 0.3⁵, then apply the coefficients 1/6 and 1/120.
Use T₅(x) = x − x³/6 + x⁵/120.
At x = 0.3:
0.3³ = 0.027
0.3⁵ = 0.3²·0.3³ = 0.09·0.027 = 0.00243
Compute:
T₅(0.3) = 0.3 − 0.027/6 + 0.00243/120
= 0.3 − 0.0045 + 0.00002025
= 0.29552025.
So sin(0.3) ≈ 0.29552025 using the 5th-degree Maclaurin approximation.