The inner product of x = (2, 1, 0) and y = (−1, 2, 0) is zero:
xᵀy = yᵀx = 0
Zero is the only vector orthogonal to itself.
Zero is the only vector orthogonal to every other vector.
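A quick numerical check of the facts above, sketched with NumPy (the example vectors are those from the text):

```python
import numpy as np

x = np.array([2.0, 1.0, 0.0])
y = np.array([-1.0, 2.0, 0.0])

# Inner product: x^T y = 2*(-1) + 1*2 + 0*0 = 0, so x and y are orthogonal
print(x @ y)            # 0.0

# The inner product is symmetric: x^T y = y^T x
print(x @ y == y @ x)   # True

# The zero vector is orthogonal to itself
z = np.zeros(3)
print(z @ z)            # 0.0
```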
Types of Subspaces in ℝ³
A subspace must always contain the origin. Four types in 3D:
0D: The origin alone, Z = {0}
1D: Any line through origin
2D: Any plane through origin
3D: Entire ℝ³ space
Orthogonal Subspaces
Subspaces S and T are orthogonal if every x∈S is orthogonal to every y∈T.
xᵀy = 0
Z={0} is orthogonal to all subspaces.
In ℝ³, a line can be orthogonal to a plane.
In ℝ³, two planes cannot be orthogonal. → Tab 4
dim S + dim T ≤ n
Orthogonal complements: dim S + dim T = n
Here n=3, two lines: 1+1=2 ≤ 3 ✓
"A plane cannot be orthogonal to another plane"
In ℝ³, two 2D planes must intersect along a line. They share a common vector v (highlighted in purple in the visualization).
Since v lies in both planes and vᵀv ≠ 0, the planes cannot be orthogonal subspaces.
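A concrete check of this argument, using two illustrative planes (the xy- and yz-planes; the specific choice is my own):

```python
import numpy as np

# Columns span each plane: P1 = xy-plane, P2 = yz-plane in R^3
P1 = np.column_stack([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
P2 = np.column_stack([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])

# The planes intersect along the y-axis; v lies in both
v = np.array([0.0, 1.0, 0.0])

# v is a column of both P1 and P2, and v^T v != 0, so not every
# vector of the first plane is orthogonal to every vector of the second
print(v @ v)   # 1.0
```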
Orthogonal Complement
Given a subspace V of ℝⁿ, the space of all vectors orthogonal to V is called the orthogonal complement of V, written V⊥ and read as "V perp".
Let W = V⊥ — the orthogonal complement of V.
If W = V⊥, then V = W⊥ and (V⊥)⊥ = V
dim V + dim W = n
Here n=3, dim V=1 → dim V⊥=2 ✓
The whole space ℝⁿ is decomposed into two perpendicular parts V and W = V⊥.
Every vector in ℝⁿ splits uniquely into a part in V and a part in V⊥.
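The unique split can be verified numerically. In this sketch V is the line through a, so V⊥ is the plane orthogonal to a (the example values are my own):

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])   # V = line through a; V-perp is a plane
b = np.array([3.0, 0.0, 3.0])

p = (a @ b) / (a @ a) * a       # part of b in V
e = b - p                       # part of b in V-perp

print(np.isclose(a @ e, 0.0))   # True: e is orthogonal to V
print(np.allclose(p + e, b))    # True: b = p + e
```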
Projection
Closest point in a subspace to b.
■ b — original vector
■ p — projection onto line a
■ e = b − p — error (perpendicular to a)
x̂ = aᵀb / aᵀa,  p = x̂a
Projection Matrix (onto line)
P = aaᵀ / aᵀa
p = Pb
P is symmetric: P = Pᵀ
Its square is itself: P² = P
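Both properties are easy to confirm numerically for a sample a (my own choice):

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
P = np.outer(a, a) / (a @ a)    # P = a a^T / a^T a

print(np.allclose(P, P.T))      # True: symmetric
print(np.allclose(P @ P, P))    # True: P^2 = P
```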
Cauchy-Schwarz Inequality
Derived from the fact that the error e = b − p has squared length ‖e‖² ≥ 0:
|aᵀb| ≤ ‖a‖ ‖b‖
Dividing both sides by ‖a‖‖b‖:
|aᵀb| / (‖a‖‖b‖) ≤ 1
That ratio defines the angle between a and b:
cos θ = aᵀb / (‖a‖ ‖b‖)
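A numerical sketch of the inequality and the angle formula (sample vectors are my own):

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0])

lhs = abs(a @ b)                               # |a^T b|
rhs = np.linalg.norm(a) * np.linalg.norm(b)    # ||a|| ||b||
print(lhs <= rhs)                              # True (Cauchy-Schwarz)

cos_theta = (a @ b) / rhs
print(np.degrees(np.arccos(cos_theta)))        # angle in degrees, ~45 here
```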
■ b — original vector
■ Plane — the subspace (columns of A)
■ p — projection onto plane
■ e = b − p — error (perpendicular to plane)
x̂ = (AᵀA)⁻¹Aᵀb,  p = Ax̂
Projection Matrix (onto plane)
P = A(AᵀA)⁻¹Aᵀ
p = Pb
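The plane formula can be checked numerically; the matrix A below is an illustrative choice of mine:

```python
import numpy as np

# Columns of A span a plane in R^3
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 6.0])

P = A @ np.linalg.inv(A.T @ A) @ A.T   # P = A (A^T A)^{-1} A^T
p = P @ b                              # projection of b onto the plane
e = b - p                              # error

print(np.allclose(A.T @ e, 0.0))       # True: e is perpendicular to the plane
print(np.allclose(P @ P, P))           # True: P^2 = P
```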
Remarks ▶
Remark 1 — Properties of P
The projection matrix P = A(AᵀA)⁻¹Aᵀ has two basic properties:
(i) It equals its square: P² = P
(ii) It equals its transpose: Pᵀ = P
In addition, I − P projects onto the orthogonal complement.
Conversely, any symmetric matrix with P² = P represents a projection.
Note: There may be a risk of confusion with permutation matrices, also denoted by P, but the risk should be small.
Remark 2 — b in column space
If b = Ax is already in the column space, projection gives back b itself:
p = A(AᵀA)⁻¹Aᵀ(Ax) = Ax = b
Remark 3 — b in left nullspace
If b ⊥ every column, then Aᵀb = 0 and the projection is zero:
p = A(AᵀA)⁻¹ · 0 = 0
Remark 4 — A invertible
The column space is all of ℝⁿ, so every vector projects to itself:
p = AA⁻¹(Aᵀ)⁻¹Aᵀb = b
Splitting (AᵀA)⁻¹ into A⁻¹(Aᵀ)⁻¹ is only valid for square invertible A, not for rectangular A.
Remark 5 — A has one column
AᵀA reduces to the scalar aᵀa, returning to the line formula:
p = a(aᵀb) / aᵀa
Least Squares
Fit the best line through data with no exact solution.
◆ Familiar equation
y = mx + c
◆ Least squares line
Ax̂ = A(AᵀA)⁻¹Aᵀb = Pb
◆ What does each symbol mean?
In y = mx + c
x → input (independent variable)
y → output
m, c → unknowns (what we want to find)
In least squares
A → matrix of inputs
b → outputs (observed data)
x → unknown parameters
👉 Here, x = parameters (like m and c), not the input!
◆ Projection Matrix
p = Pb,  P = A(AᵀA)⁻¹Aᵀ
The least squares solution is:
x̂ = (AᵀA)⁻¹Aᵀb
If you plug this back into Ax̂:
Ax̂ = A(AᵀA)⁻¹Aᵀb = Pb
👉 So:
Ax̂ = A(AᵀA)⁻¹Aᵀb = Pb — the least-squares line
P = projection matrix
Least squares = computing that projection
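As a worked sketch, here is a line fit through three points that no single line passes through (the data points are my own):

```python
import numpy as np

# Three data points (x_i, y_i) with no exact line through them
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([6.0, 0.0, 0.0])

# Each row of A is [x_i, 1]; the unknowns are the parameters (m, c)
A = np.column_stack([xs, np.ones_like(xs)])

# Normal equations: x_hat = (A^T A)^{-1} A^T b
m, c = np.linalg.inv(A.T @ A) @ A.T @ ys
print(m, c)   # ~-3.0 and ~5.0: best line is y = -3x + 5

# Same answer from NumPy's least-squares routine
m2, c2 = np.linalg.lstsq(A, ys, rcond=None)[0]
print(np.allclose([m, c], [m2, c2]))   # True
```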
The weighted least-squares problem leading to x̂W comes from changing
Ax = b
to the new system
WAx = Wb.
This changes the solution from x̂ to x̂W.
The matrix WTW turns up on both sides of the weighted normal equations:
Weighted Normal Equations
(AᵀWᵀWA)x̂W = AᵀWᵀWb
x̂W = (AᵀWᵀWA)⁻¹AᵀWᵀWb
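A sketch of the weighted equations, assuming a diagonal weight matrix W (the data and weights are my own illustration):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 1.0],
              [2.0, 1.0]])
b = np.array([6.0, 0.0, 0.0])

# Diagonal W weights the equations (here, trusting the first one most)
W = np.diag([2.0, 1.0, 1.0])

# Weighted normal equations: (A^T W^T W A) x_hat_W = A^T W^T W b
x_w = np.linalg.solve(A.T @ W.T @ W @ A, A.T @ W.T @ W @ b)

# Equivalent to ordinary least squares on the scaled system WAx = Wb
x_w2 = np.linalg.lstsq(W @ A, W @ b, rcond=None)[0]
print(np.allclose(x_w, x_w2))   # True
```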
Fundamental Subspaces
For an m×n matrix A of rank r, there are four fundamental subspaces.
■ Row space C(Aᵀ) ⊂ ℝⁿ, dim = r
■ Nullspace N(A) ⊂ ℝⁿ, dim = n−r
■ Column space C(A) ⊂ ℝᵐ, dim = r
■ Left nullspace N(Aᵀ) ⊂ ℝᵐ, dim = m−r
Axᵣ = b | Axₙ = 0 | Ax = b
Hover over regions in the diagram to see mappings.
Rank-Nullity Theorem
The dimensions of the four subspaces satisfy two fundamental identities:
r + (n−r) = n (in ℝⁿ)
r + (m−r) = m (in ℝᵐ)
dim C(Aᵀ) + dim N(A) = n
dim C(A) + dim N(Aᵀ) = m
Row space and nullspace are orthogonal complements in ℝⁿ.
Column space and left nullspace are orthogonal complements in ℝᵐ.
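The dimension counts and the orthogonality of row space and nullspace can be checked numerically; the rank-2 matrix below is an illustrative choice of mine:

```python
import numpy as np

# An illustrative 3x4 matrix: row 3 = row 1 + row 2, so rank r = 2
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 1.0],
              [1.0, 2.0, 1.0, 2.0]])
m, n = A.shape
r = np.linalg.matrix_rank(A)
print(r, n - r, m - r)   # 2 2 1: dims of C(A^T)/N(A) and C(A)/N(A^T)

# Nullspace basis from the SVD: the last n - r right singular vectors
_, _, Vt = np.linalg.svd(A)
N = Vt[r:].T                      # columns span N(A), dim = n - r

# Row space ⟂ nullspace: every row of A annihilates N(A)
print(np.allclose(A @ N, 0.0))    # True
```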
Orthonormal
See the notes panel →
◆ Definition
Vectors q₁, q₂, ..., qₙ are orthonormal if:
qᵢᵀqⱼ = 0 (i ≠ j),  qᵢᵀqᵢ = 1
QᵀQ = I
◆ Orthogonal Matrix Q
When Q is square, QᵀQ = I means Qᵀ = Q⁻¹:
QᵀQ = QQᵀ = I
Columns of Q are orthonormal
Q preserves lengths: ‖Qx‖ = ‖x‖
Q preserves dot products: (Qx)ᵀ(Qy) = xᵀy
◆ Projection with Orthonormal Basis
When columns of A are orthonormal (A = Q), projection simplifies:
P = QQᵀ
Least squares solution — no matrix inversion needed:
x̂ = Qᵀb
Projection onto a plane = sum of projections onto orthonormal q₁ and q₂:
p = (q₁ᵀb)q₁ + (q₂ᵀb)q₂ = QQᵀb
◆ Remark 1
Every vector b is the sum of its one-dimensional projections onto the lines through the q's:
b = (q₁ᵀb)q₁ + (q₂ᵀb)q₂ + ··· + (qₙᵀb)qₙ
= QQᵀb = b  (since QQᵀ = I when Q is square)
◆ Remark 2
The rows of a square matrix are orthonormal whenever the columns are. Both QᵀQ = I and QQᵀ = I hold.
◆ Rectangular Matrices with Orthogonal Columns
Now consider Qx = b when Q is not square: there are more rows than columns. The n orthonormal vectors qᵢ in the columns of Q have m > n components, so Q is an m × n matrix and we cannot expect to solve Qx = b exactly. We solve it by least squares.
Orthonormal columns should make the problem simple. We still have QᵀQ = I, so Qᵀ is still the left-inverse of Q.
The normal equations come from multiplying Ax = b by Aᵀ, giving AᵀAx̂ = Aᵀb. With A = Q they become QᵀQx̂ = Qᵀb. But QᵀQ = I! Therefore:
x̂ = Qᵀb
Whether Q is square and x̂ is an exact solution, or Q is rectangular and we need least squares — the answer is always x̂ = Qᵀb.
The projection onto the column space of Q is:
p = Qx̂ = QQᵀb
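A sketch with a rectangular Q whose columns are orthonormal (the particular Q and b are my own):

```python
import numpy as np

# Q is 3x2 with orthonormal columns: m > n, so Qx = b is overdetermined
Q = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
b = np.array([1.0, 2.0, 3.0])

print(np.allclose(Q.T @ Q, np.eye(2)))   # True: Q^T Q = I

x_hat = Q.T @ b          # least-squares solution, no inversion needed
p = Q @ x_hat            # projection p = Q Q^T b
print(p)                 # [1. 2. 0.]

# Matches the general least-squares answer
x_ls = np.linalg.lstsq(Q, b, rcond=None)[0]
print(np.allclose(x_hat, x_ls))   # True
```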
◆ Gram-Schmidt Process
Converts independent vectors a, b into orthonormal q₁, q₂:
q₁ = a / ‖a‖
b* = b − (q₁ᵀb)q₁,  q₂ = b* / ‖b*‖
Gives the QR decomposition: A = QR, where R is upper triangular.
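The two-vector process above can be sketched directly (the input vectors a and b are my own choice):

```python
import numpy as np

a = np.array([1.0, 1.0, 0.0])
b = np.array([1.0, 0.0, 1.0])

# Gram-Schmidt: normalize a, then subtract a's direction from b
q1 = a / np.linalg.norm(a)
b_star = b - (q1 @ b) * q1
q2 = b_star / np.linalg.norm(b_star)

print(np.isclose(q1 @ q2, 0.0))              # True: orthogonal
print(np.isclose(np.linalg.norm(q2), 1.0))   # True: unit length

# A = QR with R = Q^T A upper triangular
A = np.column_stack([a, b])
Qm = np.column_stack([q1, q2])
R = Qm.T @ A
print(np.isclose(R[1, 0], 0.0))              # True: R is upper triangular
print(np.allclose(Qm @ R, A))                # True: A = QR
```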
Linear Algebra Visualizer — Interactive 3D Vector and Matrix Transformation Tool
A free interactive tool for learning linear algebra visually: explore orthogonal vectors, inner products, matrix and vector transformations, projection matrices, and the four fundamental subspaces in interactive 3D.