The Little Book of Linear Algebra
Chapter 1. Vectors, Scalars, and Geometry
- Scalars, vectors, and coordinate systems (what they are, why we care)
- Vector notation, components, and arrows (reading and writing vectors)
- Vector addition and scalar multiplication (the two basic moves)
- Linear combinations and span (building new vectors from old ones)
- Length (norm) and distance (how big and how far)
- Dot product (algebraic and geometric views)
- Angles between vectors and cosine (measuring alignment)
- Projections and decompositions (splitting along a direction)
- Cauchy–Schwarz and triangle inequalities (two fundamental bounds)
- Orthonormal sets in ℝ²/ℝ³ (nice bases you already know)
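To make Chapter 1 concrete, here is a minimal NumPy sketch (an illustration of mine, not code from the book) of the dot product, norm, angle, and projection onto a direction:

```python
import numpy as np

u = np.array([3.0, 4.0])
v = np.array([1.0, 0.0])

dot = u @ v                      # algebraic dot product: 3*1 + 4*0 = 3
norm_u = np.linalg.norm(u)       # length of u: sqrt(3^2 + 4^2) = 5

# Angle via cos(theta) = (u . v) / (|u| |v|)
cos_theta = dot / (norm_u * np.linalg.norm(v))
angle = np.degrees(np.arccos(cos_theta))

# Projection of u onto v: ((u . v) / (v . v)) * v, plus the leftover
proj = (dot / (v @ v)) * v
perp = u - proj                  # component of u orthogonal to v

print(dot, norm_u, angle)        # 3.0 5.0 53.13...
print(proj, perp)                # [3. 0.] [0. 4.]
```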
Chapter 2. Matrices and Basic Operations
- Matrices as tables and as machines (two mental models)
- Matrix shapes, indexing, and block views (seeing structure)
- Matrix addition and scalar multiplication (componentwise rules)
- Matrix–vector product (linear combos of columns)
- Matrix–matrix product (composition of linear steps)
- Identity, inverse, and transpose (three special friends)
- Symmetric, diagonal, triangular, and permutation matrices (special families)
- Trace and basic matrix properties (quick invariants)
- Affine transforms and homogeneous coordinates (translations included)
- Computing with matrices (operation counts and simple speedups)
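A quick sketch of Chapter 2's central identity, again my own illustration: the matrix–vector product is a linear combination of the matrix's columns.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])       # 3x2 matrix: a machine from R^2 to R^3
x = np.array([10.0, 1.0])

# Matrix-vector product two ways: the built-in product, and the same
# answer as a linear combination of A's columns weighted by x.
y_builtin = A @ x
y_columns = x[0] * A[:, 0] + x[1] * A[:, 1]

print(y_builtin)                          # [12. 34. 56.]
print(np.allclose(y_builtin, y_columns))  # True
```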
Chapter 3. Linear Systems and Elimination
- From equations to matrices (augmenting and encoding)
- Row operations (legal moves that keep solutions)
- Row-echelon and reduced row-echelon forms (target shapes)
- Pivots, free variables, and leading ones (reading solutions)
- Solving consistent systems (unique vs. infinite solutions)
- Detecting inconsistency (when no solution exists)
- Gaussian elimination by hand (a disciplined procedure)
- Back substitution and solution sets (finishing cleanly)
- Rank and its first meaning (pivots as information)
- LU factorization (elimination captured as L and U)
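Chapter 3's disciplined procedure fits in a few lines. A hand-rolled sketch (for intuition, not production code) of elimination with partial pivoting followed by back substitution:

```python
import numpy as np

def solve_by_elimination(A, b):
    """Gaussian elimination with partial pivoting, then back substitution."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    # Forward elimination: zero out entries below each pivot.
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))            # partial pivot row
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]    # swap rows
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the now upper-triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
b = np.array([4.0, 10.0, 24.0])
x = solve_by_elimination(A, b)
print(x, np.allclose(A @ x, b))   # [1. 1. 1.] True
```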
Chapter 4. Vector Spaces and Subspaces
- Axioms of vector spaces (what “space” really means)
- Subspaces, column space, and null space (where solutions live)
- Span and generating sets (coverage of a space)
- Linear independence and dependence (no redundancy vs. redundancy)
- Basis and coordinates (naming every vector uniquely)
- Dimension (how many directions)
- Rank–nullity theorem (dimensions that add up)
- Coordinates relative to a basis (changing the “ruler”)
- Change-of-basis matrices (moving between coordinate systems)
- Affine subspaces (lines and planes not through the origin)
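The rank–nullity theorem from Chapter 4 can be checked numerically. A small sketch, using a deliberately rank-deficient matrix of my own choosing:

```python
import numpy as np

# A rank-deficient 3x4 matrix: column 3 = col 1 + col 2,
# column 4 = 2 * col 1, so only two independent directions remain.
A = np.array([[1.0, 0.0, 1.0, 2.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 1.0, 2.0, 2.0]])

rank = np.linalg.matrix_rank(A)        # dimension of the column space

# Null space basis via SVD: right singular vectors whose singular
# values are (numerically) zero span the null space.
_, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s[0]
null_basis = Vt[np.sum(s > tol):].T    # columns span the null space
nullity = null_basis.shape[1]

print(rank, nullity, rank + nullity == A.shape[1])   # 2 2 True
```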
Chapter 5. Linear Transformations and Structure
- Linear transformations (preserving lines and sums)
- Matrix representation of a linear map (choosing a basis)
- Kernel and image (inputs that vanish; outputs we can reach)
- Invertibility and isomorphisms (perfectly reversible maps)
- Composition, powers, and iteration (doing it again and again)
- Similarity and conjugation (same action, different basis)
- Projections and reflections (idempotent and involutive maps)
- Rotations and shears (geometric intuition)
- Rank and operator viewpoint (rank beyond elimination)
- Block matrices and block maps (divide and conquer structure)
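A taste of Chapter 5 in code (my illustration): rotations, shears, composition as matrix multiplication, and an idempotent projection.

```python
import numpy as np

theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],   # rotation by 90 degrees
              [np.sin(theta),  np.cos(theta)]])
S = np.array([[1.0, 1.0],                        # horizontal shear
              [0.0, 1.0]])

e1 = np.array([1.0, 0.0])
print(R @ e1)                     # ~[0, 1]: e1 rotated a quarter turn
print(S @ np.array([0.0, 1.0]))   # [1, 1]: top of the unit square slides right

# Composition is matrix multiplication, and order matters.
print(np.allclose(R @ S, S @ R))  # False: these two maps do not commute

# A projection P satisfies P @ P = P (idempotent); here, onto the x-axis.
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])
print(np.allclose(P @ P, P))      # True
```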
Chapter 6. Determinants and Volume
- Areas, volumes, and signed scale factors (geometric entry point)
- Determinant via linear rules (multilinearity, sign, normalization)
- Determinant and row operations (how each move changes det)
- Triangular matrices and product of diagonals (fast wins)
- det(AB) = det(A)det(B) (multiplicative magic)
- Invertibility and zero determinant (flat vs. full volume)
- Cofactor expansion (Laplace’s method)
- Permutations and sign (the combinatorial core)
- Cramer’s rule (solving with determinants, and when not to use it)
- Computing determinants in practice (use LU, mind stability)
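Chapter 6's headline facts are easy to verify numerically. A short sketch of the determinant as a signed scale factor, its multiplicativity, and collapse to zero:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])        # a row swap as a permutation matrix

print(np.linalg.det(A))           # 2*3 - 1*1 = 5: the area scale factor of A
print(np.linalg.det(B))           # -1: a swap flips orientation (the sign)

# Multiplicativity: det(AB) = det(A) det(B)
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))   # True

# Singular matrix: columns are parallel, so volume collapses to zero.
C = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.isclose(np.linalg.det(C), 0.0))   # True, so C is not invertible
```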
Chapter 7. Eigenvalues, Eigenvectors, and Dynamics
- Eigenvalues and eigenvectors (directions that stay put)
- Characteristic polynomial (where eigenvalues come from)
- Algebraic vs. geometric multiplicity (how many and how independent)
- Diagonalization (when a matrix becomes simple)
- Powers of a matrix (long-term behavior via eigenvalues)
- Real vs. complex spectra (rotations and oscillations)
- Defective matrices and a peek at Jordan form (when diagonalization fails)
- Stability and spectral radius (grow, decay, or oscillate)
- Markov chains and steady states (probabilities as linear algebra)
- Linear differential systems (solutions via eigen-decomposition)
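A sketch tying Chapter 7 together, with a made-up two-state chain: the steady state of a Markov matrix is its eigenvector for eigenvalue 1, and matrix powers converge to it.

```python
import numpy as np

# A column-stochastic matrix: each column is a probability distribution.
P = np.array([[0.9, 0.2],
              [0.1, 0.8]])

vals, vecs = np.linalg.eig(P)
print(vals)                      # contains 1.0; the other eigenvalue is 0.7

# Steady state: the eigenvector for eigenvalue 1, rescaled to sum to 1.
i = np.argmin(np.abs(vals - 1.0))
steady = vecs[:, i] / vecs[:, i].sum()
print(steady)                    # [2/3, 1/3]

# Powers of P drive any starting distribution toward the steady state,
# at a rate set by the second eigenvalue (0.7 here).
x = np.array([1.0, 0.0])
print(np.linalg.matrix_power(P, 50) @ x)   # ~[0.667, 0.333]
```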
Chapter 8. Orthogonality, Least Squares, and QR
- Inner products beyond dot product (custom notions of angle)
- Orthogonality and orthonormal bases (perpendicular power)
- Gram–Schmidt process (constructing orthonormal bases)
- Orthogonal projections onto subspaces (closest point principle)
- Least-squares problems (fit when exact solve is impossible)
- Normal equations and geometry of residuals (why it works)
- QR factorization (stable least squares via orthogonality)
- Orthogonal matrices (length-preserving transforms)
- Fourier viewpoint (expanding in orthogonal waves)
- Polynomial and multifeature least squares (fitting more flexibly)
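Chapter 8's workhorse in a few lines, on invented data points: least squares via QR, with the residual orthogonal to the column space, which is exactly what the normal equations say.

```python
import numpy as np

# Fit y ~ c0 + c1 * t to noisy points: an overdetermined system X c ~ y.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])
X = np.column_stack([np.ones_like(t), t])    # design matrix [1, t]

# Least squares via QR: X = QR, then solve the triangular R c = Q^T y.
Q, R = np.linalg.qr(X)
c = np.linalg.solve(R, Q.T @ y)
print(c)                                     # ~[1.09, 1.94]

# The residual is orthogonal to the column space of X.
r = y - X @ c
print(np.allclose(X.T @ r, 0.0))             # True up to rounding
```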
Chapter 9. SVD, PCA, and Conditioning
- Singular values and SVD (universal factorization)
- Geometry of SVD (rotations + stretching)
- Relation to eigen-decompositions (AᵀA and AAᵀ)
- Low-rank approximation (best small models)
- Principal component analysis (variance and directions)
- Pseudoinverse (Moore–Penrose) and solving ill-posed systems
- Conditioning and sensitivity (how errors amplify)
- Matrix norms and singular values (measuring size properly)
- Regularization (ridge/Tikhonov to tame instability)
- Rank-revealing QR and practical diagnostics (what rank really is)
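A sketch of Chapter 9's main moves on synthetic data: the SVD exposes a dominant direction, truncating it gives the best low-rank approximation (Eckart–Young), and the same machinery yields PCA.

```python
import numpy as np

rng = np.random.default_rng(0)
# Data that is nearly rank 1: one strong direction plus small noise.
A = np.outer(rng.standard_normal(50), np.array([3.0, 2.0, 1.0]))
A += 0.01 * rng.standard_normal((50, 3))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(s)                       # one large singular value, two tiny ones

# Best rank-1 approximation: keep only the top singular triple.
A1 = s[0] * np.outer(U[:, 0], Vt[0])
print(np.linalg.norm(A - A1))  # small: A is well explained by rank 1

# PCA flavor: the top right singular vector of the centered data is the
# direction of maximum variance.
Ac = A - A.mean(axis=0)
_, _, Vt_c = np.linalg.svd(Ac, full_matrices=False)
print(Vt_c[0])                 # ~ +/- [3, 2, 1] normalized
```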
Chapter 10. Applications and Computation
- 2D/3D geometry pipelines (cameras, rotations, and transforms)
- Computer graphics and robotics (homogeneous tricks in action)
- Graphs, adjacency, and Laplacians (networks via matrices)
- Data preprocessing as linear ops (centering, whitening, scaling)
- Linear regression and classification (from model to matrix)
- PCA in practice (dimensionality reduction workflow)
- Recommender systems and low-rank models (fill the missing entries)
- PageRank and random walks (ranking with eigenvectors)
- Numerical linear algebra essentials (floating point, BLAS/LAPACK)
- Capstone problem sets and next steps (a roadmap to mastery)
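As a parting sketch of Chapter 10's flavor, here is PageRank by power iteration on a toy 4-page web (the link structure is invented for illustration):

```python
import numpy as np

# Column-stochastic link matrix: entry (i, j) is the probability of
# following a link from page j to page i.
L = np.array([[0.0, 0.5, 0.0, 0.0],
              [1.0, 0.0, 0.0, 0.5],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])

d = 0.85                                   # damping factor
n = L.shape[0]
G = d * L + (1 - d) / n * np.ones((n, n))  # Google matrix: still stochastic

# Power iteration: repeated multiplication converges to the dominant
# eigenvector (eigenvalue 1), which is the PageRank vector.
r = np.full(n, 1.0 / n)
for _ in range(100):
    r = G @ r
print(r, r.sum())                          # page ranks, summing to 1
```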