The Little Book of Linear Algebra

Chapter 1. Vectors, Scalars, and Geometry

  1. Scalars, vectors, and coordinate systems (what they are, why we care)
  2. Vector notation, components, and arrows (reading and writing vectors)
  3. Vector addition and scalar multiplication (the two basic moves)
  4. Linear combinations and span (building new vectors from old ones)
  5. Length (norm) and distance (how big and how far)
  6. Dot product (algebraic and geometric views)
  7. Angles between vectors and cosine (measuring alignment)
  8. Projections and decompositions (splitting along a direction; sketch below)
  9. Cauchy–Schwarz and triangle inequalities (two fundamental bounds)
  10. Orthonormal sets in ℝ²/ℝ³ (nice bases you already know)
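
A minimal sketch of item 8, assuming Python with NumPy as the tooling (the book itself does not fix a language): project b onto the line spanned by a and keep the perpendicular remainder.

```python
import numpy as np

a = np.array([3.0, 1.0])           # direction to project onto (made-up example)
b = np.array([2.0, 4.0])           # vector to decompose

# projection of b onto the line spanned by a: (a·b / a·a) a
proj = (a @ b) / (a @ a) * a
resid = b - proj                   # component of b orthogonal to a

print(proj, resid)
print(np.isclose(a @ resid, 0.0))  # the residual is perpendicular to a
```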

Chapter 2. Matrices and Basic Operations

  1. Matrices as tables and as machines (two mental models)
  2. Matrix shapes, indexing, and block views (seeing structure)
  3. Matrix addition and scalar multiplication (componentwise rules)
  4. Matrix–vector product (linear combos of columns; sketch below)
  5. Matrix–matrix product (composition of linear steps)
  6. Identity, inverse, and transpose (three special friends)
  7. Symmetric, diagonal, triangular, and permutation matrices (special families)
  8. Trace and basic matrix properties (quick invariants)
  9. Affine transforms and homogeneous coordinates (translations included)
  10. Computing with matrices (cost counts and simple speedups)
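
A quick numerical check of item 4's claim, again a NumPy sketch under the same tooling assumption: Ax really is the combination of A's columns weighted by the entries of x.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [4.0, 3.0]])          # made-up 3x2 example
x = np.array([2.0, -1.0])

# Ax is the linear combination x[0]*A[:,0] + x[1]*A[:,1]
by_columns = x[0] * A[:, 0] + x[1] * A[:, 1]

print(A @ x)                        # built-in matrix–vector product
print(np.allclose(A @ x, by_columns))
```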

Chapter 3. Linear Systems and Elimination

  1. From equations to matrices (augmenting and encoding)
  2. Row operations (legal moves that keep solutions)
  3. Row-echelon and reduced row-echelon forms (target shapes)
  4. Pivots, free variables, and leading ones (reading solutions)
  5. Solving consistent systems (unique vs. infinite solutions)
  6. Detecting inconsistency (when no solution exists)
  7. Gaussian elimination by hand (a disciplined procedure)
  8. Back substitution and solution sets (finishing cleanly)
  9. Rank and its first meaning (pivots as information)
  10. LU factorization (elimination captured as L and U; sketch below)
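
A small sketch of item 10 using scipy.linalg.lu (an assumed library choice, not prescribed by the book): elimination recorded as a permutation P, a unit lower-triangular L, and an upper-triangular U with A = PLU.

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])      # made-up example

P, L, U = lu(A)                      # A = P @ L @ U with row pivoting

print(np.allclose(P @ L @ U, A))     # factorization reproduces A
print(np.allclose(np.tril(L), L))    # L is lower triangular (unit diagonal)
print(np.allclose(np.triu(U), U))    # U is upper triangular
```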

Chapter 4. Vector Spaces and Subspaces

  1. Axioms of vector spaces (what “space” really means)
  2. Subspaces, column space, and null space (where solutions live)
  3. Span and generating sets (coverage of a space)
  4. Linear independence and dependence (no redundancy vs. redundancy)
  5. Basis and coordinates (naming every vector uniquely)
  6. Dimension (how many directions)
  7. Rank–nullity theorem (dimensions that add up; sketch below)
  8. Coordinates relative to a basis (changing the “ruler”)
  9. Change-of-basis matrices (moving between coordinate systems)
  10. Affine subspaces (lines and planes not through the origin)
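
A sketch of item 7's bookkeeping, assuming NumPy and SciPy: for a matrix with n columns, the rank and the dimension of the null space add up to n.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])      # rank-1 matrix with 3 columns

rank = np.linalg.matrix_rank(A)
N = null_space(A)                    # orthonormal basis for the null space

print(rank, N.shape[1])              # rank + nullity should equal 3
print(rank + N.shape[1] == A.shape[1])
```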

Chapter 5. Linear Transformations and Structure

  1. Linear transformations (preserving lines and sums)
  2. Matrix representation of a linear map (choosing a basis)
  3. Kernel and image (inputs that vanish; outputs we can reach)
  4. Invertibility and isomorphisms (perfectly reversible maps)
  5. Composition, powers, and iteration (doing it again and again)
  6. Similarity and conjugation (same action, different basis)
  7. Projections and reflections (idempotent and involutive maps)
  8. Rotations and shear (geometric intuition; sketch below)
  9. Rank and operator viewpoint (rank beyond elimination)
  10. Block matrices and block maps (divide and conquer structure)
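
A sketch of item 8's geometry in NumPy (assumed tooling): a rotation followed by a shear is the single matrix obtained by multiplying the two, which is exactly what composition of linear maps means.

```python
import numpy as np

theta = np.pi / 2                                  # 90-degree rotation
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = np.array([[1.0, 1.5],                          # horizontal shear
              [0.0, 1.0]])

v = np.array([1.0, 0.0])
print(R @ v)            # rotate first: approximately [0, 1]
print(S @ (R @ v))      # then shear the rotated vector
print((S @ R) @ v)      # same result: the composed map is the product S @ R
```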

Chapter 6. Determinants and Volume

  1. Areas, volumes, and signed scale factors (geometric entry point)
  2. Determinant via linear rules (multilinearity, sign, normalization)
  3. Determinant and row operations (how each move changes det)
  4. Triangular matrices and product of diagonals (fast wins)
  5. det(AB) = det(A)det(B) (multiplicative magic; sketch below)
  6. Invertibility and zero determinant (flat vs. full volume)
  7. Cofactor expansion (Laplace’s method)
  8. Permutations and sign (the combinatorial core)
  9. Cramer’s rule (solving with determinants, and when not to use it)
  10. Computing determinants in practice (use LU, mind stability)
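
A one-line numerical check of item 5, under the same NumPy assumption: the determinant of a product is the product of the determinants.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))      # two random 4x4 matrices
B = rng.standard_normal((4, 4))

# multiplicativity: det(AB) = det(A) * det(B)
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))
```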

Chapter 7. Eigenvalues, Eigenvectors, and Dynamics

  1. Eigenvalues and eigenvectors (directions that stay put)
  2. Characteristic polynomial (where eigenvalues come from)
  3. Algebraic vs. geometric multiplicity (how many and how independent)
  4. Diagonalization (when a matrix becomes simple)
  5. Powers of a matrix (long-term behavior via eigenvalues)
  6. Real vs. complex spectra (rotations and oscillations)
  7. Defective matrices and a peek at Jordan form (when diagonalization fails)
  8. Stability and spectral radius (grow, decay, or oscillate)
  9. Markov chains and steady states (probabilities as linear algebra; sketch below)
  10. Linear differential systems (solutions via eigen-decomposition)
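
A sketch of item 9 in NumPy (assumed tooling): the steady state of a column-stochastic transition matrix is its eigenvector for eigenvalue 1, and repeated transitions converge to it.

```python
import numpy as np

# column-stochastic transition matrix: each column sums to 1 (made-up chain)
P = np.array([[0.9, 0.2],
              [0.1, 0.8]])

vals, vecs = np.linalg.eig(P)
k = np.argmax(np.isclose(vals, 1.0))      # eigenvalue 1 gives the steady state
steady = np.real(vecs[:, k])
steady = steady / steady.sum()            # normalize to a probability vector

x = np.array([1.0, 0.0])
for _ in range(50):                       # repeated transitions converge to it
    x = P @ x
print(steady, x)                          # both approximately [2/3, 1/3]
```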

Chapter 8. Orthogonality, Least Squares, and QR

  1. Inner products beyond dot product (custom notions of angle)
  2. Orthogonality and orthonormal bases (perpendicular power)
  3. Gram–Schmidt process (constructing orthonormal bases)
  4. Orthogonal projections onto subspaces (closest point principle)
  5. Least-squares problems (fit when exact solve is impossible)
  6. Normal equations and geometry of residuals (why it works)
  7. QR factorization (stable least squares via orthogonality; sketch below)
  8. Orthogonal matrices (length-preserving transforms)
  9. Fourier viewpoint (expanding in orthogonal waves)
  10. Polynomial and multifeature least squares (fitting more flexibly)
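
A sketch of item 7, assuming NumPy: fit a line by least squares via a reduced QR factorization and compare with np.linalg.lstsq.

```python
import numpy as np

# overdetermined system: fit y ≈ c0 + c1 * t at 5 made-up sample points
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
A = np.column_stack([np.ones_like(t), t])    # design matrix

Q, R = np.linalg.qr(A)                       # reduced QR factorization
coef = np.linalg.solve(R, Q.T @ y)           # solve R c = Qᵀ y

print(coef)
print(np.linalg.lstsq(A, y, rcond=None)[0])  # should match the QR solution
```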

Chapter 9. SVD, PCA, and Conditioning

  1. Singular values and SVD (universal factorization)
  2. Geometry of SVD (rotations + stretching)
  3. Relation to eigen-decompositions (AᵀA and AAᵀ)
  4. Low-rank approximation (best small models; sketch below)
  5. Principal component analysis (variance and directions)
  6. Pseudoinverse (Moore–Penrose) and solving ill-posed systems
  7. Conditioning and sensitivity (how errors amplify)
  8. Matrix norms and singular values (measuring size properly)
  9. Regularization (ridge/Tikhonov to tame instability)
  10. Rank-revealing QR and practical diagnostics (what rank really is)
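
A sketch of item 4 in NumPy (assumed tooling): truncate the SVD to rank k and observe the Eckart–Young fact that the spectral-norm error of the best rank-k approximation is the first discarded singular value.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))              # made-up 6x4 matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2                                        # keep the top-2 singular values
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # best rank-2 approximation

# Eckart–Young: the spectral-norm error equals the first dropped singular value
err = np.linalg.norm(A - A_k, 2)
print(np.isclose(err, s[k]))
```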

Chapter 10. Applications and Computation

  1. 2D/3D geometry pipelines (cameras, rotations, and transforms)
  2. Computer graphics and robotics (homogeneous tricks in action)
  3. Graphs, adjacency, and Laplacians (networks via matrices)
  4. Data preprocessing as linear ops (centering, whitening, scaling)
  5. Linear regression and classification (from model to matrix)
  6. PCA in practice (dimensionality reduction workflow)
  7. Recommender systems and low-rank models (fill the missing entries)
  8. PageRank and random walks (ranking with eigenvectors; sketch below)
  9. Numerical linear algebra essentials (floating point, BLAS/LAPACK)
  10. Capstone problem sets and next steps (a roadmap to mastery)
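
A sketch of item 8, assuming NumPy and a tiny made-up four-page web: PageRank as power iteration on a damped column-stochastic matrix, converging toward the dominant eigenvector.

```python
import numpy as np

# column-stochastic link matrix of a 4-page web (column j = links out of page j)
M = np.array([[0.0, 0.5, 0.0, 0.0],
              [1/3, 0.0, 0.0, 0.5],
              [1/3, 0.0, 0.0, 0.5],
              [1/3, 0.5, 1.0, 0.0]])

d = 0.85                                   # damping factor
n = M.shape[0]
G = d * M + (1 - d) / n * np.ones((n, n))  # damped "Google" matrix

r = np.full(n, 1.0 / n)
for _ in range(100):                       # power iteration toward the
    r = G @ r                              # eigenvector for eigenvalue 1
print(r)                                   # PageRank scores, summing to 1
```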