Big Ideas in Applied Math: Smoothness and Degree of Approximation

At its core, computational mathematics is about representing the infinite by the finite. Even attempting to store a single arbitrary real number requires an infinite amount of memory to store its infinitely many potentially nonrepeating digits.1 It is a well-known result that a real number has a repeating decimal expansion if, and only if, it is rational. When dealing with problems in calculus, however, the situation is even more severe, as we want to compute with functions on the real line. In effect, a function is an uncountably long list of real numbers, one for each value of the function’s domain. We certainly cannot store an infinite list of numbers with infinitely many digits on a finite computer!

Thus, our only hope to compute with functions is to devise some sort of finite representation for them. The most natural representation is to describe a function by a list of real numbers (and then store approximations of these numbers on our computer). For instance, we may approximate the function by a polynomial a_1 + a_2 x + \cdots + a_{M} x^{M-1} of degree M-1 and then store the M coefficients a_1,\ldots,a_{M}. From this list of numbers, we can then reconstitute an approximate version of the function. In our polynomial example, the approximate version of the function is just a_1 + a_2 x + \cdots + a_{M} x^{M-1}. Naturally, there is a tradeoff between the length of our list of numbers and how accurate our approximation is. This post is about that tradeoff.
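To make this concrete, here is a minimal Python sketch of the polynomial representation just described: we compress a function into M coefficients by a least-squares fit on a grid and then reconstitute an approximation from those coefficients alone. The target function, grid, and value of M are arbitrary choices for illustration.

```python
# Minimal sketch: represent a function by M polynomial coefficients
# (the finite representation) and reconstitute an approximation from them.
# The target function, grid, and M below are illustrative choices.
import numpy as np

f = np.exp                        # function to approximate on [-1, 1]
M = 8                             # number of stored parameters a_1, ..., a_M
x = np.linspace(-1.0, 1.0, 200)

coeffs = np.polynomial.polynomial.polyfit(x, f(x), deg=M - 1)  # the finite representation
f_approx = np.polynomial.polynomial.polyval(x, coeffs)         # reconstituted function

print("max error with", M, "parameters:", np.max(np.abs(f(x) - f_approx)))
```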

The big picture idea is that the “smoother” a function is, the easier it will be to approximate it with a small number of parameters. Informally, we have the following rule of thumb: if a function f on a one-dimensional domain possesses s nice derivatives, then f can be approximated by an M-parameter approximation with error decaying at least as fast as 1/M^s. This basic result appears in many variants in approximation theory with different precise definitions of the terms “s nice derivatives” and “M-parameter approximation”. Let us work out the details of the approximation problem in one concrete setting using Fourier series.

Approximation by Fourier Series

Consider a complex-valued and 2\pi-periodic function f defined on the real line. (Note that, by a standard transformation, there are close connections between approximation of 2\pi-periodic functions on the whole real line and functions defined on a compact interval [a,b].2 Specifically, suppose that h is a function defined on [a,b]. Then define a function g on [-1,1] by g(x) = h(((b-a)x + (b+a))/2). This linear rescaling of the domain is simple and easy to understand. Now define a 2\pi-periodic function f on \mathbb{R} by f(\theta) = g(\cos(\theta)). There are very close connections between approximation of f and g. For example, Fourier cosine expansions of f are equivalent to Chebyshev polynomial expansions of g. For more on this subject, Trefethen’s Approximation Theory and Approximation Practice is an excellent reference.) If f is square-integrable, then f possesses a Fourier expansion

(1)   \begin{equation*} f(\theta) = \sum_{k=-\infty}^\infty \hat{f}_k e^{{\rm i} k\theta}, \quad \hat{f}_k = \frac{1}{2\pi} \int_{-\pi}^\pi f(\theta) e^{-{\rm i}k\theta} \, d\theta. \end{equation*}

The infinite series converges in the L^2 sense, meaning that \lim_{m\to\infty} \|f - f_m\|_{L^2} = 0, where f_m is the truncated Fourier series f_m(\theta) = \sum_{k=-m}^m \hat{f}_k e^{{\rm i} k\theta} and \|\cdot\|_{L^2} is the L^2 norm \|f\|_{L^2} = \sqrt{\int_{-\pi}^\pi |f(\theta)|^2 \, d\theta}.3 One may show with considerable analysis that the Fourier series converges in other senses as well, for example almost everywhere. We also have the Plancherel theorem, which states that

(2)   \begin{equation*} \|f\|_{L^2}^2 = \int_{-\pi}^\pi |f(\theta)|^2\, d\theta = 2\pi \sum_{k=-\infty}^{\infty} |\hat{f}_k|^2. \end{equation*}

Note that the convergence of the Fourier series is fundamentally a statement about approximation. The fact that the Fourier series converges means that the truncated Fourier series f_m of 2m+1 terms acts as an arbitrarily close approximation to f (as measured by the L^2 norm). However, the number M=2m+1 of terms we need to store might have to be quite large before \|f - f_m\|_{L^2} becomes small. As we shall soon see, if the function f is “smooth”, \|f - f_m\|_{L^2} will be small for even moderate values of M.
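As an illustration, here is a minimal Python sketch of truncated Fourier approximation: the coefficients \hat{f}_k are computed numerically by the trapezoid rule (which is very accurate for smooth periodic integrands) and the L^2 error of the truncated series is estimated on a fine grid. The test function and truncation levels are arbitrary choices; |\sin\theta| has kinks at its zeros, so its error decays only algebraically.

```python
# Minimal sketch: truncated Fourier series f_m and its L^2 approximation error.
# The test function and truncation orders below are illustrative choices.
import numpy as np

def fourier_coefficients(f, m, n_grid=4096):
    """Approximate hat{f}_k for |k| <= m by the trapezoid rule on an equispaced grid."""
    theta = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    k = np.arange(-m, m + 1)
    # hat{f}_k = (1/2pi) * integral of f(theta) exp(-i k theta) dtheta
    return (f(theta)[None, :] * np.exp(-1j * np.outer(k, theta))).mean(axis=1)

def truncated_series(coeffs, theta):
    m = (len(coeffs) - 1) // 2
    k = np.arange(-m, m + 1)
    return np.exp(1j * np.outer(theta, k)) @ coeffs

f = lambda t: np.abs(np.sin(t))              # 2*pi-periodic, with kinks at its zeros
theta = np.linspace(-np.pi, np.pi, 2001)
dtheta = theta[1] - theta[0]
for m in [4, 16, 64]:
    fm = truncated_series(fourier_coefficients(f, m), theta)
    err = np.sqrt(np.sum(np.abs(f(theta) - fm) ** 2) * dtheta)   # approximate L^2 error
    print(f"m = {m:3d}   L2 error = {err:.2e}")
```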

Smoothness

Often, in analysis, we refer to a function as smooth when it possesses derivatives of all orders. In one dimension, this means that the jth derivative f^{(j)} exists for every integer j \ge 0. In this post, we shall speak of smoothness as a more graded notion: loosely, a function g is smoother than a function f if g possesses more derivatives than f or if the magnitudes of g’s derivatives are smaller. This conception of smoothness accords more with the plain-English meaning of the word: the graph of a very mildly varying function g with a discontinuity in its 33rd derivative looks much smoother to the human eye than the graph of a highly oscillatory and jagged function f that nonetheless possesses derivatives of all orders.4 One might refer to the precise concept of possessing derivatives of all orders as C^\infty smoothness.

For a function defined in terms of a Fourier series, it is natural to compute its derivative by formally differentiating the Fourier series term-by-term:

(3)   \begin{equation*} f'(\theta) = \sum_{k=-\infty}^\infty ({\rm i}k) \hat{f}_k e^{{\rm i} k\theta}. \end{equation*}

This formal Fourier series converges if, and only if, the L^2 norm of this putative derivative f', as computed with the Plancherel theorem, is finite: \|f'\|_{L^2}^2 = 2\pi \sum_{k=-\infty}^\infty |k|^2 |\hat{f}_k|^2 < \infty. This “derivative” f' may not be a derivative of f in the classical sense. For instance, using this definition, the absolute value function g(x) = |x| possesses a derivative g'(x) = +1 for x > 0 and g'(x) = -1 for x < 0. For the derivative of f to exist in the sense of Eq. (3), f need not be differentiable at every point, but the derivative, defined at the points where f is differentiable, must be a square-integrable function. We shall call the derivative f' given by Eq. (3) a weak derivative of f.

If a function f has s square-integrable weak derivatives f^{(j)}(\theta) = \sum_{k=-\infty}^\infty ({\rm i}k)^j \hat{f}_k e^{{\rm i}k\theta} for 1\le j \le s, then we say that f belongs to the Sobolev space H^s. The Sobolev space H^s is equipped with the norm

(4)   \begin{equation*} \|f\|_{H^s} = \sqrt{\sum_{j=0}^s \|f^{(j)}\|_{L^2}^2}. \end{equation*}

The Sobolev norm \|\cdot\|_{H^s} is a quantitative measure of qualitative smoothness. The smaller the Sobolev norm \|f\|_{H^s}, the smaller the derivatives of f are. As we will see, we can use this to bound the approximation error.
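By the Plancherel theorem, \|f^{(j)}\|_{L^2}^2 = 2\pi \sum_k |k|^{2j}|\hat{f}_k|^2, so the Sobolev norm of Eq. (4) can be evaluated directly from the Fourier coefficients. Here is a minimal Python sketch of this computation; it assumes the coefficients outside the stored window are negligible, and the example coefficient sequence is an arbitrary illustration.

```python
# Minimal sketch: evaluate the Sobolev norm ||f||_{H^s} of Eq. (4) from a
# (finite window of) Fourier coefficients via the Plancherel theorem.
import numpy as np

def sobolev_norm(f_hat, k, s):
    """f_hat[i] is the Fourier coefficient for wavenumber k[i]; coefficients
    outside the window are assumed negligible."""
    weights = sum(np.abs(k) ** (2 * j) for j in range(s + 1))   # sum_j |k|^{2j}
    return np.sqrt(2 * np.pi * np.sum(weights * np.abs(f_hat) ** 2))

# Illustrative coefficients hat{f}_k = 1/(1 + k^2): they decay fast enough for H^1
k = np.arange(-500, 501)
f_hat = 1.0 / (1.0 + k.astype(float) ** 2)
print("H^1 norm:", sobolev_norm(f_hat, k, s=1))
```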

Smoothness and Degree of Approximation

If f is approximated by f_m, the error of approximation is given by

(5)   \begin{equation*} f(\theta) - f_m(\theta) = \sum_{|k|>m} \hat{f}_k e^{{\rm i}k\theta}, \quad \|f - f_m\|_{L^2} = \sqrt{ 2\pi \sum_{|k|>m} |\hat{f}_k|^2 }. \end{equation*}

Suppose that f \in H^s (that is, f has s square integrable derivatives). Then we may deduce the inequality

(6)   \begin{equation*} \begin{split} \|f - f_m\|_{L^2} &= \sqrt{ 2\pi \sum_{|k|>m} |\hat{f}_k|^2 } \\ &= \sqrt{ 2\pi \sum_{|k|>m} |k|^{-2s}|k|^{2s}|\hat{f}_k|^2 }\\ &\le m^{-s} \sqrt{ 2\pi \sum_{|k|>m} |k|^{2s}|\hat{f}_k|^2 } \\ &\le m^{-s}\|f\|_{H^s}. \end{split} \end{equation*}

The first “\le” follows from the fact that |k|^{-2s} < m^{-2s} for |k|>m; the second follows because 2\pi \sum_{|k|>m} |k|^{2s}|\hat{f}_k|^2 \le \|f^{(s)}\|_{L^2}^2 \le \|f\|_{H^s}^2. This very important result is a precise instantiation of our rule of thumb from earlier: if f possesses s nice (i.e. square-integrable) derivatives, then the (L^2) approximation error for an M = (2m+1)-term Fourier approximation decays at least as fast as 1/M^s.
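The bound of Eq. (6) can also be checked numerically. The following Python sketch uses a synthetic coefficient sequence \hat{f}_k = |k|^{-(s+1)} (an arbitrary choice that lies in H^s), computes the L^2 truncation error from the tail sum, and compares it with the bound m^{-s}\|f\|_{H^s}.

```python
# Minimal sketch: check the error bound of Eq. (6) on synthetic Fourier
# coefficients hat{f}_k = |k|^{-(s+1)} for k != 0 (an illustrative choice in H^s).
import numpy as np

s = 2
k = np.arange(1, 100_001).astype(float)      # positive wavenumbers; extended symmetrically
f_hat = k ** (-(s + 1))

# ||f||_{H^s}^2 = sum_j 2*pi*sum_k |k|^{2j} |hat{f}_k|^2  (factor 2 for +/- k)
H_s_norm = np.sqrt(sum(2 * np.pi * 2 * np.sum(k ** (2 * j) * f_hat ** 2)
                       for j in range(s + 1)))

for m in [10, 100, 1000]:
    error = np.sqrt(2 * np.pi * 2 * np.sum(f_hat[k > m] ** 2))   # tail sum of Eq. (5)
    print(f"m = {m:5d}   error = {error:.3e}   bound = {H_s_norm / m**s:.3e}")
```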

Higher Dimensions

The results for one dimension can easily be extended to functions f defined on d-dimensional space which are 2\pi-periodic in every argument.5 E.g., f(\theta_1,\ldots, \theta_j+2\pi,\ldots,\theta_d) = f(\theta_1,\ldots,\theta_j,\ldots,\theta_d) for every 1\le j \le d. For physics-based scientific simulation, we are often interested in d=2 or d=3, but for more modern problems in data science, we might be interested in very large dimensions d.

Letting \mathbb{Z}^d denote the set of all d-tuples of integers, one can show that f has the d-dimensional Fourier series

(7)   \begin{equation*} f(\theta) = \sum_{k \in \mathbb{Z}^d} \hat{f}_k e^{{\rm i}k\cdot \theta}, \quad \hat{f}_k = \frac{1}{(2\pi)^d} \int_{[-\pi,\pi]^d} f(\theta) e^{-{\rm i}k\cdot \theta} \, d\theta. \end{equation*}

Here, k\cdot \theta denotes the Euclidean inner product of the d-dimensional vectors k and \theta, k\cdot \theta = k_1\theta_1 + \cdots + k_d\theta_d. A natural generalization of the Plancherel theorem holds as well. Let \max |k| denote the maximum of |k_1|,\ldots,|k_d|. Then, we have the truncated Fourier series f_m(\theta) = \sum_{\max |k| \le m} \hat{f}_k e^{{\rm i}k\cdot \theta}. Using the same calculations as in the previous section, we deduce a very similar approximation property

(8)   \begin{equation*} \|f - f_m\|_{L^2} = \sqrt{ (2\pi)^d \sum_{\max|k|>m} |\hat{f}_k|^2 }  \le m^{-s}\|f\|_{H^s}. \end{equation*}

There’s a pretty big catch though. The approximate function f_m possesses M = (2m+1)^d terms! In order to include each of the first m Fourier modes in each of d dimensions, the number of terms M in our Fourier approximation must grow exponentially in d! In particular, the approximation error satisfies a bound

(9)   \begin{equation*} \|f - f_m\|_{L^2}  \le \left(\frac{M^{1/d}-1}{2}\right)^{-s}\|f\|_{H^s} \le C(s,d)M^{-s/d} \|f\|_{H^s}, \end{equation*}

where C(s,d) \ge 0 is a constant depending only on s and d.

This is the so-called curse of dimensionality: to approximate a function in d dimensions, we need exponentially many terms in the dimension d. In higher dimensions, our rule of thumb needs to be modified: if a function f on a d-dimensional domain possesses s nice derivatives, then f can be approximated by an M-parameter approximation with error decaying at least as fast as 1/M^{s/d}.

The Theory of Nonlinear Widths: The Speed Limit of Approximation Theory

So far, we have shown that if one approximates a function f on a d-dimensional space by truncating its Fourier series to M terms, the approximation error decays at least as fast as 1/M^{s/d}. Can we do better than this, particularly in high dimensions, where the error decay can be very slow if s \ll d?

One must be careful about how one phrases this question. Suppose I ask “what is the best way of approximating a function f?” A subversive answer is that we may approximate f by a single-parameter approximation of the form \hat{f} = af with a=1! Consequently, for every function, there is a one-parameter approximation procedure which approximates that particular function perfectly. The problem with this one-parameter approximation is obvious: it is terrible at approximating most functions other than f. Thus, the question “what is the best way of approximating a particular function?” is ill-posed. We must instead ask the question “what is the best way of approximating an entire class of functions?” For us, the class of functions shall be those which are sufficiently smooth: specifically, we shall consider the class of functions whose Sobolev norm satisfies a bound \|f\|_{H^s} \le B. Call this class W.

As outlined at the beginning, an approximation procedure usually begins by taking the function f and writing down a list of M numbers a_1,\ldots,a_M. Then, from this list of numbers we reconstruct a function \hat{f} which serves as an approximation to f. Formally, this can be viewed as a mathematical function \Phi which takes f \in W to a tuple (a_1,\ldots,a_M) \in \mathbb{C}^M followed by a function \Lambda which takes (a_1,\ldots,a_M) and outputs a continuous 2\pi-periodic function \hat{f} \in C_{2\pi}(\mathbb{R}).

(10)   \begin{equation*} f \stackrel{\Phi}{\longmapsto}(a_1,\ldots,a_M)\stackrel{\Lambda}{\longmapsto} \hat{f}, \quad \Phi : W \to \mathbb{C}^M, \quad \Lambda : \mathbb{C}^M \to C_{2\pi}(\mathbb{R}). \end{equation*}

Remarkably, there is a mathematical theory which gives sharp bounds on the expressive power of any approximation procedure of this type. This theory of nonlinear widths serves as a sort of speed limit in approximation theory: no method of approximation can be any better than the theory of nonlinear widths says it can. The statement is somewhat technical, and we advise the reader to look up a precise statement of the result before using it in any serious work. Roughly, the theory of nonlinear widths states that for any continuous approximation procedure6 That is, an approximation procedure for which \Phi : W \to \mathbb{C}^M is a continuous function, where W is equipped with the topology defined by the norm \|\cdot\|_{H^s}. that is able to approximate every function in W with L^2 approximation error no more than \epsilon, the number of parameters M must be at least some constant multiple of \epsilon^{-d/s}. Equivalently, the worst-case approximation error for a function in W with an M-parameter continuous approximation is at least some multiple of 1/M^{s/d}.

In particular, the theory of nonlinear widths shows that the approximation properties of truncated Fourier series are as good as those of any method for approximating functions in the class W, as they exactly meet the “speed limit” given by the theory of nonlinear widths. Thus, approximating by truncated Fourier series is, in a certain very precise sense, as good as any other technique you can think of for approximating arbitrary functions from W: splines, rational functions, wavelets, and artificial neural networks must all obey the same speed limit. Make no mistake, these other methods have definite advantages, but degree of approximation for the class W is not one of them. Also, note that the theory of nonlinear widths shows that the curse of dimensionality is not merely an artifact of Fourier series; it affects all high-dimensional approximation techniques.

For the interested reader, see the following footnotes for two important ways one may approximate better than the rate suggested by the theory of nonlinear widths.7 The theory of nonlinear widths holds for continuous methods of approximation. This means that discontinuous approximation procedures may circumvent its bounds. Indeed, such discontinuous approximation procedures exist using probabilistic techniques. These methods are of questionable use in practice since discontinuous approximation procedures, by their nature, are extremely sensitive to the perturbations which are ubiquitous when performing computations on computers.8 The theory of nonlinear widths holds for means of approximating the entire class W. More efficient methods may exist for meaningful subclasses of W. For instance, Mhaskar and Poggio show that functions f satisfying a compositional property can be effectively approximated by multilayer artificial neural networks.

Upshot: The smoother a function is, the better it can be approximated. Specifically, an M-parameter approximation to a function on a d-dimensional domain possessing s nice derivatives can achieve error decaying at a rate of at least 1/M^{s/d}. In the case of 2\pi-periodic functions, such an approximation can easily be obtained by truncating the function’s Fourier series. This error decay rate is the best one can hope for when approximating all functions of this type.

Big Ideas in Applied Math: The Schur Complement

Given the diversity of applications of mathematics, the field of applied mathematics lacks a universally accepted set of core concepts which most experts would agree all self-proclaimed applied mathematicians should know. Further, much mathematical writing is very carefully written, and many important ideas can be obscured by precisely worded theorems or buried several steps into a long proof.

In this series of blog posts, I hope to share my personal experience with some techniques in applied mathematics which I’ve seen pop up many times. My goal is to isolate a single particularly interesting idea and provide a simple explanation of how it works and why it can be useful. In doing this, I hope to collect my own thoughts on these topics and write an introduction to these ideas of the sort I wish I had when I was first learning this material.

Given my fondness for linear algebra, I felt an appropriate first topic for this series would be the Schur Complement. Given matrices A, B, C, and D of sizes n\times n, n\times m, m\times n, and m\times m with A invertible, the Schur complement is defined to be the matrix D - CA^{-1}B.

The Schur complement naturally arises in block Gaussian elimination. In vanilla Gaussian elimination, one begins by using the (1,1)-entry of a matrix to “zero out” its column. Block Gaussian elimination extends this idea by using the n\times n submatrix occupying the top-left portion of a matrix to “zero out” all of the first n columns together. Formally, given the matrix \begin{bmatrix} A & B \\ C & D \end{bmatrix}, one can check by carrying out the multiplication that the following factorization holds:

(1)   \begin{equation*} \begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} I_n & 0_{n\times m} \\ CA^{-1} & I_m\end{bmatrix} \begin{bmatrix} A & B \\ 0_{m\times n} & D - CA^{-1}B \end{bmatrix}. \end{equation*}

Here, we let I_j denote the j\times j identity matrix and 0_{j\times k} the j\times k zero matrix. We use the notation of block (or partitioned) matrices, where, in this case, an (m+n)\times (m+n) matrix is written out as a 2\times 2 “block” matrix whose entries are themselves matrices, sized so that all matrices occurring in one block row (or column) have the same number of rows (or columns). Two block matrices which are blocked in a compatible way can be multiplied just like two regular matrices, taking care to respect the noncommutativity of matrix multiplication.
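The factorization of Eq. (1) is easy to verify numerically. Here is a minimal NumPy sketch on random blocks (the block sizes and random seed are arbitrary choices).

```python
# Minimal sketch: verify the block factorization of Eq. (1) on random blocks.
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, m))
C, D = rng.standard_normal((m, n)), rng.standard_normal((m, m))

S = D - C @ np.linalg.solve(A, B)            # Schur complement D - C A^{-1} B

L = np.block([[np.eye(n), np.zeros((n, m))],
              [C @ np.linalg.inv(A), np.eye(m)]])
U = np.block([[A, B],
              [np.zeros((m, n)), S]])

print(np.allclose(np.block([[A, B], [C, D]]), L @ U))   # True: Eq. (1) holds
```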

The Schur complement appears naturally in the expression for the inverse of \begin{bmatrix} A & B \\ C & D\end{bmatrix}. One can verify that for a block upper triangular matrix M = \begin{bmatrix} M_{11} & M_{12} \\ 0_{m\times n} & M_{22}\end{bmatrix}, we have the inverse formula

(2)   \begin{equation*} M^{-1} = \begin{bmatrix} M_{11} & M_{12} \\ 0_{m\times n} & M_{22}\end{bmatrix}^{-1} = \begin{bmatrix} M_{11}^{-1} & -M_{11}^{-1} M_{12}M_{22}^{-1} \\ 0_{m\times n} & M_{22}^{-1}\end{bmatrix}. \end{equation*}

(This can be verified by carrying out the block multiplication MM^{-1} for the proposed formula for M^{-1} and verifying that one obtains the identity matrix.) A similar formula holds for block lower triangular matrices. From here, we can deduce a formula for the inverse of \begin{bmatrix} A & B \\ C & D\end{bmatrix}. Let S = D - CA^{-1}B be the Schur complement. Then

(3)   \begin{equation*} \begin{split} \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} &= \begin{bmatrix} A & B \\ 0_{m\times n} & S \end{bmatrix}^{-1} \begin{bmatrix} I_n & 0_{n\times m} \\ CA^{-1} & I_m\end{bmatrix}^{-1} \\ &= \begin{bmatrix} A^{-1} & -A^{-1}BS^{-1} \\ 0_{m\times n} & S^{-1} \end{bmatrix} \begin{bmatrix} I_n & 0_{n\times m} \\ -CA^{-1} & I_m\end{bmatrix} \\ &= \begin{bmatrix} A^{-1} + A^{-1}BS^{-1}CA^{-1} & -A^{-1}BS^{-1} \\ -S^{-1}CA^{-1} & S^{-1} \end{bmatrix}. \end{split} \end{equation*}

This remarkable formula gives the inverse of \begin{bmatrix} A & B \\ C & D\end{bmatrix} in terms of A^{-1}, S^{-1}, B, and C. In particular, the (2,2)-block entry of \begin{bmatrix} A & B \\ C & D\end{bmatrix}^{-1} is simply the inverse of the Schur complement.
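Again, this is easy to check numerically: the following sketch verifies that the (2,2) block of the inverse of a random block matrix equals the inverse of its Schur complement (block sizes and seed are arbitrary).

```python
# Minimal sketch: the (2,2) block of [[A, B], [C, D]]^{-1} equals S^{-1},
# where S = D - C A^{-1} B, as in Eq. (3).
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, m))
C, D = rng.standard_normal((m, n)), rng.standard_normal((m, m))

S = D - C @ np.linalg.solve(A, B)
Minv = np.linalg.inv(np.block([[A, B], [C, D]]))
print(np.allclose(Minv[n:, n:], np.linalg.inv(S)))      # True
```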

Here, we have seen that if one starts with a large matrix and performs block Gaussian elimination, one ends up with a smaller matrix called the Schur complement whose inverse appears in the inverse of the original matrix. Very often, however, it benefits us to run this trick in reverse: we begin with a small matrix, which we recognize to be the Schur complement of a larger matrix. In general, dealing with a larger matrix is more difficult than a smaller one, but very often this larger matrix will have special properties which allow us to compute the inverse of the original matrix more efficiently.

One beautiful application of this idea is the Sherman-Morrison-Woodbury matrix identity. Suppose we want to find the inverse of the matrix A - BD^{-1}C. Notice that this is the Schur complement of the matrix \begin{bmatrix} D & C \\ B & A \end{bmatrix}, which is the same as \begin{bmatrix} A & B \\ C & D \end{bmatrix} after reordering.1 Specifically, switch the first n rows with the last m rows and do the same with the columns. This defines a permutation matrix P = \begin{bmatrix} 0_{m\times n} & I_m \\ I_n & 0_{n\times m} \end{bmatrix} such that \begin{bmatrix} D & C \\ B & A \end{bmatrix} = P\begin{bmatrix} A & B \\ C & D \end{bmatrix}P^\top. Alternately, and perhaps more cleanly, one may define two Schur complements of the block matrix M = \begin{bmatrix} A & B \\ C & D \end{bmatrix}: one by “eliminating A“, M/A = D - CA^{-1}B, and the other by “eliminating D“, M/D = A - BD^{-1}C. Following the calculation in Eq. (3), just like the inverse of the Schur complement D - CA^{-1}B appears in the (2,2) entry of \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1}, the inverse of the alternate Schur complement A - BD^{-1}C can be shown to appear in the (1,1) entry of \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1}. Thus, comparing with Eq. (3), we deduce the Sherman-Morrison-Woodbury matrix identity:

(4)   \begin{equation*} (A - BD^{-1}C)^{-1} = A^{-1} + A^{-1}B(D-CA^{-1}B)^{-1}CA^{-1}. \end{equation*}

To see how this formula can be useful in practice, suppose that we have a fast way of solving the system of linear equations Ax = b. Perhaps A is a simple matrix like a diagonal matrix or we have already pre-computed an LU factorization for A. Consider the problem of solving the rank-one updated problem (A+uv^\top)x = b. Using the Sherman-Morrison-Woodbury identity with B=u, D=-1, and C = v^\top, we have that

(5)   \begin{equation*} x = (A+uv^\top)^{-1}b = A^{-1}b + A^{-1}u (-1-v^\top A^{-1}u)^{-1}v^\top A^{-1}b. \end{equation*}

Careful observation of this formula shows how we can compute x (that is, solve (A+uv^\top)x = b) by solving only two linear systems: Ax_1 = b for x_1 = A^{-1}b and Ax_2 = u for x_2 = A^{-1}u.2 Further savings are possible if one has already computed x_1, which may be the case in many applications.
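Here is a minimal Python sketch of this procedure, using SciPy’s LU factorization to stand in for the “fast solver” for A; the matrices and right-hand side are arbitrary test data.

```python
# Minimal sketch: solve (A + u v^T) x = b via Eq. (5) using only two solves
# with A (here via a precomputed LU factorization). Test data is arbitrary.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # a well-conditioned test matrix
u, v, b = rng.standard_normal((3, n))

lu = lu_factor(A)                                  # factor A once
x1 = lu_solve(lu, b)                               # x1 = A^{-1} b
x2 = lu_solve(lu, u)                               # x2 = A^{-1} u
x = x1 + x2 * ((v @ x1) / (-1.0 - v @ x2))         # Eq. (5)

print(np.allclose((A + np.outer(u, v)) @ x, b))    # True
```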

Here’s another variant of the same idea. Suppose we want to solve the linear system of equations (D + uv^\top)x = b where D is a diagonal matrix. Then we can immediately write down the lifted system of linear equations

(6)   \begin{equation*} \underbrace{\begin{bmatrix} -1 & v^\top \\ u & D \end{bmatrix}}_{:=M}\begin{bmatrix} y \\ x \end{bmatrix} = \begin{bmatrix} 0 \\ b \end{bmatrix}. \end{equation*}

One can easily see that D+uv^\top is the Schur complement of the matrix M (with respect to the (1,1) block). This system of linear equations is sparse in the sense that most of its entries are zero, and it can be efficiently solved by sparse Gaussian elimination, for which high-quality software exists. Easy generalizations of this idea can be used to effectively solve many “sparse + low-rank” problems.
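Here is a minimal SciPy sketch of this lifted-system approach: the bordered matrix of Eq. (6) is assembled in sparse format and handed to a sparse direct solver. The dimension and data are arbitrary choices for illustration.

```python
# Minimal sketch: solve (D + u v^T) x = b through the sparse lifted system of
# Eq. (6), using SciPy's sparse direct solver. Test data is arbitrary.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(0)
n = 1000
d = 1.0 + rng.random(n)                     # diagonal entries of D
u, v, b = rng.standard_normal((3, n))

# M = [[-1, v^T], [u, D]] assembled as a sparse matrix
M = sp.bmat([[sp.csc_matrix([[-1.0]]), sp.csc_matrix(v[None, :])],
             [sp.csc_matrix(u[:, None]), sp.diags(d)]], format="csc")
yx = spsolve(M, np.concatenate([[0.0], b]))
x = yx[1:]                                  # discard the auxiliary unknown y

print(np.allclose((np.diag(d) + np.outer(u, v)) @ x, b))   # True
```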

Another example of the power of the Schur complement is in least-squares problems. Consider the problem of minimizing \|Ax - b\|, where A is a matrix with full column rank and \|\cdot\| is the Euclidean norm of a vector, \|x\|^2 = x^\top x. It is well known that the solution x satisfies the normal equations A^\top A x = A^\top b. However, if the matrix A is even moderately ill-conditioned, the matrix A^\top A will be much more ill-conditioned (its condition number will be squared), leading to a loss of accuracy. It is for this reason that it is preferable to solve the least-squares problem with a QR factorization. However, if a QR factorization isn’t available, we can use the Schur complement trick instead. Notice that A^\top A is the Schur complement of the matrix \begin{bmatrix} -I_{m} & A \\ A^\top & 0_{n\times n} \end{bmatrix}. Thus, we can solve the normal equations by instead solving the (potentially, see footnote) much better-conditioned system3 More precisely, one should scale the identity in the (1,1) block of this system to be on the order of the size of the entries in A. The conditioning is sensitive to the scaling. If one selects a scale s = 1/\sqrt{2}\cdot \sigma_{\rm min}(A) proportional to the smallest singular value \sigma_{\rm min}(A) of A and constructs M = \begin{bmatrix} -sI_m & A \\ A^\top & 0_{n\times n}\end{bmatrix}, then one can show that the two-norm condition number of M is no more than twice that of A. If one picks s to be roughly equal to the largest singular value \sigma_{\rm max}(A), then the two-norm condition number of M is roughly the square of the condition number of A. The accuracy of this approach may be less sensitive to the scaling parameter than this condition number analysis suggests.

(7)   \begin{equation*} \begin{bmatrix} -I_{m} & A \\ A^\top & 0_{n\times n} \end{bmatrix} \begin{bmatrix} r \\ x \end{bmatrix} = \begin{bmatrix} b \\ 0_n \end{bmatrix}. \end{equation*}

In addition to (often) being much better-conditioned, this system is also highly interpretable. Multiplying out the first block row gives the equation -r + Ax = b, which simplifies to r = Ax - b. The unknown r is nothing but the least-squares residual. The second block row gives A^\top r = 0_n, which encodes the condition that the residual is orthogonal to the range of the matrix A. Thus, by lifting the normal equations to a larger system of equations by means of the Schur complement trick, one derives an interpretable way of solving the least-squares problem by solving a linear system of equations, no QR factorization or ill-conditioned normal equations needed.
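The following Python sketch carries out this approach: it assembles the augmented system of Eq. (7), solves it with a single square solve, and checks the result against a standard least-squares solver. For simplicity the identity block is left unscaled (see the footnote above on scaling); the problem data are arbitrary.

```python
# Minimal sketch: solve a least-squares problem via the augmented system of
# Eq. (7) and compare with a standard solver. The identity block is unscaled
# here for simplicity; test data is arbitrary.
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

aug = np.block([[-np.eye(m), A],
                [A.T, np.zeros((n, n))]])
rx = np.linalg.solve(aug, np.concatenate([b, np.zeros(n)]))
r, x = rx[:m], rx[m:]                        # residual r = Ax - b and solution x

x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.allclose(x, x_ref), np.allclose(r, A @ x - b))   # True True
```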

The Schur complement trick continues to have use in areas of more contemporary interest. For example, the Schur complement trick plays a central role in the theory of sequentially semiseparable matrices which is a precursor to many recent developments in rank-structured linear solvers. I have used the Schur complement trick myself several times in my work on graph-induced rank-structures.

Upshot: The Schur complement appears naturally when one does (block) Gaussian elimination on a matrix. One can also run this process in reverse: if one recognizes a matrix expression (involving a product of matrices potentially added to another matrix) as being a Schur complement of a larger matrix, one can often get considerable dividends by writing this larger matrix down. Examples include a proof of the Sherman-Morrison-Woodbury matrix identity (see Eqs. (3-4)), techniques for solving a low-rank update of a linear system of equations (see Eqs. (5-6)), and a stable way of solving least-squares problems without the need to use QR factorization (see Eq. (7)).

Additional resource: My classmate Chris Yeh has a great introduction to Schur complements, focusing more on positive semidefinite and rank-deficient matrices.


Edits: This blog post was edited to clarify the conditioning of the augmented linear system Eq. (7) and to include a reference to Chris’ post on Schur complements.

Book Review: Matrix Theory by Fuzhen Zhang

Linear algebra and matrix theory is a difficult subject to learn, with the landscape of textbooks balkanized into matrix-oriented introductory texts aimed at first- and second-year engineering and science students, vector space-oriented intermediate texts geared towards one of the first proof-based mathematics courses for a mathematics student, and more advanced texts like Bhatia’s Matrix Analysis, which are demanding and challenging mathematical journeys intended for graduate students and researchers. For the student interested in numerical analysis and scientific computation, there can be a huge gap from, say, Gilbert Strang’s Introduction to Linear Algebra or Sheldon Axler’s Linear Algebra Done Right to Rajendra Bhatia’s Matrix Analysis.

Matrix Theory: Basic Results and Techniques by Fuzhen Zhang beautifully fills this gap.

I discovered the book by chance and was pleasantly surprised to find a book packed with useful insights. The book’s subtitle, basic results and techniques, is appropriate, as the book places a large emphasis on developing the reader’s “matrix kung-fu”, as the author describes it. In a particularly striking example of this early in the book, Zhang furnishes four proofs that AB and BA have the same nonzero eigenvalues. This is an act of pedagogical brilliance that more textbooks would be wise to replicate. By presenting multiple proofs, the author shows the reader that there are often many paths to the same result. At the same time, Zhang shows the reader the benefit of having a wide toolbox by observing that one of the four methods is unable to give any information about the multiplicities of the eigenvalues, while the others can. Zhang’s focus on techniques is careful and illuminating, demonstrating many examples where a proof technique is useful as well as examples where the technique falls short of solving a problem. Rather than merely teaching results, Matrix Theory teaches its reader tools with which to derive results.

One of the tools the book places emphasis on right from the start is block (or partitioned) matrices. To me, reasoning about matrices and linear operators using 2\times 2 blocks is an absolutely indispensable tool, one which I only learned after having taken three courses on linear algebra. The book appropriately emphasizes block matrices throughout, beginning with the second chapter, which is entirely dedicated to this useful approach.

The book has excellent coverage, stretching from introductory results through canonical forms to Kronecker and Hadamard products to sections on the most important classes of matrices (e.g. nilpotent, tridiagonal, circulant, Vandermonde, Hadamard, doubly stochastic, nonnegative, unitary, contraction, positive (semi)definite, Hermitian, and normal matrices) to majorization and matrix inequalities. The book’s excellent exposition is complemented by a suite of appropriately difficult exercises.

The book is a terrific reference on the tips and tricks of working with matrices and is excellent preparation for a more advanced monograph on matrix theory. I cannot endorse this underappreciated gem of a textbook enough.