Big Ideas in Applied Math: Sparse Matrices

Sparse matrices are an indispensable tool for anyone in computational science. I expect there are a very large number of simulation programs written in scientific research across the country which could be sped up by a factor of ten to a hundred or more just by using sparse matrices! In this post, we’ll give a brief overview of what a sparse matrix is and how we can use sparse matrices to solve problems fast.

A matrix is sparse if most of its entries are zero. There is no precise threshold for what “most” means; Kolda suggests that a matrix have at least 90% of its entries be zero for it to be considered sparse. The number of nonzero entries in a sparse matrix A is denoted by \operatorname{nnz}(A). A matrix that is not sparse is said to be dense.

Sparse matrices are truly everywhere. They occur in finite difference, finite element, and finite volume discretizations of partial differential equations. They occur in power systems. They occur in signal processing. They occur in social networks. They occur in intermediate stages in computations with dense rank-structured matrices. They occur in data analysis (along with their higher-order tensor cousins).

Why are sparse matrices so common? In a word, locality. If the ijth entry a_{ij} of a matrix A is nonzero, then this means that row i and column j are related in some way to each other according to the matrix A. In many situations, a “thing” is only related to a handful of other “things”; in heat diffusion, for example, the temperature at a point may only depend on the temperatures of nearby points. Thus, if such a locality assumption holds, every row will only have a small number of nonzero entries and the matrix overall will be sparse.

Storing and Multiplying Sparse Matrices

A sparse matrix can be stored efficiently by only storing its nonzero entries, along with the row and column in which these entries occur. By doing this, a sparse matrix can be stored in \mathcal{O}(\operatorname{nnz}(A)) space rather than the standard \mathcal{O}(N^2) for an N\times N matrix A.1Here, \mathcal{O}(\cdot) refers to big-O notation. For the efficiency of many algorithms, it will be very beneficial to store the entries row-by-row or column-by-column using compressed sparse row and column (CSR and CSC) formats; most established scientific programming software environments support sparse matrices stored in one or both of these formats. For efficiency, it is best to enumerate all of the nonzero entries for the entire sparse matrix and then form the sparse matrix using a compressed format all at once. Adding additional entries one at a time to a sparse matrix in a compressed format requires reshuffling the entire data structure for each new nonzero entry.
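For concreteness, here is a small sketch of this workflow, assuming SciPy for the sparse data structures (any library with CSR/CSC support follows the same pattern): collect all of the nonzero entries first, then compress them in one shot.

```python
# A sketch of the "collect all entries, then compress" workflow, assuming
# SciPy; any library with CSR/CSC support follows the same pattern.
import numpy as np
import scipy.sparse as sp

# Enumerate all nonzero entries as (row, column, value) triplets first...
rows = np.array([0, 0, 1, 2, 2])
cols = np.array([0, 2, 1, 0, 2])
vals = np.array([4.0, 1.0, 3.0, 1.0, 5.0])

# ...then form the compressed matrix all at once.
A = sp.csr_matrix((vals, (rows, cols)), shape=(3, 3))
print(A.nnz)  # number of stored nonzero entries: 5

# Inserting entries one at a time into an already-compressed matrix forces
# repeated reshuffling of the data structure and should be avoided.
```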

There exist straightforward algorithms to multiply a sparse matrix A stored in a compressed format with a vector x to compute the product b = Ax. Initialize the vector b to zero and iterate over the nonzero entries a_{ij} of A, each time adding a_{ij}x_j to b_i. It is easy to see this algorithm runs in \mathcal{O}(\operatorname{nnz}(A)) time.2More precisely, this algorithm takes \mathcal{O}(\operatorname{nnz}(A) + N) time since it requires \mathcal{O}(N) operations to initialize the vector b even if A has no nonzero entries. We shall ignore this subtlety in the remainder of this article and assume that \operatorname{nnz}(A) = \Omega(N), which is true of most sparse matrices occurring in practice. The fact that sparse matrix-vector products can be computed quickly makes so-called Krylov subspace iterative methods popular for solving linear algebraic problems involving sparse matrices, as these techniques only interact with the matrix A by computing matrix-vector products x \mapsto Ax (or matrix-transpose-vector products y \mapsto A^\top y).
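To make this concrete, here is a minimal sketch of the matrix–vector product for a matrix stored in CSR format, written out by hand purely for illustration (in practice one would simply call the sparse library’s built-in product):

```python
import numpy as np

def csr_matvec(indptr, indices, data, x):
    """Compute b = A x for A stored in CSR format.

    Row i's nonzero values are data[indptr[i]:indptr[i+1]], located in the
    columns indices[indptr[i]:indptr[i+1]]. Runs in O(nnz(A) + N) time.
    """
    N = len(indptr) - 1
    b = np.zeros(N)                            # initialize b to zero
    for i in range(N):                         # loop over the rows of A
        for p in range(indptr[i], indptr[i + 1]):
            b[i] += data[p] * x[indices[p]]    # add a_ij * x_j to b_i
    return b
```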

Lest the reader think that every operation with a sparse matrix is necessarily fast, the product of two sparse matrices A and B need not be sparse and the time complexity need not be \mathcal{O}(\operatorname{nnz}(A) + \operatorname{nnz}(B)). A counterexample is

(1)   \begin{equation*} A = B^\top = \begin{bmatrix} a_1 & 0 & \cdots & 0 \\ a_2 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_N & 0 & \cdots & 0 \end{bmatrix} \in \mathbb{R}^{N\times N} \end{equation*}

for a_1,\ldots,a_N \ne 0. We have that \operatorname{nnz}(A) = \operatorname{nnz}(B) = N but

(2)   \begin{equation*} AB = \begin{bmatrix} a_1^2 & a_1 a_2 & \cdots & a_1a_N \\ a_2a_1 & a_2^2 & \cdots & a_2a_N \\ \vdots & \vdots & \ddots & \vdots \\ a_Na_1 & a_Na_2 & \cdots & a_N^2 \end{bmatrix} \end{equation*}

which has \operatorname{nnz}(AB) = N^2 nonzero elements and requires \mathcal{O}(N^2) operations to compute. However, if one does the multiplication in the other order, one has \operatorname{nnz}(BA) = 1 and the multiplication can be done in \mathcal{O}(N) operations. Thus, some sparse matrices can be multiplied fast and others can’t. This phenomenon of different speeds for different sparse matrices is very much also true for solving sparse linear systems of equations.
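Here is a quick numerical illustration of this counterexample, assuming SciPy for the sparse arithmetic:

```python
import numpy as np
import scipy.sparse as sp

N = 1000
a = 1.0 + np.random.rand(N)                   # nonzero values a_1, ..., a_N

# A has a dense first column and zeros elsewhere; B = A^T.
A = sp.csr_matrix((a, (np.arange(N), np.zeros(N, dtype=int))), shape=(N, N))
B = A.T.tocsr()

print(A.nnz, B.nnz)   # N and N
print((A @ B).nnz)    # N^2: the product AB is completely dense
print((B @ A).nnz)    # 1:   the product BA has a single nonzero entry
```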

Solving Sparse Linear Systems

The question of how to solve a system of linear equations Ax = b where A is sparse is a very deep problem with fascinating connections to graph theory. For this article, we shall concern ourselves with so-called sparse direct methods, which solve Ax = b by means of computing a factorization of the sparse matrix A. These methods produce an exact solution to the system Ax = b if all computations are performed exactly and are generally considered more robust than inexact and iterative methods. As we shall see, there are fundamental limits on the speed of certain sparse direct methods, which make iterative methods very appealing for some problems.

Note from the outset that our presentation will focus on illustrating the big ideas rather than presenting the careful step-by-step details needed to actually code a sparse direct method yourself. An excellent reference for the latter is Tim Davis’ wonderful book Direct Methods for Sparse Linear Systems.

Let us begin by reviewing how LU factorization works for general matrices. Suppose that the (1,1) entry of A is nonzero. Then, LU factorization proceeds by subtracting scaled multiples of the first row from the other rows to zero out the first column. If one keeps track of these scalings, then one can write this process as a matrix factorization, which we may demonstrate pictorially as

(3)   \begin{equation*} \underbrace{\begin{bmatrix} * & * & \cdots & * \\ * & * & \cdots & * \\ \vdots & \vdots & \ddots & \vdots \\ * & * & \cdots & *\end{bmatrix}}_{=A} = \begin{bmatrix} 1 & & &\\ * & 1 & & \\ \vdots & & \ddots & \\ * & & & 1\end{bmatrix} \begin{bmatrix} * & * & \cdots & * \\ & * & \cdots & * \\ & \vdots & \ddots & \vdots \\ & * & \cdots & *\end{bmatrix}. \end{equation*}

Here, *’s denote nonzero entries and blanks denote zero entries. We then repeat the process on the (N-1)\times (N-1) submatrix in the bottom right (the so-called Schur complement). Continuing in this way, we eventually end up with a complete LU factorization

(4)   \begin{equation*} \underbrace{\begin{bmatrix} * & * & \cdots & * \\ * & * & \cdots & * \\ \vdots & \vdots & \ddots & \vdots \\ * & * & \cdots & *\end{bmatrix}}_{=A} = \underbrace{\begin{bmatrix} 1 & & &\\ * & 1 & & \\ \vdots & \vdots & \ddots & \\ * &* &\cdots & 1\end{bmatrix}}_{=L} \underbrace{\begin{bmatrix} * & * & \cdots & * \\ & * & \cdots & * \\ &  & \ddots & \vdots \\ & & & *\end{bmatrix}}_{=U}. \end{equation*}

In the case that A is symmetric positive definite (SPD), one has that U = DL^\top for D a diagonal matrix consisting of the diagonal entries of U. This factorization A = LDL^\top is a Cholesky factorization of A.3Often, the Cholesky factorization is written as A = TT^\top for T=LD^{1/2} or A = R^\top R for R = D^{-1/2}U. These different forms all contain the same basic information, so we shall stick with the LDL^\top formulation in this post. For general non-SPD matrices, one needs to incorporate partial pivoting for Gaussian elimination to produce accurate results.4See the excellent monograph Accuracy and Stability of Numerical Algorithms for a comprehensive treatment of this topic.
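To make the elimination process concrete, here is a small NumPy sketch of LU factorization without pivoting, together with a check that U = DL^\top for an SPD matrix; it is purely illustrative and omits the pivoting a production code would use.

```python
import numpy as np

def lu_no_pivot(A):
    """Gaussian elimination without pivoting: returns L (unit lower
    triangular) and U (upper triangular) with A = L @ U. Illustration only;
    production codes pivot for numerical stability."""
    A = A.astype(float)
    N = A.shape[0]
    L = np.eye(N)
    for k in range(N - 1):
        for i in range(k + 1, N):
            L[i, k] = A[i, k] / A[k, k]        # multiplier for row i
            A[i, k:] -= L[i, k] * A[k, k:]     # subtract scaled row k
    return L, np.triu(A)

# For a symmetric positive definite A, we have U = D L^T with D = diag(U),
# so A = L D L^T (the Cholesky factorization in its LDL^T form).
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)                    # an SPD test matrix
L, U = lu_no_pivot(A)
D = np.diag(np.diag(U))
print(np.allclose(A, L @ D @ L.T))             # True
```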

Let’s try the same procedure for a sparse matrix. Consider a sparse matrix with the following sparsity pattern:

(5)   \begin{equation*} A = \begin{bmatrix} * & * & * & & * \\ * & * & & * & \\ * & & * & & \\ & * & & * & \\ * & & &  & * \end{bmatrix}. \end{equation*}

When we eliminate the (1,1) entry, we get the following factorization:

(6)   \begin{equation*} \underbrace{\begin{bmatrix} * & * & * & & * \\ * & * & & * & \\ * & & * & & \\ & * & & * & \\ * & & &  & * \end{bmatrix}}_{=A} = \begin{bmatrix} 1 & & & & \\ * & 1& & &  \\ * & & 1 & & \\  & & & 1 & \\ * & & &  & 1 \end{bmatrix} \begin{bmatrix} * & * & * & & * \\ & * & \bullet & * & \bullet \\ & \bullet & * & & \bullet \\ & * & & * & \\ & \bullet & \bullet &  & * \end{bmatrix} \end{equation*}

Note that the Schur complement has new additional nonzero entries (marked with a \bullet) not in the original sparse matrix A. The Schur complement of A is denser than A was; there are new fill-in entries. The worst-case scenario for fill-in is the arrowhead matrix:

(7)   \begin{equation*} \underbrace{\begin{bmatrix} * & * & * & \cdots & * \\ * & * & & & \\ * & & * & & \\ \vdots & & & \ddots & \\ * & & &  & * \end{bmatrix}}_{=A} = \begin{bmatrix} 1 & & & & \\ * & 1& & &  \\ * & & 1 & & \\  \vdots & & & \ddots & \\ * & & & & 1 \end{bmatrix} \begin{bmatrix} * & * & * & * & * \\ & * & \bullet & \cdots & \bullet \\ & \bullet & * & \cdots & \bullet \\ & \vdots & \vdots & \ddots & \vdots \\ & \bullet & \bullet &  \cdots & * \end{bmatrix} \end{equation*}

After one step of Gaussian elimination, we went from a matrix with \mathcal{O}(N) nonzeros to a fully dense Schur complement! However, the arrowhead matrix also demonstrates a promising strategy. Simply construct a permutation matrix which reorders the first entry to be the last5For instance, the circular shift permutation P = \begin{bmatrix} & 1 & & & \\ & & 1 & & \\ & & & \ddots & \\ & & & & 1 \\ 1\end{bmatrix}. and then perform Gaussian elimination on the symmetrically permuted matrix PAP^\top instead. In fact, the entire LU factorization can be computed without fill-in:

(8)   \begin{equation*} \underbrace{\begin{bmatrix} * & & & & * \\ & * & & & * \\ & & * & & * \\  & & & \ddots  & \vdots \\ * & * & * & \cdots & * \end{bmatrix}}_{=PAP^\top} = \begin{bmatrix} 1 & & & & \\ & 1& & &  \\ & & 1 & & \\  & & & \ddots & \\ * & * & * & \cdots & 1 \end{bmatrix} \begin{bmatrix} * & &  & & * \\ & * &  & & * \\ & & * &  & * \\ &  & & \ddots & \vdots \\ &  &  &  & * \end{bmatrix}. \end{equation*}

This example shows the tremendous importance of reordering of the rows and columns when computing a sparse LU factorization.
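As a quick illustration (a sketch, assuming SciPy’s SuperLU interface; the options permc_spec and diag_pivot_thresh ask the solver to keep the given ordering), one can compare the fill-in of the arrowhead matrix under the natural ordering and under the reversed ordering:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N = 200
# Arrowhead matrix: a strong diagonal plus a dense first row and column.
rows = np.concatenate([np.arange(N), np.zeros(N - 1, dtype=int), np.arange(1, N)])
cols = np.concatenate([np.arange(N), np.arange(1, N), np.zeros(N - 1, dtype=int)])
vals = np.concatenate([np.full(N, float(N)), np.ones(2 * (N - 1))])
A = sp.csc_matrix((vals, (rows, cols)), shape=(N, N))

# Factor with the natural (identity) ordering, then with the hub vertex
# reordered to the end. The SuperLU options below request no reordering.
lu_bad = spla.splu(A, permc_spec="NATURAL", diag_pivot_thresh=0.0)
perm = np.r_[np.arange(1, N), 0]                     # hub vertex goes last
P = sp.csc_matrix((np.ones(N), (np.arange(N), perm)), shape=(N, N))
lu_good = spla.splu((P @ A @ P.T).tocsc(),
                    permc_spec="NATURAL", diag_pivot_thresh=0.0)

print(lu_bad.L.nnz + lu_bad.U.nnz)    # O(N^2): the factors fill in completely
print(lu_good.L.nnz + lu_good.U.nnz)  # O(N):   no fill-in at all
```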

The Best Reordering

As mentioned above, when computing an LU factorization of a dense matrix, one generally has to reorder the rows (and/or columns) of the matrix to compute the solution accurately. Thus, when computing the LU factorization of a sparse matrix, one has to balance the need to reorder for accuracy and to reorder to reduce fill-in. For these reasons, for the remainder of this post, we shall focus on computing Cholesky factorizations of SPD sparse matrices, where reordering for accuracy is not necessary.6For ill-conditioned and positive semi-definite matrices, one may want to reorder a Cholesky factorization so the result is rank-revealing. This review article has a good discussion of pivoted Cholesky factorization. For most applications, one can successfully compute an accurate Cholesky factorization without any specific accuracy-focused reordering strategy. Since we want the matrix to remain SPD, we must restrict ourselves to symmetric reordering strategies where A is reordered to PAP^\top where P is a permutation matrix.

Our question is deceptively simple: what reordering produces the least fill-in? In matrix language, what permutation P minimizes \operatorname{nnz}(L) where LDL^\top = PAP^\top is the Cholesky factorization of PAP^\top?

Note that, assuming no entries in the Gaussian elimination process exactly cancel, the Cholesky factorization depends only on the sparsity pattern of A (the locations of the zeros and nonzeros) and not on the actual numeric values of A’s entries. This sparsity structure is naturally represented by a graph \mathcal{G}(A) whose nodes are the indices \{1,\ldots,N\} with an edge between i \ne j if, and only if, a_{ij} \ne 0.

Now let’s see what happens when we do Gaussian elimination from a graph point-of-view. When we eliminate the (1,1) entry of the matrix, all nodes of the graph adjacent to node 1 become connected to each other.7Graph theoretically, we add a clique containing the nodes adjacent to 1.

This shows why the arrowhead example is so bad. By eliminating a vertex connected to every node in the graph, the eliminated graph becomes a complete graph.

Reordering the matrix corresponds to choosing the order in which the vertices of the graph are eliminated. Choosing the elimination order is then a puzzle game: eliminate all the vertices of the graph in the order that produces the fewest fill-in edges.8This “graph game” formulation of sparse Gaussian elimination is based on how I learned it from John Gilbert. His slides are an excellent resource for all things sparse matrices!

Finding the best elimination ordering for a sparse matrix (graph) is a good news/bad news situation. For the good news, many graphs possess a perfect elimination ordering, in which no fill-in is produced at all. There is a simple algorithm to determine whether a graph (sparse matrix) possesses a perfect elimination ordering and if so, what it is.9The algorithm is basically just a breadth-first search. Some important classes of graphs can be eliminated perfectly (for instance, trees). More generally, the class of all graphs which can be eliminated perfectly is precisely the set of chordal graphs, which are well-studied in graph theory.

Now for the bad news. The problem of finding the best elimination ordering (with the least fill-in) for a graph is NP-hard. This means, assuming the widely conjectured result that {\rm P} \ne {\rm NP}, that finding the best elimination ordering is a harder computational problem than simply doing Gaussian elimination in any ordering, which has worst-case \mathcal{O}(N^3) complexity! One should not be too pessimistic about this result, however, since (assuming {\rm P} \ne {\rm NP}) all it says is that there exists no polynomial time algorithm guaranteed to produce the absolutely best possible elimination ordering when presented with any graph (sparse matrix). If one is willing to give up on any one of these requirements (polynomial time, a guarantee, the absolutely best ordering, for any graph), further progress may be possible. For instance, there exist several good heuristics which find reasonably good elimination orderings for graphs (sparse matrices) in linear \mathcal{O}(\operatorname{nnz}(A)) (or nearly linear) time.
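For instance, SciPy exposes a few such heuristics: reverse Cuthill–McKee through scipy.sparse.csgraph, and minimum-degree/COLAMD-type column orderings through its SuperLU interface. Here is a small sketch comparing fill for a model problem (the orderings below are general fill-reducing heuristics, not necessarily the nested dissection ordering discussed later):

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 5-point Laplacian on an n-by-n grid: a standard sparse test matrix.
n = 40
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()    # N = n^2 = 1600 unknowns

for ordering in ["NATURAL", "COLAMD", "MMD_AT_PLUS_A"]:
    lu = spla.splu(A, permc_spec=ordering)
    # Fill-reducing orderings produce noticeably sparser LU factors.
    print(ordering, lu.L.nnz + lu.U.nnz)
```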

Can Sparse Matrices be Eliminated in Linear Time?

Let us think about the best reordering question in a different way. So far, we have asked the question “Can we find the best ordering for a sparse matrix?” But another question is equally important: “How efficiently can we solve a sparse matrix, even with the best possible ordering?”

One might optimistically hope that every sparse matrix possesses an elimination ordering such that its Cholesky factorization can be computed in linear time (in the number of nonzeros), meaning that the amount of time needed to solve Ax = b is proportional to the amount of data needed to store the sparse matrix A.

When one tests a proposition like this, one should consider the extreme cases. If the matrix A is dense, then it requires \mathcal{O}(N^3) operations to do Gaussian elimination,10This is neglecting the possibility of acceleration by Strassen-type fast matrix multiplication algorithms. For simplicity, we shall ignore these fast multiplication techniques for the remainder of this post and assume dense Ax = b can be solved no faster than \mathcal{O}(N^3) operations. but A only has \operatorname{nnz}(A) = N^2 nonzero entries. Thus, our proposition cannot hold in unmodified form.

An even more concerning counterexample is given by a matrix A whose graph \mathcal{G}(A) is a \mathcal{O}(\sqrt{N}) \times \mathcal{O}(\sqrt{N}) 2D grid graph.

Sparse matrices with this sparsity pattern (or related ones) appear all the time in discretized partial differential equations in two dimensions. Moreover, they are truly sparse, only having \operatorname{nnz}(A) = \mathcal{O}(N) nonzero entries. Unfortunately, no linear time elimination ordering exists. We have the following theorem:

Theorem: For a sparse matrix A with \mathcal{G}(A) being a \sqrt{N}\times \sqrt{N} 2D grid graph, in any elimination ordering, the Cholesky factorization PAP^\top = LDL^\top requires \Omega(N^{3/2}) operations and satisfies \operatorname{nnz}(L)= \Omega(N\log N).11Big-Omega notation is a cousin of Big-O notation. One should read f(N) = \Omega(g(N)) as “f(N) is no less than a constant multiple of g(N), asymptotically”.

The proof is contained in Theorems 10 and 11 (and the ensuing paragraph) of the classic paper by Lipton, Rose, and Tarjan. Natural generalizations to d-dimensional grid graphs give bounds of \Omega(N^{3(d-1)/d}) time and \operatorname{nnz}(L)=\Omega(N^{2(d-1)/d}) for d > 2. In particular, for 2D finite difference and finite element discretizations, sparse Cholesky factorization takes \Omega(N^{3/2}) operations and produces a Cholesky factor with \operatorname{nnz}(L)= \Omega(N\log N) in the best possible ordering. In 3D, sparse Cholesky factorization takes \Omega(N^{2}) operations and produces a Cholesky factor with \operatorname{nnz}(L)= \Omega(N^{4/3}) in the best possible ordering.

Fortunately, at least these complexity bounds are attainable: there is an ordering which produces a sparse Cholesky factorization with PAP^\top = LDL^\top requiring \Theta(N^{3/2}) operations and with \operatorname{nnz}(L)= \Theta(N\log N) nonzero entries in the Cholesky factor.12Big-Theta notation just means f(N) = \Theta(g(N)) if f(N) = \mathcal{O}(g(N)) and f(N) = \Omega(g(N)). One such asymptotically optimal ordering is the nested dissection ordering, one of the heuristics alluded to in the previous section. The nested dissection ordering proceeds as follows:

  1. Find a separator S consisting of a small number of vertices in the graph \mathcal{G}(A) such that when S is removed from \mathcal{G}(A), \mathcal{G}(A) is broken into a small number of edge-disjoint and roughly evenly sized pieces \mathcal{G}_1,\ldots,\mathcal{G}_k.13In particular, \mathcal{G}(A) is the disjoint union \mathcal{G}(A) = S \cup \mathcal{G}_1 \cup \cdots \mathcal{G}_k and there are no edges between \mathcal{G}_i and \mathcal{G}_j for i\ne j.
  2. Recursively use nested dissection to eliminate each component \mathcal{G}_1,\ldots, \mathcal{G}_k individually.
  3. Eliminate S in any order.

For example, for the 2D grid graph, if we choose S to be a cross through the center of the 2D grid graph, we have a separator of size |S| = \Theta(\sqrt{N}) dividing the graph into 4 roughly \sqrt{N}/2\times \sqrt{N}/2 pieces.
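Here is a minimal sketch of a nested dissection ordering for an n\times n grid, using a cross-shaped separator as described above; the function name and the base-case cutoff are just illustrative choices, and no attempt is made at the optimizations a real ordering code would use.

```python
import numpy as np

def nested_dissection_order(rows, cols, n):
    """Nested dissection elimination order for the grid points
    {(i, j) : i in rows, j in cols} of an n-by-n grid, returned as
    flattened indices i*n + j. Separator vertices are eliminated last."""
    if len(rows) <= 2 or len(cols) <= 2:
        return [i * n + j for i in rows for j in cols]   # tiny block: any order
    mi, mj = rows[len(rows) // 2], cols[len(cols) // 2]
    # The cross {i = mi} union {j = mj} is a separator of size O(sqrt(N)).
    separator = [mi * n + j for j in cols] + [i * n + mj for i in rows if i != mi]
    order = []
    for rpart in (rows[: len(rows) // 2], rows[len(rows) // 2 + 1 :]):
        for cpart in (cols[: len(cols) // 2], cols[len(cols) // 2 + 1 :]):
            order += nested_dissection_order(rpart, cpart, n)  # 4 subgrids
    return order + separator                                   # separator last

n = 31
perm = nested_dissection_order(list(range(n)), list(range(n)), n)
assert sorted(perm) == list(range(n * n))   # a genuine permutation of all N = n^2 vertices
```

Feeding a permutation like this into the sparse factorization experiment from the previous section lets one check the \Theta(N\log N) fill and \Theta(N^{3/2}) work empirically.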

Let us give a brief analysis of this nested dissection ordering. First, consider the sparsity of the Cholesky factor \operatorname{nnz}(L). Let S(N) denote the number of nonzeros in L for an elimination of the \sqrt{N}\times \sqrt{N} 2D grid graph using the nested dissection ordering. Then step 2 of nested dissection requires us to recursively eliminate four \sqrt{N/4} \times \sqrt{N/4} 2D grid graphs. After doing this, for step 3, all of the vertices of the separator might be connected to each other, so the separator graph will potentially have as many as \mathcal{O}(|S|^2) = \mathcal{O}(N) edges, which result in nonzero entries in L. Thus, combining the fill-in from both steps, we get

(9)   \begin{equation*} S(N) = 4S\left(\frac{N}{4}\right) + \mathcal{O}(N). \end{equation*}

Solving this recurrence using the master theorem for recurrences gives \operatorname{nnz}(L) = S(N) = \mathcal{O}(N\log N). If one instead wants the time T(N) required to compute the Cholesky factorization, note that for step 3, in the worst case, all of the vertices of the separator might be connected to each other, leading to a dense \sqrt{N}\times \sqrt{N} matrix. Since eliminating a dense \sqrt{N}\times \sqrt{N} matrix requires \mathcal{O}((\sqrt{N})^3) = \mathcal{O}(N^{3/2}) operations, we get the recurrence

(10)   \begin{equation*} T(N) = 4T\left(\frac{N}{4}\right) + \mathcal{O}(N^{3/2}), \end{equation*}

which solves to T(N) = \mathcal{O}(N^{3/2}).

Conclusions

As we’ve seen, sparse direct methods (as exemplified here by sparse Cholesky) possess fundamental scalability challenges for solving large problems. For the important class of 2D and 3D discretized partial differential equations, the time to solve Ax = b scales like \Theta(N^{3/2}) and \Theta(N^2), respectively. For truly large-scale problems, these costs can make sparse direct methods prohibitively expensive.

This is really the beginning of the story for sparse matrices, not the end. The scalability challenges of classical sparse direct methods have spawned many exciting alternative approaches, each of which combats these challenges for a different class of sparse matrices in a different way.

Upshot: Sparse matrices occur everywhere in applied mathematics, and many operations on them can be done very fast. However, the speed of computing an LU factorization of a sparse matrix depends significantly on the arrangement of its nonzero entries. Many sparse matrices can be factored quickly, but some require significant time to factor in any reordering.

Big Ideas in Applied Math: Smoothness and Degree of Approximation

At its core, computational mathematics is about representing the infinite by the finite. Even attempting to store a single arbitrary real number requires an infinite amount of memory to store its infinitely many potentially nonrepeating digits.1It is a well-known result that a real number has a repeating decimal expansion if, and only if, it is rational. When dealing with problems in calculus, however, the problem is even more severe as we want to compute with functions on the real line. In effect, a function is an uncountably long list of real numbers, one for each value of the function’s domain. We certainly cannot store an infinite list of numbers with infinitely many digits on a finite computer!

Thus, our only hope to compute with functions is to devise some sort of finite representation for them. The most natural representation is to describe a function by a list of real numbers (and then store approximations of these numbers on our computer). For instance, we may approximate the function by a polynomial a_1 + a_2 x + \cdots + a_{M} x^{M-1} of degree M-1 and then store the M coefficients a_1,\ldots,a_{M}. From this list of numbers, we can then reconstitute an approximate version of the function. For instance, in our polynomial example, our approximate version of the function is just a_1 + a_2 x + \cdots + a_{M} x^{M-1}. Naturally, there is a tradeoff between the length of our list of numbers and how accurate our approximation is. This post is about that tradeoff.

The big picture idea is that the “smoother” a function is, the easier it will be to approximate it with a small number of parameters. Informally, we have the following rule of thumb: if a function f on a one-dimensional domain possesses s nice derivatives, then f can be approximated by an M-parameter approximation with error decaying at least as fast as 1/M^s. This basic result appears in many variants in approximation theory with different precise definitions of the terms “s nice derivatives” and “M-parameter approximation”. Let us work out the details of the approximation problem in one concrete setting using Fourier series.

Approximation by Fourier Series

Consider a complex-valued and 2\pi-periodic function f defined on the real line. (Note that, by a standard transformation, there are close connections between approximation of 2\pi-periodic functions on the whole real line and functions defined on a compact interval [a,b].2Specifically, suppose that h is a function defined on [a,b]. Then define a function g on [-1,1] by g(x) = h(((b-a)x + (b+a))/2). This linear rescaling of the domain is very simple and easy to understand. Now define a 2\pi-periodic function f on \mathbb{R} by f(\theta) = g(\cos(\theta)). There are very close connections between approximation of f and g. For example, Fourier cosine expansions of f are equivalent to Chebyshev polynomial expansions of g. For more on this subject, Trefethen’s Approximation Theory and Approximation Practice is an excellent reference.) If f is square-integrable, then f possesses a Fourier expansion

(1)   \begin{equation*} f(\theta) = \sum_{k=-\infty}^\infty \hat{f}_k e^{{\rm i} k\theta}, \quad \hat{f}_k = \frac{1}{2\pi} \int_{-\pi}^\pi f(\theta) e^{-{\rm i}k\theta} \, d\theta. \end{equation*}

The infinite series converges in the L^2 sense, meaning that \lim_{m\to\infty} \|f - f_m\|_{L^2} = 0, where f_m is the truncated Fourier series f_m(\theta) = \sum_{k=-m}^m \hat{f}_k e^{{\rm i} k\theta} and \|\cdot\|_{L^2} is the L^2 norm \|f\|_{L^2} = \sqrt{\int_{-\pi}^\pi |f(\theta)|^2 \, d\theta}.3One may show with considerable analysis that the Fourier series converges in other senses as well, for example almost everywhere. We also have the Plancherel theorem, which states that

(2)   \begin{equation*} \|f\|_{L^2}^2 = \int_{-\pi}^\pi |f(\theta)|^2\, d\theta = 2\pi \sum_{k=-\infty}^{\infty} |\hat{f}_k|^2. \end{equation*}

Note that the convergence of the Fourier series is fundamentally a statement about approximation. The fact that the Fourier series converges means that the truncated Fourier series f_m of 2m+1 terms is an arbitrarily close approximation to f (measured in the L^2 norm). However, the number M=2m+1 of terms we need to store might be quite large if a large value of m is needed for \|f - f_m\|_{L^2} to be small. As we shall soon see, if the function f is “smooth”, \|f - f_m\|_{L^2} will be small even for moderate values of m.

Smoothness

Often, in analysis, we refer to a function as smooth when it possesses derivatives of all orders. In one dimension, this means that the jth derivative f^{(j)} exists for every integer j \ge 0. In this post, we shall speak of smoothness as a more graded notion: loosely, a function g is smoother than a function f if g possesses more derivatives than f or the magnitude of g‘s derivatives are smaller. This conception of smoothness accords more with the plain-English definition of smoothness: the graph of a very mildly varying function g with a discontinuity in its 33rd derivative looks much smoother to the human eye than the graph of a highly oscillatory and jagged function f that nonetheless possesses derivatives of all orders.4One might refer to the precise concept of possessing derivatives of all orders as C^\infty smoothness.

For a function defined in terms of a Fourier series, it is natural to compute its derivative by formally differentiating the Fourier series term-by-term:

(3)   \begin{equation*} f'(\theta) = \sum_{k=-\infty}^\infty ({\rm i}k) \hat{f}_k e^{{\rm i} k\theta}. \end{equation*}

This formal Fourier series converges if, and only if, the L^2 norm of this putative derivative f', as computed with the Plancherel theorem, is finite: \|f'\|_{L^2}^2 = 2\pi \sum_{k=-\infty}^\infty |k|^2 |\hat{f}_k|^2 < \infty. This “derivative” f' may not be a derivative of f in the classical sense. For instance, using this definition, the absolute value function g(x) = |x| possesses a derivative g'(x) = +1 for x > 0 and g'(x) = -1 for x < 0. For the derivative of f to exist in the sense of Eq. (3), f need not be differentiable at every point, but its derivative must define a square-integrable function on the points where it is differentiable. We shall call the derivative f' given by Eq. (3) a weak derivative of f.

If a function f has s square-integrable weak derivatives f^{(j)}(\theta) = \sum_{k=-\infty}^\infty ({\rm i}k)^j \hat{f}_k e^{{\rm i}k\theta} for 1\le j \le s, then we say that f belongs to the Sobolev space H^s. The Sobolev space H^s is equipped with the norm

(4)   \begin{equation*} \|f\|_{H^s} = \sqrt{\sum_{j=0}^s \|f^{(j)}\|_{L^2}^2}. \end{equation*}

The Sobolev norm \|\cdot\|_{H^s} is a quantitative measure of qualitative smoothness. The smaller the Sobolev norm \|f\|_{H^s} of f, the smaller the derivatives of f are. As we will see, we can use this to bound the approximation error.

Smoothness and Degree of Approximation

If f is approximated by f_m, the error of approximation is given by

(5)   \begin{equation*} f(\theta) - f_m(\theta) = \sum_{|k|>m} \hat{f}_k e^{{\rm i}k\theta}, \quad \|f - f_m\|_{L^2} = \sqrt{ \sum_{|k|>m} |\hat{f}_k|^2 }. \end{equation*}

Suppose that f \in H^s (that is, f has s square integrable derivatives). Then we may deduce the inequality

(6)   \begin{equation*} \begin{split} \|f - f_m\|_{L^2} &= \sqrt{ \sum_{|k|>m} |\hat{f}_k|^2 } \\ &= \sqrt{ \sum_{|k|>m} |k|^{-2s}|k|^{2s}|\hat{f}_k|^2 }\\ &\le m^{-s} \sqrt{ \sum_{|k|>m} |k|^{2s}|\hat{f}_k|^2 } \\ &\le m^{-s}\|f\|_{H^s}. \end{split} \end{equation*}

The first “\le” follows from the fact that |k|^{-2s} < m^{-2s} for |k|>m. This very important result is a precise instantiation of our rule of thumb from earlier: if f possesses s nice (i.e. square-integrable) derivatives, then the (L^2) approximation error for an M=2m+1-term Fourier approximation decays at least as fast as 1/M^s.
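One can check this rate numerically. The sketch below (assuming the test function f(\theta) = |\sin\theta|, which has one square-integrable weak derivative and in fact a bit more, but not two) approximates the Fourier coefficients with the FFT and measures the truncation error as m grows; the observed decay is roughly m^{-3/2}, comfortably within the 1/m guarantee.

```python
import numpy as np

# Test function: f(theta) = |sin(theta)|, which has one square-integrable
# weak derivative (and a bit more, but not two).
K = 2 ** 14                                    # number of sample points
theta = 2 * np.pi * np.arange(K) / K
fhat = np.fft.fft(np.abs(np.sin(theta))) / K   # approximate Fourier coefficients
k = np.fft.fftfreq(K, d=1.0 / K)               # integer frequencies

for m in [4, 8, 16, 32, 64]:
    # L^2 truncation error via Plancherel (up to a constant factor).
    err = np.sqrt(np.sum(np.abs(fhat[np.abs(k) > m]) ** 2))
    print(m, err)   # shrinks by roughly 2^{3/2} each time m doubles
```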

Higher Dimensions

The results for one dimension can easily be extended to consider functions f defined on d-dimensional space which are 2\pi-periodic in every argument.5e.g. f(\theta_1,\ldots, \theta_j+2\pi,\ldots,\theta_d) = f(\theta_1,\ldots,\theta_j,\ldots,\theta_d) for every 1\le j \le d For physics-based scientific simulation, we are often interested in d=2 or d=3, but for more modern problems in data science, we might be interested in very large dimensions d.

Letting \mathbb{Z}^d denote the set of all d-tuples of integers, one can show that one has the d-dimensional Fourier series

(7)   \begin{equation*} f(\theta) = \sum_{k \in \mathbb{Z}^d} \hat{f}_k e^{{\rm i}k\cdot \theta}, \quad \hat{f}_k = \frac{1}{(2\pi)^d} \int_{[-\pi,\pi]^d} f(\theta) e^{-{\rm i}k\cdot \theta} \, d\theta. \end{equation*}

Here, we denote k\cdot \theta to be the Euclidean inner product of the d-dimensional vectors k and \theta, k\cdot \theta = k_1\theta_1 + \cdots + k_d\theta_d. A natural generalization of the Plancherel theorem holds as well. Let \max |k| denote the maximum of |k_1|,\ldots,|k_d|. Then, we have the truncated Fourier series f_m = \sum_{\max |k| \le m} \hat{f}_k e^{{\rm i}k\cdot \theta}. Using the same calculations from the previous section, we deduce a very similar approximation property

(8)   \begin{equation*} \|f - f_m\|_{L^2} = \sqrt{ \sum_{\max|k|>m} |\hat{f}_k|^2 }  \le m^{-s}\|f\|_{H^s}. \end{equation*}

There’s a pretty big catch though. The approximate function f_m possesses M = (2m+1)^d terms! In order to include each of the first m Fourier modes in each of d dimensions, the number of terms M in our Fourier approximation must grow exponentially in d! In particular, the approximation error satisfies a bound

(9)   \begin{equation*} \|f - f_m\|_{L^2}  \le \left(\frac{M^{1/d}-1}{2}\right)^{-s}\|f\|_{H^s} \le C(s,d)M^{-s/d} \|f\|_{H^s}, \end{equation*}

where C(s,d) \ge 0 is a constant depending only on s and d.

This is the so-called curse of dimensionality: to approximate a function in d dimensions, we need exponentially many terms in the dimension d. In higher dimensions, our rule of thumb needs to be modified: if a function f on a d-dimensional domain possesses s nice derivatives, then f can be approximated by an M-parameter approximation with error decaying at least as fast as 1/M^{s/d}.

The Theory of Nonlinear Widths: The Speed Limit of Approximation Theory

So far, we have shown that if one approximates a function f on a d-dimensional space by truncating its Fourier series to M terms, the approximation error decays at least as fast as 1/M^{s/d}. Can we do better than this, particularly in high-dimensions where the error decay can be very slow if s \ll d?

One must be careful about how one phrases this question. Suppose I ask “what is the best way of approximating a function f”? A subversive answer is that we may approximate f by a single-parameter approximation of the form \hat{f} = af with a=1! Consequently, there is a one-parameter approximation procedure that approximates every function perfectly. The problem with this one-parameter approximation is obvious: it is terrible at approximating most functions different from f. Thus, the question “what is the best way of approximating a particular function?” is ill-posed. We must instead ask the question “what is the best way of approximating an entire class of functions?” For us, the class of functions shall be those which are sufficiently smooth: specifically, we shall consider the class of functions whose Sobolev norm satisfies a bound \|f\|_{H^s} \le B. Call this class W.

As outlined at the beginning, an approximation procedure usually begins by taking the function f and writing down a list of M numbers a_1,\ldots,a_M. Then, from this list of numbers we reconstruct a function \hat{f} which serves as an approximation to f. Formally, this can be viewed as a mathematical function \Phi which takes f \in W to a tuple (a_1,\ldots,a_M) \in \mathbb{C}^M followed by a function \Lambda which takes (a_1,\ldots,a_M) and outputs a continuous 2\pi-periodic function \hat{f} \in C_{2\pi}(\mathbb{R}).

(10)   \begin{equation*} f \stackrel{\Phi}{\longmapsto}(a_1,\ldots,a_M)\stackrel{\Lambda}{\longmapsto} \hat{f}, \quad \Phi : W \to \mathbb{C}^M, \quad \Lambda : \mathbb{C}^M \to C_{2\pi}(\mathbb{R}). \end{equation*}

Remarkably, there is a mathematical theory which gives sharp bounds on the expressive power of any approximation procedure of this type. This theory of nonlinear widths serves as a sort of speed limit in approximation theory: no method of approximation can be any better than the theory of nonlinear widths says it can. The statement is somewhat technical, and we advise the reader to look up a precise statement of the result before using it in any serious work. Roughly, the theory of nonlinear widths states that for any continuous approximation procedure6That is, an approximation procedure for which \Phi : W \to \mathbb{C}^M is a continuous function where W is equipped with the topology defined by the norm \|\cdot\|_{H^s}. that is able to approximate every function in W with L^2 approximation error no more than \epsilon, the number of parameters M must be at least some constant multiple of \epsilon^{-d/s}. Equivalently, the worst-case approximation error for a function in W with an M parameter continuous approximation is at least some multiple of 1/M^{s/d}.

In particular, the theory of nonlinear widths states that the approximation properties of truncated Fourier series are as good as those of any method for approximating functions in the class W, as they exactly meet the “speed limit” given by the theory of nonlinear widths. Thus, approximating using truncated Fourier series is, in a certain very precise sense, as good as any other approximation technique you can think of for approximating arbitrary functions from W: splines, rational functions, wavelets, and artificial neural networks must follow the same speed limit. Make no mistake, these other methods have definite advantages, but degree of approximation for the class W is not one of them. Also, note that the theory of nonlinear widths shows that the curse of dimensionality is not merely an artifact of Fourier series; it affects all high-dimensional approximation techniques.

For the interested reader, see the following footnotes for two important ways one may perform approximations better than the theory of nonlinear widths within the scope of its rules.7The theory of nonlinear widths holds for continuous methods of approximation. This means that discontinuous approximation procedures may circumvent its bounds. Indeed, such discontinuous approximation procedures exist using probabilistic techniques. These methods are of questionable use in practice since discontinuous approximation procedures, by their nature, are extremely sensitive to the perturbations which are ubiquitous in performing computations on computers.8The theory of nonlinear widths holds for means of approximating the entire class W. More efficient methods may exist for meaningful subclasses of W. For instance, Mhaskar and Poggio show that for functions f satisfying a compositional property, that they can effectively be approximated by multilayer artificial neural networks.

Upshot: The smoother a function is, the better it can be approximated. Specifically, one can approximate a function on d dimensions with s nice derivatives with approximation error decaying with rate at least 1/M^{s/d}. In the case of 2\pi-periodic functions, such an approximation can easily be obtained by truncating the function’s Fourier series. This error decay rate is the best one can hope for to approximate all functions of this type.

Big Ideas in Applied Math: The Schur Complement

Given the diversity of applications of mathematics, the field of applied mathematics lacks a universally accepted set of core concepts which most experts would agree all self-proclaimed applied mathematicians should know. Further, much mathematical writing is very carefully written, and many important ideas can be obscured by precisely worded theorems or buried several steps into a long proof.

In this series of blog posts, I hope to share my personal experience with some techniques in applied mathematics which I’ve seen pop up many times. My goal is to isolate a single particularly interesting idea and provide a simple explanation of how it works and why it can be useful. In doing this, I hope to collect my own thoughts on these topics and write an introduction to these ideas of the sort I wish I had when I was first learning this material.

Given my fondness for linear algebra, I felt an appropriate first topic for this series would be the Schur Complement. Given matrices A, B, C, and D of sizes n\times n, n\times m, m\times n, and m\times m with A invertible, the Schur complement is defined to be the matrix D - CA^{-1}B.

The Schur complement naturally arises in block Gaussian elimination. In vanilla Gaussian elimination, one begins by using the (1,1)-entry of a matrix to “zero out” its column. Block Gaussian elimination extends this idea by using the n\times n submatrix occupying the top-left portion of a matrix to “zero out” all of the first n columns together. Formally, given the matrix \begin{bmatrix} A & B \\ C & D \end{bmatrix}, one can check by carrying out the multiplication that the following factorization holds:

(1)   \begin{equation*} \begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} I_n & 0_{n\times m} \\ CA^{-1} & I_m\end{bmatrix} \begin{bmatrix} A & B \\ 0_{m\times n} & D - CA^{-1}B \end{bmatrix}. \end{equation*}

Here, we let I_j denote an identity matrix of size j\times j and 0_{j\times k} the j\times k zero matrix. We use the notation of block (or partitioned) matrices where, in this case, an (n+m)\times (n+m) matrix is written out as a 2\times 2 “block” matrix whose entries are themselves matrices of the appropriate sizes, such that all matrices occurring in one block row (or column) have the same number of rows (or columns). Two block matrices which are blocked in a compatible way can be multiplied just like two regular matrices, taking care to respect the noncommutativity of matrix multiplication.
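As a sanity check, here is a short NumPy sketch verifying the block factorization in Eq. (1) on a random example:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, m))
C, D = rng.standard_normal((m, n)), rng.standard_normal((m, m))

M = np.block([[A, B], [C, D]])
S = D - C @ np.linalg.solve(A, B)              # Schur complement D - C A^{-1} B

# Block Gaussian elimination, Eq. (1):
lower = np.block([[np.eye(n), np.zeros((n, m))],
                  [C @ np.linalg.inv(A), np.eye(m)]])
upper = np.block([[A, B], [np.zeros((m, n)), S]])
print(np.allclose(M, lower @ upper))           # True
```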

The Schur complement appears naturally in the expression for the inverse of \begin{bmatrix} A & B \\ C & D\end{bmatrix}. One can verify that for a block upper triangular matrix M = \begin{bmatrix} M_{11} & M_{12} \\ 0_{m\times n} & M_{22}\end{bmatrix}, we have the inverse formula

(2)   \begin{equation*} M^{-1} = \begin{bmatrix} M_{11} & M_{12} \\ 0_{m\times n} & M_{22}\end{bmatrix}^{-1} = \begin{bmatrix} M_{11}^{-1} & -M_{11}^{-1} M_{12}M_{22}^{-1} \\ 0_{m\times n} & M_{22}^{-1}\end{bmatrix}. \end{equation*}

(This can be verified by carrying out the block multiplication MM^{-1} for the proposed formula for M^{-1} and verifying that one obtains the identity matrix.) A similar formula holds for block lower triangular matrices. From here, we can deduce a formula for the inverse of \begin{bmatrix} A & B \\ C & D\end{bmatrix}. Let S = D - CA^{-1}B be the Schur complement. Then

(3)   \begin{equation*} \begin{split} \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} &= \begin{bmatrix} A & B \\ 0_{m\times n} & S \end{bmatrix}^{-1} \begin{bmatrix} I_n & 0_{n\times m} \\ CA^{-1} & I_m\end{bmatrix}^{-1} \\ &= \begin{bmatrix} A^{-1} & -A^{-1}BS^{-1} \\ 0_{m\times n} & S^{-1} \end{bmatrix} \begin{bmatrix} I_n & 0_{n\times m} \\ -CA^{-1} & I_m\end{bmatrix} \\ &= \begin{bmatrix} A^{-1} + A^{-1}BS^{-1}CA^{-1} & -A^{-1}BS^{-1} \\ -S^{-1}CA^{-1} & S^{-1} \end{bmatrix}. \end{split} \end{equation*}

This remarkable formula gives the inverse of \begin{bmatrix} A & B \\ C & D\end{bmatrix} in terms of A^{-1}, S^{-1}, B, and C. In particular, the (2,2)-block entry of \begin{bmatrix} A & B \\ C & D\end{bmatrix}^{-1} is simply the inverse of the Schur complement.
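Continuing the sketch above, one can also confirm numerically that the (2,2) block of the inverse is the inverse of the Schur complement:

```python
# Continuing the sketch above: the (2,2) block of M^{-1} equals S^{-1}.
Minv = np.linalg.inv(M)
print(np.allclose(Minv[n:, n:], np.linalg.inv(S)))   # True
```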

Here, we have seen that if one starts with a large matrix and performs block Gaussian elimination, one ends up with a smaller matrix called the Schur complement whose inverse appears in the inverse of the original matrix. Very often, however, it benefits us to run this trick in reverse: we begin with a small matrix, which we recognize to be the Schur complement of a larger matrix. In general, dealing with a larger matrix is more difficult than a smaller one, but very often this larger matrix will have special properties which allow us to more efficiently compute the inverse of the original matrix.

One beautiful application of this idea is the Sherman-Morrison-Woodbury matrix identity. Suppose we want to find the inverse of the matrix A - BD^{-1}C. Notice that this is the Schur complement of the matrix \begin{bmatrix} D & C \\ B & A \end{bmatrix}, which is the same as \begin{bmatrix} A & B \\ C & D \end{bmatrix} after reordering.1Specifically, switch the first n rows with the last m rows and do the same with the columns. This defines a permutation matrix P = \begin{bmatrix} 0_{m\times n} & I_m \\ I_n & 0_{n\times m} \end{bmatrix} such that \begin{bmatrix} D & C \\ B & A \end{bmatrix} = P\begin{bmatrix} A & B \\ C & D \end{bmatrix}P^\top. Alternately, and perhaps more cleanly, one may define two Schur complements of the block matrix M = \begin{bmatrix} A & B \\ C & D \end{bmatrix}: one by “eliminating A”, M/A = D - CA^{-1}B, and the other by “eliminating D”, M/D = A - BD^{-1}C. Following the calculation in Eq. (3), just like the inverse of the Schur complement D - CA^{-1}B appears in the (2,2) entry of \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1}, the inverse of the alternate Schur complement A - BD^{-1}C can be shown to appear in the (1,1) entry of \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1}. Thus, comparing with Eq. (3), we deduce the Sherman-Morrison-Woodbury matrix identity:

(4)   \begin{equation*} (A - BD^{-1}C)^{-1} = A^{-1} + A^{-1}B(D-CA^{-1}B)^{-1}CA^{-1}. \end{equation*}

To see how this formula can be useful in practice, suppose that we have a fast way of solving the system of linear equations Ax = b. Perhaps A is a simple matrix like a diagonal matrix or we have already pre-computed an LU factorization for A. Consider the problem of solving the rank-one updated problem (A+uv^\top)x = b. Using the Sherman-Morrison-Woodbury identity with B=u, D=-1, and C = v^\top, we have that

(5)   \begin{equation*} x = (A+uv^\top)^{-1}b = A^{-1}b + A^{-1}u (-1-v^\top A^{-1}u)^{-1}v^\top A^{-1}b. \end{equation*}

Careful observation of this formula shows how we can compute x (solving (A+uv^\top)x = b) by only solving two linear systems Ax_1 = b for x_1 = A^{-1}b and Ax_2 = u for x_2 = A^{-1}u.2Further economies can be saved if one has already previously computed x_1, which may be the case in many applications.
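Here is a small sketch of this recipe using SciPy’s LU routines (lu_factor and lu_solve); the factorization of A is computed once and reused for both solves:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
N = 500
A = rng.standard_normal((N, N)) + N * np.eye(N)   # a well-conditioned test matrix
u, v, b = rng.standard_normal((3, N))

lu, piv = lu_factor(A)                    # expensive factorization, done once
x1 = lu_solve((lu, piv), b)               # x1 = A^{-1} b
x2 = lu_solve((lu, piv), u)               # x2 = A^{-1} u
x = x1 - x2 * (v @ x1) / (1.0 + v @ x2)   # Eq. (5), rearranged

print(np.allclose((A + np.outer(u, v)) @ x, b))   # True
```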

Here’s another variant of the same idea. Suppose we want to solve the linear system of equations (D + uv^\top)x = b where D is a diagonal matrix. Then we can immediately write down the lifted system of linear equations

(6)   \begin{equation*} \underbrace{\begin{bmatrix} -1 & v^\top \\ u & D \end{bmatrix}}_{:=M}\begin{bmatrix} y \\ x \end{bmatrix} = \begin{bmatrix} 0 \\ b \end{bmatrix}. \end{equation*}

One can easily see that D+uv^\top is the Schur complement of the matrix M (with respect to the (1,1) block). This system of linear equations is sparse in the sense that most of its entries are zero and can be efficiently solved by sparse Gaussian elimination, for which there exists high quality software. Easy generalizations of this idea can be used to effectively solve many “sparse + low-rank” problems.
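A minimal sketch of this trick, assuming SciPy’s sparse machinery, might look as follows:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
N = 1000
D = sp.diags(1.0 + rng.random(N))                 # the diagonal (sparse) part
u, v, b = rng.standard_normal((3, N))

# The lifted system of Eq. (6): (1,1) entry -1, (2,2) block D.
M = sp.bmat([[sp.csr_matrix([[-1.0]]), sp.csr_matrix(v[None, :])],
             [sp.csr_matrix(u[:, None]), D]]).tocsc()
yx = spla.spsolve(M, np.concatenate([[0.0], b]))
x = yx[1:]                                        # drop the auxiliary unknown y

print(np.allclose(D @ x + u * (v @ x), b))        # x solves (D + u v^T) x = b
```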

Another example of the power of the Schur complement is in least-squares problems. Consider the problem of minimizing \|Ax - b\|, where A is a matrix with full column rank and \|\cdot\| is the Euclidean norm of a vector, \|x\|^2 = x^\top x. It is well known that the solution x satisfies the normal equations A^\top A x = A^\top b. However, if the matrix A is even moderately ill-conditioned, the matrix A^\top A will be much more ill-conditioned (the condition number will be squared), leading to a loss of accuracy. It is for this reason that it is preferable to solve the least-squares problem with QR factorization. However, if QR factorization isn’t available, we can use the Schur complement trick instead. Notice that A^\top A is the Schur complement of the matrix \begin{bmatrix} -I_{m} & A \\ A^\top & 0_{n\times n} \end{bmatrix}. Thus, we can solve the normal equations by instead solving the (potentially, see footnote) much better-conditioned system3More precisely, one should scale the identity in the (1,1) block of this system to be on the order of the size of the entries in A. The conditioning is sensitive to the scaling. If one selects a scale s = 1/\sqrt{2}\cdot \sigma_{\rm min}(A) proportional to the smallest singular value \sigma_{\rm min}(A) of A and constructs M = \begin{bmatrix} -sI_m & A \\ A^\top & 0_{n\times n}\end{bmatrix}, then one can show that the two-norm condition number of M is no more than twice that of A. If one picks s to be roughly equal to the largest singular value \sigma_{\rm max}(A), then the two-norm condition number of M is roughly the square of the condition number of A. The accuracy of this approach may be less sensitive to the scaling parameter than this condition number analysis suggests.

(7)   \begin{equation*} \begin{bmatrix} -I_{m} & A \\ A^\top & 0_{n\times n} \end{bmatrix} \begin{bmatrix} r \\ x \end{bmatrix} = \begin{bmatrix} b \\ 0_n \end{bmatrix}. \end{equation*}

In addition to (often) being much better-conditioned, this system is also highly interpretable. Multiplying out the first block row gives the equation -r + Ax = b which simplifies to r = Ax - b. The unknown r is nothing but the least-squares residual. The second block row gives A^\top r = 0_n, which encodes the condition that the residual is orthogonal to the range of the matrix A. Thus, by lifting the normal equations to a large system of equations by means of the Schur complement trick, one derives an interpretable way of solving the least-squares problem by solving a linear system of equations, no QR factorization or ill-conditioned normal equations needed.
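As an illustration, here is a short sketch solving a least-squares problem through the augmented system in Eq. (7) (with the simple scaling s = 1 for the identity block) and comparing against a standard least-squares solver:

```python
import numpy as np
import scipy.linalg as la

rng = np.random.default_rng(0)
m, n = 100, 10
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Augmented ("lifted") system of Eq. (7), with the simple scaling s = 1.
M = np.block([[-np.eye(m), A], [A.T, np.zeros((n, n))]])
rx = la.solve(M, np.concatenate([b, np.zeros(n)]))
r, x = rx[:m], rx[m:]                        # residual and least-squares solution

print(np.allclose(x, la.lstsq(A, b)[0]))     # matches a standard least-squares solve
print(np.allclose(r, A @ x - b))             # and r really is the residual A x - b
```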

The Schur complement trick continues to have use in areas of more contemporary interest. For example, the Schur complement trick plays a central role in the theory of sequentially semiseparable matrices which is a precursor to many recent developments in rank-structured linear solvers. I have used the Schur complement trick myself several times in my work on graph-induced rank-structures.

Upshot: The Schur complement appears naturally when one does (block) Gaussian elimination on a matrix. One can also run this process in reverse: if one recognizes a matrix expression (involving a product of matrices potentially added to another matrix) as being a Schur complement of a larger matrix, one can often get considerable dividends by writing this larger matrix down. Examples include a proof of the Sherman-Morrison-Woodbury matrix identity (see Eqs. (3-4)), techniques for solving a low-rank update of a linear system of equations (see Eqs. (5-6)), and a stable way of solving least-squares problems without the need to use QR factorization (see Eq. (7)).

Additional resource: My classmate Chris Yeh has a great introduction to the Schur complements focusing more on positive semidefinite and rank-deficient matrices.


Edits: This blog post was edited to clarify the conditioning of the augmented linear system Eq. (7) and to include a reference to Chris’ post on Schur complements.

Book Review: Matrix Theory by Fuzhen Zhang

Linear algebra and matrix theory is a difficult subject to learn, with the landscape of textbooks balkanized into matrix-oriented introductory texts oriented towards first- and second-year engineering and science students, vector space-oriented intermediate texts geared towards one of the first proof-based mathematics courses for a mathematics student, and more advanced texts like Bhatia’s Matrix Analysis which are demanding and challenging mathematical journeys intended for graduate students and researchers. For the student interested in numerical analysis and scientific computation, there can be a huge gap from, say, Gilbert Strang’s Introduction to Linear Algebra or Sheldon Axler’s Linear Algebra Done Right to Rajendra Bhatia’s Matrix Analysis.

Matrix Theory: Basic Results and Techniques by Fuzhen Zhang beautifully fills this gap.

I discovered the book by chance and was pleasantly surprised to find a book packed with useful insights. The book’s subtitle, basic results and techniques, is appropriate, as the book places a large emphasis on developing the reader’s “matrix kung-fu”, as the author describes it. In a particularly striking example of this early in the book, Zhang furnishes four proofs that AB and BA have the same nonzero eigenvalues. This is an act of pedagogical brilliance that more textbooks would be wise to replicate. By presenting multiple proofs, the author shows the reader that there are often many paths to the same result. At the same time, Zhang shows the reader the benefit of having a wide toolbox by showing how one of the four methods is unable to give any information about the multiplicity of the eigenvalues, while the others can. Zhang’s focus on techniques is careful and illuminating, demonstrating many different examples where a proof technique is useful as well as examples where the technique falls short in solving a problem. Rather than merely teaching results, Matrix Theory teaches its reader tools with which to derive results.

One of the tools the book places emphasis on right from the start is block (or partitioned) matrices. To me, reasoning about matrices and linear operators by using 2\times 2 blocks is an absolutely indispensable tool, which I only learned after having taken three courses on linear algebra. The emphasis on block matrices is maintained appropriately throughout the book, beginning with the second chapter, which is entirely dedicated to this useful approach.

The book has excellent coverage, stretching from introductory results through canonical forms to Kronecker and Hadamard products to sections on the most important classes of matrices (e.g. nilpotent, tridiagonal, circulant, Vandermonde, Hadamard, doubly stochastic, nonnegative, unitary, contraction, positive (semi)definite, Hermitian, and normal matrices) to majorization and matrix inequalities. The book’s excellent exposition is complemented by a suite of appropriately difficult exercises.

The book is a terrific reference on the tips and tricks of working with matrices and is excellent preparation for a more advanced monograph on matrix theory. I cannot endorse enough this underappreciated gem of a textbook.