Big Ideas in Applied Math: Low-rank Matrices

Let’s start our discussion of low-rank matrices with an application. Suppose that there are 1000 weather stations spread across the world, and we record the temperature during each of the 365 days in a year.1I borrow the idea for the weather example from Candes and Plan. If we were to store each of the temperature measurements individually, we would need to store 365,000 numbers. However, we have reasons to believe that significant compression is possible. Temperatures are correlated across space and time: If it’s hot in Arizona today, it’s likely it was warm in Utah yesterday.

If we are particularly bold, we might conjecture that the weather approximately experiences a sinusoidal variation over the course of the year:

(1)   \begin{equation*} \mbox{temperature at station $i$ on day $j$} \approx a_i + b_i \sin\left( 2\pi \times \frac{j}{365} + \phi \right). \end{equation*}

For a station i, a_i denotes the average temperature of the station and b_i denotes the maximum deviation above or below this average, signed so that it is warmer than average in the Northern hemisphere during June-August and colder than average in the Southern hemisphere during these months. The phase shift \phi is chosen so the hottest (or coldest) day in the year occurs at the appropriate time. This model is clearly grossly inexact: The weather does not satisfy a simple sinusoidal model. However, we might plausibly expect it to be fairly informative. Further, we have massively compressed our data, only needing to store the 2000 \ll 365,000 numbers a_1,\ldots,a_{1000},b_1,\ldots,b_{1000} rather than our full data set of 365,000 temperature values.

Let us abstract this approximation procedure in a linear algebraic way. Let’s collect our weather data into a matrix W with 1000 rows, one for each station, and 365 columns, one for each day of the year. The entry W_{ij} corresponding to station i and day j is the temperature at station i on day j. The approximation Eq. (1) corresponds to the matrix approximation

(2)   \begin{equation*} W \approx \underbrace{\begin{bmatrix} a_1 + b_1 \sin\left( 2\pi \times \frac{1}{365} + \phi \right) & \cdots & a_1 + b_1 \sin\left( 2\pi \times \frac{365}{365} + \phi \right) \\ \vdots & \ddots & \vdots \\ a_{1000} + b_{1000} \sin\left( 2\pi \times \frac{1}{365} + \phi \right) & \cdots & a_{1000} + b_{1000} \sin\left( 2\pi \times \frac{365}{365} + \phi \right) \end{bmatrix}}_{:=\hat{W}}. \end{equation*}

Let us call the matrix on the right-hand side of Eq. (2) \hat{W} for ease of discussion. When presented in this linear algebraic form, it’s less obvious in what way \hat{W} is simpler than W, but we know from Eq. (1) and our previous discussion that \hat{W} is much more efficient to store than W. This leads us naturally to the following question: Linear algebraically, in what way is \hat{W} simpler than W?

The answer is that the matrix \hat{W} has low rank. The rank of the matrix \hat{W} is 2 whereas W almost certainly possesses the maximum possible rank of 365. This example suggests that low-rank approximation, where we approximate a general matrix by one of much lower rank, could be a powerful tool. But there are many questions about how to use this tool and how widely applicable it is. How can we compress a low-rank matrix? Can we use this compressed matrix in computations? How good of a low-rank approximation can we find? What even is the rank of a matrix?

What is Rank?

Let’s do a quick review of the foundations of linear algebra. At the core of linear algebra is the notion of a linear combination. A linear combination of vectors v_1,\ldots,v_k is a weighted sum of the form \alpha_1 v_1 + \cdots + \alpha_k v_k, where \alpha_1,\ldots,\alpha_k are scalars2In our case, matrices will be comprised of real numbers, making scalars real numbers as well.. A collection of vectors v_1,\ldots,v_k is linearly independent if there is no linear combination of them which produces the zero vector, except for the trivial 0-weighted linear combination 0 v_1 + \cdots + 0v_k. If v_1,\ldots,v_k are not linearly independent, then they’re linearly dependent.

The column rank of a matrix B is the size of the largest possible subset of B‘s columns which are linearly independent. So if the column rank of B is r, then there is some sub-collection of r columns of B which are linearly independent. There may be some different sub-collections of r columns from B that are linearly dependent, but every collection of r+1 columns is guaranteed to be linearly dependent. Similarly, the row rank is defined to be the maximum size of any linearly independent collection of rows taken from B. A remarkable and surprising fact is that the column rank and row rank are equal. Because of this, we refer to the column rank and row rank simply as the rank; we denote the rank of a matrix B by \operatorname{rank}(B).

Linear algebra is famous for its multiple equivalent ways of phrasing the same underlying concept, so let’s mention one more way of thinking about the rank. Define the column space of a matrix to consist of the set of all linear combinations of its columns. A basis for the column space is a linearly independent collection of elements of the column space of the largest possible size. Every element of the column space can be written uniquely as a linear combination of the elements in a basis. The size of a basis for the column space is called the dimension of the column space. With these last definitions in place, we note that the rank of B is also equal to the dimension of the column space of B. Likewise, if we define the row space of B to consist of all linear combinations of B’s rows, then the rank of B is equal to the dimension of B’s row space.

The upshot is that if a matrix B has a small rank, its many columns (or rows) can be assembled as linear combinations from a much smaller collection of columns (or rows). It is this fact that allows a low-rank matrix to be compressed for algorithmically useful ends.

Rank Factorizations

Suppose we have an m\times n matrix B which is of rank r much smaller than both m and n. As we saw in the introduction, we expect that such a matrix can be compressed to be stored with many fewer than mn entries. How can this be done?

Let’s work backwards and start with the answer to this question and then see why it works. Here’s a fact: a matrix B of rank r can be factored as B = LR^\top, where L is an m\times r matrix and R is an n\times r matrix. In other words, B can be factored as a “thin” matrix L with r columns times a “fat” matrix R^\top with r rows. We use the symbols L and R for these factors to stand for “left” and “right”; we emphasize that L and R are general m\times r and n\times r matrices, not necessarily possessing any additional structure.3Readers familiar with numerical linear algebra may instinctively want to assume that L and R are lower and upper triangular; we do not make this assumption. The fact that we write the second term in this factorization as a transposed matrix “R^\top” is unimportant: We adopt a convention where we write a fat matrix as the transpose of a thin matrix. This notational choice conveniently allows us to easily distinguish between thin and fat matrices in formulas; this choice of notation is far from universal. We call a factorization such as B = LR^\top a rank factorization.4Other terms, such as full rank factorization or rank-revealing factorization, have been used to describe the same concept. A warning is that the term “rank-revealing factorization” can also refer to a factorization which encodes a good low-rank approximation to B rather than a genuine factorization of B.

Rank factorizations are useful as we can compactly store B by storing its factors L and R. This reduces the storage requirements of B to (m+n)r numbers down from mn numbers. For example, if we store a rank factorization of the low-rank approximation \hat{W} from our weather example, we need only store 2,730 numbers rather than 365,000. In addition to compressing B, we shall soon see that one can rapidly perform many calculations from the rank factorization LR^\top = B without ever forming B itself. For these reasons, whenever performing computations with a low-rank matrix, your first step should almost always be to express it using a rank factorization. From there, most computations can be done faster and using less storage.

Having hopefully convinced ourselves of the usefulness of rank factorizations, let us now convince ourselves that every rank-r matrix B does indeed possess a rank factorization B = LR^\top where L and R have r columns. As we recalled in the previous section, since B has rank r, there is a basis of B‘s column space consisting of r vectors \ell_1,\ldots,\ell_r. Collect these r vectors as columns of an m\times r matrix L = \begin{bmatrix} \ell_1 & \cdots & \ell_r\end{bmatrix}. But since the columns of L comprise a basis of the column space of B, every column of B can be written as a linear combination of the columns of L. For example, the jth column b_j of B can be written as a linear combination b_j = R_{j1} \ell_1 + \cdots + R_{jr} \ell_r, where we suggestively use the labels R_{j1},\ldots,R_{jr} for the scalar multiples in our linear combination. Collecting these coefficients into a matrix R with ijth entry R_{ij}, we have constructed a factorization B = LR^\top. (Check this!)

This construction gives us a look at what a rank factorization is doing. The columns of L comprise a basis for the column space of B and the rows of R^\top comprise a basis for the row space of B. Once we fix a “column basis” L, the “row basis” R^\top is comprised of linear combination coefficients telling us how to assemble the columns of B as linear combinations of the columns in L.5It is worth noting here that a slightly more expansive definition of rank factorization has also proved useful. In the more general definition, a rank factorization is a factorization of the form B =LMR^\top where L is m\times r, M is r\times r, and R^\top is r\times n. With this definition, we can pick an arbitrary column basis L and row basis R^\top. Then, there exists a unique nonsingular “middle” matrix M such that B = LMR^\top. Note that this means there exist many different rank factorizations of a matrix since one may pick different column bases L for B.6This non-uniqueness means one should take care to compute a rank factorization which is as “nice” as possible (say, by making sure L and R are as well-conditioned as is possible). If one modifies a rank factorization during the course of an algorithm, one should take care to make sure that the rank factorization remains nice. (As an example of what can go wrong, “unbalancing” between the left and right factors in a rank factorization can lead to convergence problems for optimization problems.)

Now that we’ve convinced ourselves that every matrix indeed has a rank factorization, how do we compute them in practice? In fact, pretty much any matrix factorization will work. If you can think of a matrix factorization you’re familiar with (e.g., LU, QR, eigenvalue decomposition, singular value decomposition,…), you can almost certainly use it to compute a rank factorization. In addition, many dedicated methods have been developed for the specific purpose of computing rank factorizations, with appealing properties that make them well-suited to certain applications.

Let’s focus on one particular example of how a classic matrix factorization, the singular value decomposition, can be used to get a rank factorization. Recall that the singular value decomposition (SVD) of a (real) matrix B is a factorization B = U\Sigma V^\top where U and V are m\times m and n\times n (real) orthogonal matrices and \Sigma is a (possibly rectangular) diagonal matrix with nonnegative, descending diagonal entries \sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{\min(m,n)}. These diagonal entries are referred to as the singular values of the matrix B. From the definition of rank, we can see that the rank of a matrix B is equal to its number of nonzero singular values. With this observation in hand, a rank factorization of B can be obtained by letting L be the first r columns of U and R^\top be the first r rows of \Sigma V^\top (note that the remaining rows of \Sigma V^\top are zero).
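To make this concrete, here is a minimal sketch in Python with NumPy (the function name, the test matrix, and the tolerance used to decide which singular values count as nonzero are my own illustrative choices):

```python
import numpy as np

def rank_factorization_from_svd(B, tol=1e-12):
    """Return L (m x r) and R (n x r) with B = L @ R.T, computed from an SVD."""
    U, sigma, Vt = np.linalg.svd(B, full_matrices=False)
    r = int(np.sum(sigma > tol * sigma[0]))   # number of (numerically) nonzero singular values
    L = U[:, :r]                              # first r columns of U
    R = (np.diag(sigma[:r]) @ Vt[:r, :]).T    # R^T = first r rows of Sigma V^T
    return L, R

# Example: a 1000 x 365 matrix of exact rank 2, like the weather example.
rng = np.random.default_rng(0)
B = rng.standard_normal((1000, 2)) @ rng.standard_normal((2, 365))
L, R = rank_factorization_from_svd(B)
print(L.shape, R.shape)         # (1000, 2) (365, 2)
print(np.allclose(B, L @ R.T))  # True
```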

Computing with Rank Factorizations

Now that we have a rank factorization in hand, what is it good for? A lot, in fact. We’ve already seen that one can store a low-rank matrix expressed as a rank factorization using only (m+n)r numbers, rather than the mn numbers needed to store all of its entries. Similarly, if we want to compute the matrix-vector product Bx for a vector x of length n, we can compute this product as Bx = L(R^\top x). This reduces the operation count down from 2mn operations to 2(m+n)r operations using the rank factorization. As a general rule of thumb, when we have something expressed as a rank factorization, we can usually expect to reduce our operation count (and storage costs) from something proportional to mn (or worse) down to something proportional to m+n.
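As a quick illustration of these savings (a NumPy sketch; the dimensions below are arbitrary), a matrix-vector product computed through the factors never touches an m\times n array:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 10_000, 8_000, 20
L = rng.standard_normal((m, r))
R = rng.standard_normal((n, r))
x = rng.standard_normal(n)

# Matrix-vector product via the rank factorization: roughly 2(m + n)r operations.
y = L @ (R.T @ x)

# Forming B = L @ R.T explicitly would require mn numbers of storage and
# roughly 2mn operations per product; we never build it.
```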

Let’s try something more complicated. Say we want to compute an SVD B = U\Sigma V^\top of B. In the previous section, we computed a rank factorization of B using an SVD, but suppose now we computed B = LR^\top in some other way. Our goal is to “upgrade” the general rank factorization B = LR^\top into an SVD of B. Computing the SVD of a general matrix B requires \mathcal{O}(mn\min(m,n)) operations (expressed in big O notation). Can we do better? Unfortunately, there’s a big roadblock for us: We need m^2+n^2 operations even to write down the matrices U and V, which already prevents us from achieving an operation count proportional to m+n like we’re hoping for. Fortunately, in most applications, only the first r columns of U and V are important. Thus, we can change our goal to compute a so-called economy SVD of B, which is a factorization B = \hat{U}\hat{\Sigma}\hat{V}^\top, where \hat{U} and \hat{V} are m\times r and n\times r matrices with orthonormal columns and \hat{\Sigma} is an r\times r diagonal matrix listing the nonzero singular values of B in decreasing order.

Let’s see how to upgrade a rank factorization B = LR^\top into an economy SVD B = \hat{U}\hat{\Sigma}\hat{V}^\top. Let’s break our procedure into steps:

  1. Compute (economy7The economy QR factorization of an m\times r thin matrix C is a factorization C=QR where Q is an m\times r matrix with orthonormal columns and R is an r\times r upper triangular matrix. The economy QR factorization is sometimes also called a thin or compact QR factorization, and can be computed in \mathcal{O}(mr^2) operations.) QR factorizations of L and R: L = Q_1T_1 and R = Q_2 T_2. Reader beware: We denote the “R” factors in the QR factorizations of L and R by T_1 and T_2, as we have already used the letter R to denote the second factor in our rank factorization.
  2. Compute the small matrix S = T_1T_2^\top.
  3. Compute an SVD of S=\tilde{U}\hat{\Sigma}\tilde{V}^\top.
  4. Set \hat{U} := Q_1\tilde{U} and \hat{V} := Q_2\tilde{V}.

By following the procedure line-by-line, one can check that indeed the matrices \hat{U} and \hat{V} have orthonormal columns and B = \hat{U}\hat{\Sigma}\hat{V}^\top, so this procedure indeed computes an economy SVD of B. Let’s see why this approach is also faster. Let’s count operations line-by-line:

  1. Economy QR factorizations of an m\times r and an n\times r matrix require \mathcal{O}(mr^2) and \mathcal{O}(nr^2) operations.
  2. The product of two r\times r matrices requires \mathcal{O}(r^3) operations.
  3. The SVD of an r\times r matrix requires \mathcal{O}(r^3) operations.
  4. The products of an m\times r and an n\times r matrix by r\times r matrices require \mathcal{O}(mr^2) and \mathcal{O}(nr^2) operations.

Accounting for all the operations, we see the operation count is \mathcal{O}((m+n)r^2), a significant improvement over the \mathcal{O}(mn\min(m,n)) operations for a general matrix.8We can ignore the term of order \mathcal{O}(r^3) since r \le m,n so r^3 is \mathcal{O}((m+n)r^2).
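Here is a minimal sketch of the four-step procedure above in NumPy (the function name and the random test factors are illustrative choices of mine):

```python
import numpy as np

def economy_svd_from_rank_factorization(L, R):
    """Given B = L @ R.T with L (m x r) and R (n x r), return (U_hat, sigma, V_hat)
    with B = U_hat @ np.diag(sigma) @ V_hat.T, in O((m + n) r^2) operations."""
    Q1, T1 = np.linalg.qr(L)                     # step 1: economy QR of L
    Q2, T2 = np.linalg.qr(R)                     # step 1: economy QR of R
    S = T1 @ T2.T                                # step 2: small r x r matrix
    U_tilde, sigma, Vt_tilde = np.linalg.svd(S)  # step 3: SVD of the small matrix
    return Q1 @ U_tilde, sigma, Q2 @ Vt_tilde.T  # step 4: assemble U_hat and V_hat

# Quick check on a random rank factorization.
rng = np.random.default_rng(2)
L = rng.standard_normal((500, 7))
R = rng.standard_normal((300, 7))
U_hat, sigma, V_hat = economy_svd_from_rank_factorization(L, R)
print(np.allclose(L @ R.T, U_hat @ np.diag(sigma) @ V_hat.T))  # True
```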

As the previous examples show, many (if not most) things we want to compute from a low-rank matrix B can be dramatically more efficiently computed using its rank factorization. The strategy is simple in principle, but can be subtle to execute: Whatever you do, avoid explicitly computing the product LR^\top at all costs. Instead, compute with the matrices L and R directly, only operating on m\times r, n\times r, and r\times r matrices.

Another important type of computation one can perform with low-rank matrices is the low-rank update, where we have already solved a problem for a matrix A and we want to re-solve it efficiently with the matrix A+B where B has low rank. If B is expressed in a rank factorization, very often we can do this efficiently as well, as we discuss in the following bonus section. As this is somewhat more niche, the uninterested reader should feel free to skip this and continue to the next section.

Low-rank Updates
Suppose we’ve solved a system of linear equations Ax = b by computing an LU factorization of the n\times n matrix A. We now wish to solve the system of linear equations (A+B)y = c, where B is a low-rank matrix expressed as a rank factorization B = LR^\top. Our goal is to do this without recomputing a new LU factorization from scratch. 

The first solution uses the Sherman-Morrison-Woodbury formula, which has a nice proof via the Schur complement and block Gaussian elimination which I described here. In our case, the formula yields

(3)   \begin{equation*} (A+B)^{-1} = (I_n-(A^{-1}L)(I_r+R^\top(A^{-1}L))^{-1}R^\top)A^{-1}, \end{equation*}

where I_n and I_r denote the n\times n and r\times r identity matrices. This formula can easily be verified by multiplying by A+B and confirming one indeed recovers the identity matrix. This formula suggests the following approach to solving (A+B)y = c. First, use our already-computed LU factorization for A to compute S:=A^{-1}L. (This involves solving r linear systems of the form As = \ell to compute each column s of S from each column \ell of L.) We then compute an LU factorization of the much smaller r\times r matrix I_r+R^\top S. Finally, we use our LU factorization of A once more to compute z = A^{-1}c, from which our solution y = (A+B)^{-1}c is given by

(4)   \begin{equation*} y = (I_n-(A^{-1}L)(I_r+R^\top(A^{-1}L))^{-1}R^\top)A^{-1}c = z - S((I_r+R^\top S)^{-1}(R^\top z)). \end{equation*}

The net result is that we solved our rank-r-updated linear system using r+1 solutions of the original linear system with no need to recompute any LU factorizations of n\times n matrices. We’ve reduced the solution of the system (A+B)y=c to an operation count of \mathcal{O}(n^2r), which is dramatically better than the \mathcal{O}(n^3) operation count of recomputing the LU factorization from scratch.
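Here is a sketch of this update in Python, reusing an LU factorization of A via SciPy’s lu_factor and lu_solve (the matrix sizes and the diagonal shift that keeps the test matrices safely nonsingular are arbitrary choices for illustration):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(3)
n, r = 1_000, 5
A = rng.standard_normal((n, n)) + 10 * n * np.eye(n)  # shifted so A and A + B stay nonsingular
L = rng.standard_normal((n, r))
R = rng.standard_normal((n, r))
c = rng.standard_normal(n)

lu_A = lu_factor(A)   # the expensive O(n^3) step, done once for A

# Solve (A + L R^T) y = c using only solves with A, following Eqs. (3) and (4).
S = lu_solve(lu_A, L)                 # S = A^{-1} L: r solves, O(n^2) each
z = lu_solve(lu_A, c)                 # z = A^{-1} c: one more solve
small = np.eye(r) + R.T @ S           # the r x r matrix I_r + R^T A^{-1} L
y = z - S @ np.linalg.solve(small, R.T @ z)

print(np.allclose((A + L @ R.T) @ y, c))  # True
```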

This simple example demonstrates a broader pattern: If a matrix problem took \mathcal{O}(n^3) operations to solve originally, one can usually solve the problem after a rank-r update in an additional time of only something like \mathcal{O}(n^2r) operations.9Sometimes, this goal of \mathcal{O}(n^2r) can be overly optimistic. For symmetric eigenvalue problems, for instance, the operation count may be a bit larger by a (poly)logarithmic factor—say something like \mathcal{O}(n^2r\log n). An operation count like this still represents a dramatic improvement over the operation count \mathcal{O}(n^3) of recomputing from scratch. For instance, not only can we solve rank-r-updated linear systems in \mathcal{O}(n^2r) operations, but we can actually update the LU factorization itself in \mathcal{O}(n^2r) operations. Similar updates exist for Cholesky, QR, symmetric eigenvalue, and singular value decompositions to update these factorizations in \mathcal{O}(n^2r) operations.

An important caveat is that, as always with linear algebraic computations, it’s important to read the fine print. There are many algorithms for computing low-rank updates to different matrix factorizations with dramatically different accuracy properties. Just because in principle rank-updated versions of these factorizations can be computed doesn’t mean it’s always advisable. With this qualification stated, these ways of updating matrix computations with low-rank updates can be a powerful tool in practice and reinforce the computational benefits of low-rank matrices expressed via rank factorizations.

Low-rank Approximation

As we’ve seen, computing with low-rank matrices expressed as rank factorizations can yield significant computational savings. Unfortunately, many matrices in applications are not low-rank. In fact, even if a matrix in an application is low-rank, the small rounding errors we incur in storing it on a computer may destroy the matrix’s low rank, increasing its rank to the maximum possible value of \min(m,n). The solution in this case is straightforward: approximate our high-rank matrix with a low-rank one, which we express in algorithmically useful form as a rank factorization.

Here’s one simple way of constructing low-rank approximations. Start with a matrix B and compute a singular value decomposition of B, B = U\Sigma V^\top. Recall from two sections previous that the rank of the matrix B is equal to its number of nonzero singular values. But what if B’s singular values aren’t exactly zero, but they’re very small? It seems reasonable to expect that B is nearly low-rank in this case. Indeed, this intuition is true. To approximate B by a low-rank matrix, we can truncate B’s singular value decomposition by setting B’s small singular values to zero. If we zero out all but the r largest singular values of B, this procedure results in a rank-r matrix \hat{B} which approximates B. If the singular values that we zeroed out were tiny, then \hat{B} will be very close to B and the low-rank approximation is accurate. This matrix \hat{B} is called an r-truncated singular value decomposition of B, and it is easy to represent it using a rank factorization once we have already computed an SVD of B.
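Here is a minimal sketch of forming an r-truncated SVD directly as a rank factorization (NumPy; the target rank, the test matrix, and the size of the perturbation are arbitrary illustrative choices):

```python
import numpy as np

def truncated_svd(B, r):
    """Rank-r approximation of B, returned as a rank factorization (L, R)."""
    U, sigma, Vt = np.linalg.svd(B, full_matrices=False)
    L = U[:, :r] * sigma[:r]   # absorb the kept singular values into the left factor
    R = Vt[:r, :].T
    return L, R                # the approximation is B_r = L @ R.T

# A tiny perturbation of a rank-3 matrix has full rank but is nearly rank 3.
rng = np.random.default_rng(4)
B = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 100))
B += 1e-6 * rng.standard_normal((200, 100))
L, R = truncated_svd(B, 3)
print(np.linalg.matrix_rank(B))                         # 100
print(np.linalg.norm(B - L @ R.T) / np.linalg.norm(B))  # roughly the size of the perturbation
```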

It is important to remember that low-rank approximations are, just as the name says, approximations. Not every matrix is well-approximated by one of small rank. A matrix may be excellently approximated by a rank-100 matrix and horribly approximated by a rank-90 matrix. If an algorithm uses a low-rank approximation as a building block, then the approximation error (the difference between B and its low-rank approximation \hat{B}) and its propagation through further steps of the algorithm need to be analyzed and controlled along with other sources of error in the procedure.

Despite this caveat, low-rank approximations can be startlingly effective. Many matrices occurring in practice can be approximated to negligible error by a matrix with very modestly-sized rank. We shall return to this surprising ubiquity of approximately low-rank matrices at the end of the article.

We’ve seen one method for computing low-rank approximations, the truncated singular value decomposition. As we shall see in the next section, the truncated singular value decomposition produces excellent low-rank approximations, the best possible in a certain sense, in fact. As we mentioned above, almost every matrix factorization can be used to compute rank factorizations. Can these matrix factorizations also compute high quality low-rank approximations?

Let’s consider a specific example to see the underlying ideas. Say we want to compute a low-rank approximation to a matrix B by a QR factorization. To do this, we want to compute a QR factorization B = QR and then throw away all but the first r columns of Q and the first r rows of R. This will be a good approximation if the rows we discard from R are “small” compared to the rows of R we keep. Unfortunately, this is not always the case. As a worst case example, if the first r columns of B are zero, then the first r rows of R will definitely be zero and the low-rank approximation computed this way is worthless.

We need to modify something to give QR factorization a fighting chance for computing good low-rank approximations. The simplest way to do this is by using column pivoting, where we shuffle the columns of B around to bring columns of the largest size “to the front of the line” as we compute the QR factorization. QR factorization with column pivoting produces excellent low-rank approximations in a large number of cases, but it can still give poor-quality approximations for some special examples. For this reason, numerical analysts have developed so-called strong rank-revealing QR factorizations, such as the one developed by Gu and Eisenstat, which are guaranteed to compute quite good low-rank approximations for every matrix B. Similarly, there exist strong rank-revealing LU factorizations which can compute good low-rank approximations using LU factorization.
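For illustration, here is a sketch of a rank-r approximation obtained from a column-pivoted QR factorization using scipy.linalg.qr with pivoting=True (this is plain column pivoting, not the strong rank-revealing variants just mentioned; the nearly low-rank test matrix is an arbitrary example):

```python
import numpy as np
from scipy.linalg import qr

def pivoted_qr_approximation(B, r):
    """Rank-r approximation of B from a column-pivoted QR factorization."""
    Q, T, perm = qr(B, mode='economic', pivoting=True)  # B[:, perm] = Q @ T
    B_hat = np.empty_like(B)
    B_hat[:, perm] = Q[:, :r] @ T[:r, :]                 # keep r columns of Q and r rows of T
    return B_hat

rng = np.random.default_rng(5)
B = rng.standard_normal((300, 4)) @ rng.standard_normal((4, 200))
B += 1e-8 * rng.standard_normal((300, 200))
B_hat = pivoted_qr_approximation(B, 4)
print(np.linalg.norm(B - B_hat) / np.linalg.norm(B))     # small, near the noise level
```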

The upshot is that most matrix factorizations you know and love can be used to compute good-quality low-rank approximations, possibly requiring extra tricks like row or column pivoting. But this simple summary, and the previous discussion, leaves open important questions: what do we mean by good-quality low-rank approximations? How good can a low-rank approximation be?

Best Low-rank Approximation

As we saw in the last section, one way to approximate a matrix by a lower rank matrix is by a truncated singular value decomposition. In fact, in some sense, this is the best way of approximating a matrix by one of lower rank. This fact is encapsulated in a theorem commonly referred to as the Eckart–Young theorem, though the essence of the result is originally due to Schmidt and the modern version of the result to Mirsky.10A nice history of the Eckart–Young theorem is provided in the book Matrix Perturbation Theory by Stewart and Sun.

But what do we mean by best approximation? One ingredient we need is a way of measuring how big the discrepancy between two matrices is. Let’s define a measure of the size of a matrix E which we will call E’s norm, which we denote as \|E\|. If B is a matrix and \hat{B} is a low-rank approximation to it, then \hat{B} is a good approximation to B if the norm \|B-\hat{B}\| is small. There might be many different ways of measuring the size of the error, but we have to insist on a couple of properties on our norm \|\cdot\| for it to really define a sensible measure of size. For instance, if the norm of a matrix E is \|E\| = 5, then the norm of 10E should be \|10E\| = 10\|E\| = 50. The properties we require a norm to have are listed on the Wikipedia page for norms. We shall also insist on one more property for our norm: the norm should be unitarily invariant.11Note that every unitarily invariant norm is a special type of vector norm (called a symmetric gauge function) evaluated on the singular values of the matrix. What this means is the norm of a matrix E remains the same if it is multiplied on the left or right by an orthogonal matrix. This property is reasonable since multiplication by orthogonal matrices geometrically represents a rotation or reflection12This is not true in dimensions higher than 2, but it gives the right intuition that orthogonal matrices preserve distances. which preserves distances between points, so it makes sense that we should demand that the size of a matrix as measured by our norm does not change by such multiplications. Two important and popular matrix norms satisfy the unitarily invariant property: the Frobenius norm \| E\|_{\rm F} = \sqrt{\sum_{ij} |E_{ij}|^2} and the spectral (or operator 2-) norm \| E \|_{\rm op} = \sigma_{\rm max}(E), which measures the largest singular value.13Both the Frobenius and spectral norms are examples of an important subclass of unitarily invariant norms called Schatten norms. Another example of a Schatten norm, important in matrix completion, is the nuclear norm (sum of the singular values).

With this preliminary out of the way, the Eckart–Young theorem states that the singular value decomposition of B truncated to rank r is the closest to B among all rank-r matrices when distances are measured using any unitarily invariant norm \|\cdot\|. If we let B_r denote the r-truncated singular value decomposition of B, then the Eckart–Young theorem states that

(5)   \begin{equation*} \| B - B_r \| \le \|B - C\| \mbox{ for all matrices $C$ of rank at most $r$}. \end{equation*}

Less precisely, the r-truncated singular value decomposition is the best rank-r approximation to a matrix.

Let’s unpack the Eckart–Young theorem using the spectral and Frobenius norms. In this context, a brief calculation and the Eckart–Young theorem prove that for any matrix C of rank at most r, we have

(6)   \begin{equation*} \| B - C \|_{\rm op} \ge \sigma_{r+1},\quad \| B - C\|_{\rm F} \ge \sqrt{\sum_{j>r} \sigma_j^2}, \end{equation*}

where \sigma_1,\sigma_2,\ldots are the singular values of B. This bound is quite intuitive. When we measure the error in the spectral norm, the error in low-rank approximation is “small” precisely when each singular value we zero out is “small”. When we measure the error in the Frobenius norm, the error in low-rank approximation is “small” when all of the singular values we zero out are “small” in aggregate when squared and added together.
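Here is a quick numerical sanity check of Eq. (6), evaluated at the truncated SVD itself, where both inequalities hold with equality (a NumPy sketch with an arbitrary random matrix; in zero-based indexing, sigma[r] is \sigma_{r+1}):

```python
import numpy as np

rng = np.random.default_rng(6)
B = rng.standard_normal((50, 40))
U, sigma, Vt = np.linalg.svd(B, full_matrices=False)

r = 10
B_r = U[:, :r] @ np.diag(sigma[:r]) @ Vt[:r, :]   # r-truncated SVD

# Spectral-norm error is sigma_{r+1}; Frobenius-norm error is the root-sum-square of the tail.
print(np.isclose(np.linalg.norm(B - B_r, 2), sigma[r]))                             # True
print(np.isclose(np.linalg.norm(B - B_r, 'fro'), np.sqrt(np.sum(sigma[r:] ** 2))))  # True
```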

The Eckart–Young theorem shows that possessing a good low-rank approximation is equivalent to the singular values rapidly decaying.14At least when measured in unitarily invariant norms. A surprising result shows that even the identity matrix, whose singular values are all equal to one, has good low-rank approximations in the maximum entrywise absolute value norm; see, e.g., Theorem 1.0 in this article. If a matrix does not have nice singular value decay, no good low-rank approximation exists, computed by the r-truncated SVD or otherwise.

Why Are So Many Matrices (Approximately) Low-rank?

As we’ve seen, we can perform computations with low-rank matrices represented using rank factorizations much faster than we can with general matrices. But all of this would be a moot point if low-rank matrices rarely occurred in practice. In fact, precisely the opposite is true: Approximately low-rank matrices occur all the time in practice.

Sometimes, exact low-rank matrices appear for algebraic reasons. For instance, when we perform one step of Gaussian elimination to compute an LU factorization, the lower right portion of the eliminated matrix, the so-called Schur complement, is a rank-one update to the original matrix. In such cases, a rank-r matrix might appear in a computation when one performs r steps of some algebraic process: The appearance of low-rank matrices in such cases is unsurprising.

However, often, matrices appearing in applications are (approximately) low-rank for analytic reasons instead. Consider the weather example from the start again. One might reasonably model the temperature on Earth as a smooth function T(\cdot,\cdot) of position x and time t. If we then let x_i denote the position on Earth of station i and t_j the time representing the jth day of a given year, then the entries of the W matrix are given by W_{ij} = T(x_i,t_j). As discussed in my article on smoothness and degree of approximation, a smooth function of one variable can be excellently approximated by, say, a polynomial of low degree. Analogously, a smooth function depending on two arguments, such as our function T(\cdot,\cdot), can be excellently approximated by a separable expansion of rank r:

(7)   \begin{equation*} T(x,t) \approx \phi_1(x) \psi_1(t) + \cdots + \phi_r(x) \psi_r(t). \end{equation*}

Similar to functions of a single variable, the degree to which a function T(\cdot,\cdot) can be approximated by a separable function of small rank depends on the degree of smoothness of the function T(\cdot,\cdot). Assuming the function T(\cdot,\cdot) is quite smooth, T(\cdot,\cdot) can be approximated by a separable expansion of small rank r. This leads immediately to a low-rank approximation to the matrix W given by the rank factorization

(8)   \begin{equation*} W \approx \begin{bmatrix} \phi_1(x_1) & \cdots & \phi_r(x_1) \\ \vdots & \ddots & \vdots \\ \phi_1(x_{1000}) & \cdots & \phi_r(x_{1000}) \end{bmatrix}\begin{bmatrix} \psi_1(t_1) & \cdots & \psi_r(t_1) \\ \vdots & \ddots & \vdots \\ \psi_1(t_{365}) & \cdots & \psi_r(t_{365}) \end{bmatrix}^\top. \end{equation*}

Thus, in the context of our weather example, we see that the data matrix can be expected to be low-rank under the reasonable-sounding assumption that the temperature depends smoothly on space and time.

What does this mean in general? Let’s speak informally. Suppose that the ijth entries of a matrix B are samples f(x_i,y_j) from a smooth function f(\cdot,\cdot) for points x_1,\ldots,x_m and y_1,\ldots,y_n. Then we can expect that B will be approximately low-rank. From a computational point of view, we don’t need to know a separable expansion for the function f(\cdot,\cdot) or even the form of the function f(\cdot,\cdot) itself: If the smooth function f(\cdot,\cdot) exists and B is sampled from it, then B is approximately low-rank and we can find a low-rank approximation for B using the truncated singular value decomposition.15Note here an important subtlety. A more technically precise version of what we’ve stated here is that: if f(\cdot,\cdot) depending on inputs x and y is sufficiently smooth for (x,y) in the product of compact regions \Omega_x and \Omega_y, then an m\times n matrix B_{ij} = f(x_i,y_j) with x_i \in \Omega_x and y_j \in \Omega_y will be low-rank in the sense that it can be approximated to accuracy \epsilon by a rank-r matrix where r grows slowly as m and n increase and \epsilon decreases. Note that, phrased this way, the low-rank property of B is asymptotic in the size m and n and the accuracy \epsilon. If f(\cdot,\cdot) is not smooth on the entirety of the domain \Omega_x\times \Omega_y or the size of the domains \Omega_x and \Omega_y grow with m and n, these asymptotic results may no longer hold. And if m and n are small enough or \epsilon is large enough, B may not be well approximated by a matrix of small rank. Only when there are enough rows and columns will meaningful savings from low-rank approximation be possible.
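To see this phenomenon numerically, here is a small sketch that samples a matrix from a smooth (and entirely made-up) function of two variables, in the spirit of the weather example, and prints the leading singular values; their rapid decay is what makes accurate low-rank approximation possible:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1000)   # "stations"
t = np.linspace(0.0, 1.0, 365)    # "days"

# An arbitrary smooth function of (x, t); any sufficiently smooth choice behaves similarly.
W = np.exp(-np.subtract.outer(x, t) ** 2) * np.cos(2 * np.pi * np.add.outer(x, t))

sigma = np.linalg.svd(W, compute_uv=False)
print(sigma[:10] / sigma[0])      # the singular values decay very rapidly
```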

This “smooth function” explanation for the prevalence of low-rank matrices is the reason for the appearance of low-rank matrices in fast multipole method-type fast algorithms in computational physics and has been proposed16This article considers piecewise analytic functions rather than smooth functions; the principle is more-or-less the same. as a general explanation for the prevalence of low-rank matrices in data science.

(Another explanation for low-rank structure for highly structured matrices like Hankel, Toeplitz, and Cauchy matrices17Computations with these matrices can often also be accelerated with other approaches than low-rank structure; see my post on the fast Fourier transform for a discussion of fast Toeplitz matrix-vector products. which appear in control theory applications has a different explanation involving a certain Sylvester equation; see this lecture for a great explanation.)

Upshot: A matrix is low-rank if it has many fewer linearly independent columns than columns. Such matrices can be efficiently represented using rank-factorizations, which can be used to perform various computations rapidly. Many matrices appearing in applications which are not genuinely low-rank can be well-approximated by low-rank matrices; the best possible such approximation is given by the truncated singular value decomposition. The prevalence of low-rank matrices in diverse application areas can partially be explained by noting that matrices sampled from smooth functions are approximately low-rank.

Why Randomized Algorithms?

In this post, I want to answer a simple question: how can randomness help in solving a deterministic (non-random) problem?

Let’s start by defining some terminology. An algorithm is just a precisely defined procedure to solve a problem. For example, one algorithm to compute the integral of a function f on the interval [0,1] is to pick 100 equispaced points 0 = x_1 < \tfrac{1}{100} = x_2 < \tfrac{2}{100} = x_3 <\cdots<\tfrac{99}{100}=x_{100} on this interval and output the Riemann sum \tfrac{1}{100} \sum_{j=1}^{100} f(x_j). A randomized algorithm is simply an algorithm which uses, in some form, randomly chosen quantities. For instance, a randomized algorithm1This randomized algorithm is an example of the strategy of Monte Carlo integration, which has proved particularly effective at computing integrals of functions on high-dimensional spaces. for computing integrals would be to pick 100 random points X_1,\ldots,X_{100} uniformly distributed in the interval [0,1] and output the average \tfrac{1}{100} \sum_{j=1}^{100} f(X_j). To help distinguish, an algorithm which does not use randomness is called a deterministic algorithm.
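Here is a sketch of both integration rules just described, applied to an arbitrary test integrand (the function and the random seed are illustrative choices):

```python
import numpy as np

def f(x):
    return np.exp(x)            # test integrand; the exact integral over [0, 1] is e - 1

# Deterministic algorithm: Riemann sum on 100 equispaced points.
x = np.arange(100) / 100
riemann = np.mean(f(x))

# Randomized algorithm: Monte Carlo average of 100 uniform random points.
rng = np.random.default_rng(7)
X = rng.uniform(0.0, 1.0, size=100)
monte_carlo = np.mean(f(X))

exact = np.e - 1
print(riemann - exact, monte_carlo - exact)   # both errors are small
```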

To address the premise implicit in our central question, there are problems where randomized algorithms provably outperform the best possible deterministic algorithms.2Examples abound in online decision making, distributed computation, linear algebra in the matrix-vector query model, and in simple computational models like single tape Turing machines. Additionally, even for problems for which randomized algorithms have not been formally proven to outperform deterministic ones, the best randomized algorithms we know for many problems dramatically outperform the best-known deterministic algorithms. For example, testing whether an n-digit number is prime takes roughly n^6 operations using the best-known deterministic algorithm and only roughly n^2 operations allowing randomness. The power of randomness extends beyond discrete problems typically considered by computer scientists to continuous problems of interest to computational mathematicians and scientists. For instance, the (asymptotically) fastest known algorithms to solve Laplacian linear systems of equations use random sampling as a key ingredient. The workhorse optimization routines in machine learning are mostly variants of stochastic gradient descent, a randomized algorithm. To list off a few more, randomized algorithms have proven incredibly effective in solving eigenvalue and singular value problems, approximating the trace of a very large matrix, computing low-rank approximations, evaluating integrals, and simulating subgrid phenomena in fluid problems.

This may seem puzzling and even paradoxical. There is no randomness in, say, a system of linear equations, so why should introducing randomness help solve such a problem? It can seem very unintuitive that leaving a decision in an algorithm up to chance would be better than making an informed (and non-random) choice. Why do randomized algorithms work so well; what is the randomness actually buying us?

A partial answer to this question is that it can be very costly to choose the best possible option at runtime for an algorithm. Often, a random choice is “good enough”.

A Case Study: Quicksort

Consider the example of quicksort. Given a list of N integers to sort in increasing order, quicksort works by selecting an element to be the pivot and then dividing the elements of the list into groups larger and smaller than the pivot. One then recurses on the two groups, ending up with a sorted list.

The best choice of pivot is the median of the list, so one might naturally think that one should use the median as the pivot for quicksort. The problem with this reasoning is that finding the median is time-consuming; even using the fastest possible median-finding algorithm,3There is an algorithm guaranteed to find the median in \mathcal{O}(N) time, which is asymptotically as fast as this problem could possibly be solved since any median-finding algorithm must in general look at the entire list. quicksort with exact median pivot selection isn’t very quick. However, one can show quicksort with a pivot selected (uniformly) at random achieves the same \mathcal{O}(N\log N) expected runtime as quicksort with optimal pivot selection and is much faster in practice.4In fact, \mathcal{O}(N \log N) is the fastest possible runtime for any comparison sorting algorithm.

But perhaps we have given up on fast deterministic pivot selection in quicksort too soon. What if we just pick the first entry of the list as the pivot? This strategy usually works fairly well too, but it runs into an unfortunate shortfall: if one feeds in a sorted list, the first-element pivot selection is terrible, resulting in an \mathcal{O}(N^2) algorithm. If one feeds in a list in a random order, the first-element pivot selection has an \mathcal{O}(N \log N) expected runtime,5Another way of implementing randomized quicksort is to first randomize the list and then always pick the first entry as the pivot. This is fully equivalent to using quicksort with random pivot selection in the given ordering. Note that randomizing the order of the list before it’s sorted is still a randomized algorithm because, naturally, randomness is needed to randomize order. but for certain specific orderings (in this case, e.g., the sorted ordering) the runtime is a disappointing \mathcal{O}(N^2). The first-element pivot selection is particularly bad since its nemesis ordering is the common-in-practice already-sorted ordering. But other simple deterministic pivot selections are equally bad. If one selects, for instance, the pivot to be the entry in position \lfloor N/2 \rfloor, then we can still come up with an ordering of the input list that makes the algorithm run in time \mathcal{O}(N^2). And because the algorithm is deterministic, it runs in \mathcal{O}(N^2) time every time with such-ordered inputs. If the lists we need to sort in our code just happen to be bad for our deterministic pivot selection strategy, quicksort will be slow every time.
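For concreteness, here is a toy sketch of quicksort with uniformly random pivot selection (plain Python, written for clarity rather than speed; it is not an in-place implementation):

```python
import random

def quicksort(lst):
    """Quicksort with a uniformly random pivot: expected O(N log N) comparisons."""
    if len(lst) <= 1:
        return lst
    pivot = random.choice(lst)                       # randomized pivot selection
    smaller = [x for x in lst if x < pivot]
    equal = [x for x in lst if x == pivot]
    larger = [x for x in lst if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))           # [1, 1, 2, 3, 4, 5, 6, 9]
# An already-sorted input is no longer a bad case: the pivot is chosen at random
# regardless of how the input happens to be ordered.
```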

We typically analyze algorithms based on their worst-case performance. And this can often be unfair to algorithms. Some algorithms work excellently in practice but are horribly slow on manufactured examples which never occur in “real life”.6The simplex algorithm is an example of an algorithm which is often highly effective in practice, but can take a huge amount of time on some pathological cases. But average-case analysis of algorithms, where one measures the average performance of an algorithm over a distribution of inputs, can be equally misleading. If one were swayed by the average-case analysis of the previous paragraph for quicksort with first-element pivot selection, one would say that this should be an effective pivot selection in practice. But sorting an already-sorted or nearly-sorted list occurs all the time in practice: how many times do programmers read in a sorted list from a data source and sort it again just to be safe? In real life, we don’t encounter random inputs to our algorithms: we encounter a very specific distribution of inputs specific to the application we use the algorithm in. An algorithm with excellent average-case runtime analysis is of little use to me if it is consistently and extremely slow every time I run it on my input data on the problem I’m working on. This discussion isn’t meant to disparage average-case analysis (indeed, very often the “random input” model is not too far off), but to explain why worst-case guarantees for algorithms can still be important.

To summarize, often we have a situation where we have an algorithm which makes some choice (e.g. pivot selection). A particular choice usually works well for a randomly chosen input, but can be stymied by certain specific inputs. However, we don’t get to pick which inputs we receive, and it is fully possible the user will always enter inputs which are bad for our algorithms. Here is where randomized algorithms come in. Since we are unable to randomize the input, we instead randomize the algorithm.

A randomized algorithm can be seen as a random selection from a collection of deterministic algorithms. Each individual deterministic algorithm may be confounded by an input, but most algorithms in the collection will do well on any given input. Thus, by picking a random algorithm from our collection, the probability of poor algorithmic performance is small. And if we run our randomized algorithm on an input and it happens to do poorly by chance, if we run it again it is unlikely to do so; there isn’t a specific input we can create which consistently confounds our randomized algorithm.

The Game Theory Framework

Let’s think about algorithm design as a game. The game has two players: you and an opponent. The game is played by solving a certain algorithmic problem, and the game has a score which quantifies how effectively the problem was solved. For example, the score might be the runtime of the algorithm, the amount of space required, or the error of a computed quantity. For simplicity of discussion, let’s use runtime as an example for this discussion. You have to pay your opponent a dollar for every second of runtime, so you want to have the runtime be as low as possible.7In this scenario, we consider algorithms which always produce the correct answer to the problem, and it is only a matter of how long it takes to do so. In the field of algorithms, randomized algorithms of this type are referred to as Las Vegas algorithms. Algorithms which can also give the wrong answer with some (hopefully small) probability are referred to as Monte Carlo algorithms.

Each player makes one move. Your move is to present an algorithm A which solves the problem. Your opponent’s move is to provide an input I to your algorithm. Your algorithm A is then run on your opponent’s input I and you pay your opponent a dollar for every second of runtime.

This setup casts algorithm design as a two-person, zero-sum game. It is a feature of such games that a player’s optimal strategy is often mixed, picking a move at random subject to some probability distribution over all possible moves.

To see why this is the case, let’s consider a classic and very simple game: rock paper scissors. If I use a deterministic strategy, then I am forced to pick one choice, say rock, and throw it every time. But then my savvy opponent could pick paper and be assured victory. By randomizing my strategy and selecting between rock, paper, and scissors (uniformly) at random, I improve my outcome to winning a third of the time, losing a third of the time, and drawing a third of the time. Further, there is no way my opponent can improve their luck by adopting a different strategy. This strategy is referred to as minimax optimal: among all (mixed) strategies I could adopt, this strategy has the best possible performance provided my opponent always counters my strategy with the best response strategy they can find.

Algorithm design is totally analogous. For the pivot selection problem, if I pick the fixed strategy of choosing the first entry to be the pivot (analogous to always picking rock), then my opponent could always give me a sorted list (analogous to always picking paper) and I would always lose. By randomizing my pivot selection, my opponent could input the list in whatever order they choose and my pivot selection will always have the excellent (expected) \mathcal{O}(N\log N) runtime characteristic of quicksort. In fact, randomized pivot selection is the minimax optimal pivot selection strategy (assuming pivot selection is non-adaptive in that we don’t choose the pivot based on the values in the list).8This is not to say that quicksort with randomized pivot selection is necessarily the minimax optimal sorting algorithm, but to say that once we have fixed quicksort as our sorting algorithm, randomized pivot selection is the minimax optimal non-adaptive pivot selection strategy for quicksort.

How Much Does Randomness Buy?

Hopefully, now, the utility of randomness is less opaque. Randomization, in effect, allows an algorithm designer to trade an algorithm which runs fast for most inputs all of the time for one which runs fast for all inputs most of the time. They do this by introducing randomized decision-making to hedge against particular bad inputs which could confound their algorithm.

Randomness, of course, is not a panacea. Randomness does not allow us to solve every algorithmic problem arbitrarily fast. But how can we quantify this? Are there algorithmic speed limits for computational problems which no algorithm, randomized or not, can exceed?

The game theoretic framework for randomized algorithms can shed light on this question. Let us return to the framing of last section where you choose an algorithm A, your opponent chooses an input I, and you pay your opponent a dollar for every second of runtime. Since this cost depends on the algorithm and the input, we can denote the cost C(A,I).

Suppose you devise a randomized algorithm for this task, which can be interpreted as selecting an algorithm A from a probability distribution P over the class of all algorithms \mathcal{A} which solve the problem. (Less formally, we assign each possible algorithm a probability of picking it and pick one subject to these probabilities.) Once your opponent sees your randomized algorithm (equivalently, the distribution P), they can come up with the on-average slowest possible input I_{\rm worst} and present it to your algorithm.

But now let’s switch places to see things from your opponent’s perspective. Suppose they choose their own strategy of randomly selecting an input I from among all inputs \mathcal{I} with some distribution Q. Once you see the distribution Q, you can come up with the best possible deterministic algorithm A_{\rm best} to counter their strategy.

The next part gets a bit algebraic. Suppose now we apply your randomized algorithm against your opponent’s strategy. Then, your randomized algorithm could only take longer on average than A_{\rm best} because, by construction, A_{\rm best} is the fastest possible algorithm against the input distribution Q. Symbolically, we have

(1)   \begin{equation*} \mathbb{E}_{I\sim Q} \left[ C(A_{\rm best},I) \right] \le \mathbb{E}_{A\sim P} \left[ \mathbb{E}_{I\sim Q} \left[ C(A, I) \right] \right]. \end{equation*}

Here, \mathbb{E}_{I\sim Q} and \mathbb{E}_{A\sim P} denote the average (expected) value of a random quantity when an input I is drawn randomly with distribution Q or an algorithm A is drawn randomly with distribution P. But we know that I_{\rm worst} is the slowest input for our randomized algorithm, so, on average, our randomized algorithm will take longer on the worst-case input I_{\rm worst} than on a random input from Q. In symbols,

(2)   \begin{equation*} \mathbb{E}_{A\sim P} \left[ \mathbb{E}_{I\sim Q} \left[ C(A, I) \right] \right] \le \mathbb{E}_{A\sim P} \left[ C(A,I_{\rm worst}) \right]. \end{equation*}

Combining Eqs. (1) and (2) gives Yao’s minimax principle9Note that Yao’s minimax principle is a consequence of von Neumann’s minimax theorem in game theory.:

(3)   \begin{equation*} \mathbb{E}_{I\sim Q} \left[ C(A_{\rm best},I) \right] \le \mathbb{E}_{A\sim P} \left[ C(A,I_{\rm worst}) \right]. \end{equation*}

In words, the average performance of any randomized algorithm on its worst-case input can be no better than the average performance of the best possible deterministic algorithm for a distribution of inputs.10In fact, there is a strengthening of Yao’s minimax principle: Eqs. (1) and (2) become equalities when P and Q are taken to be the minimax optimal randomized algorithm and input distribution, respectively. Specifically, assuming the cost is a natural number and we restrict to a finite class of potential inputs, \max_{\mbox{input distributions } Q} \min_{\mbox{algorithms }A} \mathbb{E}_{I\sim Q} [C(A,I)] =\min_{\mbox{randomized algorithms }P} \max_{\mbox{inputs }I}\mathbb{E}_{A\sim P} \left[ C(A,I) \right]. This nontrivial fact is a consequence of the full power of von Neumann’s minimax theorem, which itself is a consequence of strong duality in linear programming. My thanks go to Professor Leonard Schulman for teaching me this result.

This, in effect, allows the algorithm analyst seeking to prove “no randomized algorithm can do better than this” to trade randomness in the algorithm to randomness in the input in their analysis. This is a great benefit because randomness in an algorithm can be used in arbitrarily complicated ways, whereas random inputs can be much easier to understand. To prove any randomized algorithm takes at least cost c to solve a problem, the algorithm analyst can find a distribution Q on which every deterministic algorithm takes at least cost c. Note that Yao’s minimax principle is an analytical tool, not an algorithmic tool. Yao’s principle establishes speed limits on the best possible randomized algorithm: it does not imply that one can just use deterministic algorithms and assume or make the input to be random.

There is a fundamental question concerning the power of randomized algorithms not answered by Yao’s principle that is worth considering: how much better can randomized algorithms be than deterministic ones? Could infeasible problems for deterministic computations be solved tractably by randomized algorithms?

Questions such as these are considered in the field of computational complexity theory. In this context, one can think of a problem as tractably solvable if it can be solved in polynomial time—that is, in an amount of time proportional to a polynomial function of the size of the input. Very roughly, we call the class of all such problems \mathsf{P}. If one in addition allows randomness, we call this class of problems \mathsf{BPP}.11More precisely, \mathsf{BPP} is the class of problems solvable in polynomial time by a Monte Carlo randomized algorithm. The class of problems solvable in polynomial time by a Las Vegas randomized algorithm, which we’ve focused on for the duration of this article, is a possibly smaller class called \mathsf{ZPP}.

It has widely been conjectured that \mathsf{P} = \mathsf{BPP}: all problems that can be tractably solved with randomness can tractably be solved without. There is some convincing evidence for this belief. Thus, provided this conjecture turns out to be true, randomness can give us reductions in operation counts by a polynomial amount in the input size for problems already in \mathsf{P}, but they cannot efficiently solve \mathsf{NP}-hard computational problems like the traveling salesman problem.12Assuming the also quite believable conjecture that \mathsf{P}\ne\mathsf{NP}.

So let’s return to the question which is also this article’s title: why randomized algorithms? Because randomized algorithms are often faster. Why, intuitively, is this the case? Randomization allows us to upgrade simple algorithms that are great for most inputs to randomized algorithms which are great most of the time for all inputs. How much can randomization buy? A randomized algorithm on its worst input can be no better than a deterministic algorithm on a worst-case distribution. Assuming a widely believed and theoretically supported, but not yet proven, conjecture, randomness can’t make intractable problems into tractable ones. Still, there is great empirical evidence that randomness can be an immensely effective algorithm design tool in practice, including for computational math and science problems like trace estimation and solving Laplacian linear systems.

Minimal Rank Completions

I’m delighted to share that my first applied mathematics paper, Minimal Rank Completions for Overlapping Blocks, coauthored with Nithin Govindarajan and Shivkumar Chandrasekaran, has been accepted for publication in Linear Algebra and its Applications. (We also have a preprint on arXiv.) In this post, I wanted to share what our paper is about and share an alternate derivation which ended up being cut from the final paper.

Our paper is concerned with the following question: given a collection of matrices which overlap in a particular way, can we choose the region of simultaneous overlap so as to minimize the rank of each of the individual matrices? This is a multi-objective optimization problem, so a priori it is not clear it has a simultaneous solution. So it was a great surprise to us when we discovered that it does. In fact, we were able to find a tractable algorithm to find all solutions to this multi-objective problem.

Our enormous thanks go to the anonymous reviewer, who went above and beyond by discovering and sketching to us a dramatically shorter and more revealing proof1In particular, our original proof only produced some solutions to this problem, where the reviewer’s improved proof strategy allowed us to characterize all solutions. of our main theorem. We are truly indebted to this extraordinary reviewer for the significantly streamlined and more understandable presentation in the current iteration of our paper.

For the remainder of this post, I want to discuss minimal rank completions in a little more generality and present a solution to a special case of a minimal rank completion problem which can be tractably solved. I will present a solution to this problem which we discovered and which I have not seen elsewhere in the literature. I hope that this perspective will be complementary to existing published solutions to this problem, including the summary of this result in our paper.

Minimal Rank Completion Problems

Minimal rank completions have achieved a lot of buzz in statistics and data science. Given a matrix with unknown entries, it is often very natural to impute the missing entries as those minimizing the rank of the matrix. This is justified by a kind of Occam’s razor argument: big data matrices are very often (approximately) low-rank, so when data is missing, we may assume it takes whatever values are necessary to minimize the rank.

Finding the rank-minimizing choice of missing entries is, in general, NP-hard. However, under certain assumptions, semidefinite programming relaxations can sometimes exactly recover the missing entries.

Alternately, if the missing entries belong to a special pattern, it may be possible to find the rank-minimizing choice of the unknown entries using linear algebraic tools like matrix factorizations. These approaches have significantly more limited domains of applicability than optimization-based approaches, but have advantages in that they

  • can be more tractable as they involve matrix factorizations rather than solving optimization problems;
  • work over arbitrary fields of numbers, such as finite fields appearing in computer science;
  • can find all ways of choosing the entries to minimize the rank, in particular certifying or dis-certifying the uniqueness of the rank-minimizing choice; and
  • are guaranteed to find the rank-minimizer, without any statistical assumptions.

These algebraic methods have their roots in systems theory, integral operators, and rank-structured matrices. The last of these was the application which motivated our interest in the subject.

Our paper concerns an overlapping variant of this problem, where we simultaneously minimize the ranks of several matrices by choosing the entries in the overlap carefully. This problem emerged to us naturally in the construction of certain matrix representations, and we hope it might prove useful in tackling other problems. As it turns out, the overlapping problem can be solved very much in the spirit of a much simpler problem, the block 2\times 2 minimal rank completion problem, which we will spend most of the remainder of this post discussing.

A Solution to the Block 2×2 Case

The block 2\times 2 minimal rank completion problem is as follows: given a partially filled block matrix

(1)   \begin{equation*} \begin{bmatrix} A & B \\ ? & C \end{bmatrix} \end{equation*}

how can the “?” be filled in to minimize the rank of this matrix?

A generalized version of this problem was originally solved by Kaashoek and Woerdeman. An alternate derivation using matrix factorizations is given by Eidelman, Gohberg, and Haimovici, though they only find some of the solutions to this problem. We seek to characterize all ways of choosing the “?” so that the rank is minimized.

Here, I present the solution to this problem which my coauthors and I originally discovered, which is different than the one we summarize in the final version of our paper.2This construction does appear in Section 4 of an initial preprint of ours on rank-structured matrices. This solution is in the spirit of the one by Eidelman, Gohberg, and Haimovici but does produce all solutions.

Let’s start by recalling some facts about linear algebra. Given an m\times n real3The analysis in this discussion holds over any field, but we shall use the real numbers for concreteness. matrix K, one can define its column space, which I will denote \operatorname{Col} K, to be the vector space consisting of all linear combinations of the columns of K. From this column space, or indeed any vector subspace of \mathbb{R}^m, one can extract a basis for this subspace, meaning every vector in the subspace can be uniquely decomposed as a linear combination of elements from the basis. In this discussion, we shall always arrange the basis vectors as columns of a basis matrix P. If we instead consider the row space \operatorname{Row} K of K, then we arrange the elements of a basis as rows of a matrix Q.

Before we solve the problem, let’s reason about the lowest possible rank we could possibly expect for the completed matrix. Since the rank of a matrix can be no smaller than the rank of any of its submatrices, clearly we must have that, for any assignment X of the “?”,

(2)   \begin{equation*} \operatorname{rank} \begin{bmatrix} A & B \\ X & C \end{bmatrix} \ge \operatorname{rank} \begin{bmatrix} A & B\end{bmatrix}. \end{equation*}

However, in general, the rank of the completed matrix must be higher than \operatorname{rank} \begin{bmatrix} A & B\end{bmatrix}, as is exemplified when A and B are both zero but C is not. In addition to the rank of \begin{bmatrix} A & B\end{bmatrix}, we must also account for rows of \begin{bmatrix} X & C\end{bmatrix} which cannot be written as linear combinations of the rows above, no matter how X is chosen. With a little noodling, one can convince oneself that there are always at least \operatorname{rank} \begin{bmatrix} B \\ C \end{bmatrix} - \operatorname{rank} B such rows, leading to the bound

(3)   \begin{equation*} \operatorname{rank} \begin{bmatrix} A & B \\ X & C \end{bmatrix} \ge \operatorname{rank} \begin{bmatrix} A & B\end{bmatrix} + \operatorname{rank} \begin{bmatrix} B \\ C \end{bmatrix} - \operatorname{rank} B =: r_{\rm opt}. \end{equation*}

We shall show that, by judicious choice of X, Eq. (3) can always be achieved with equality, making r_{\rm opt} the minimal rank for this completion problem.

Let us now find such a rank-minimizing X. The construction begins by considering the column spaces \operatorname{Col} A and \operatorname{Col} B of the matrices A and B. The intersection of these column spaces \operatorname{Col} A \cap \operatorname{Col} B is again a vector subspace, and we choose a basis P_{AB} for it. The subspace \operatorname{Col} A \cap \operatorname{Col} B might have a smaller dimension than \operatorname{Col} A. Therefore, to find a basis for \operatorname{Col} A, we can extend the basis P_{AB} by adding new columns P_{\overline{A}} to it so that the enlarged matrix \begin{bmatrix} P_{AB} & P_{\overline{A}} \end{bmatrix} forms a basis for \operatorname{Col} A. Similarly, we can find new columns P_{\overline{B}} to add to P_{AB} so that the matrix \begin{bmatrix} P_{AB} & P_{\overline{B}} \end{bmatrix} forms a basis for \operatorname{Col} B.

Now comes something subtle. Since P = \begin{bmatrix} P_{AB} & P_{\overline{A}} \end{bmatrix} forms a basis for \operatorname{Col} A, every column in A can be written as a linear combination of columns in P. In matrix language, this means there exists a matrix Q^\top such that A = PQ^\top—in fact, this exact Q forms a basis for \operatorname{Row} A. Since P is divided into two collections of columns, it is only natural to similarly (and conformally) divide Q into two pieces as Q = \begin{bmatrix} Q_{AB,A} & Q_{\overline{A},A} \end{bmatrix}. Thus, doing the same with B as we did with A, we obtain two matrix factorizations

(4)   \begin{equation*} A = \begin{bmatrix} P_{AB} & P_{\overline{A}} \end{bmatrix} \begin{bmatrix} Q_{AB,A}^\top \\ Q_{\overline{A},A}^\top \end{bmatrix}, \quad B = \begin{bmatrix} P_{AB} & P_{\overline{B}} \end{bmatrix} \begin{bmatrix} Q_{AB,B}^\top \\ Q_{\overline{B},B}^\top \end{bmatrix}. \end{equation*}

Now we do the same song and dance with the row spaces of B and C. Let’s go through the construction somewhat quickly. We start with a basis Q_{BC} for the intersection of \operatorname{Row} B \cap \operatorname{Row} C. We then extend these to bases \begin{bmatrix} Q_{BC} & Q_{\overline{B}} \end{bmatrix} and \begin{bmatrix} Q_{BC} & Q_{\overline{C}} \end{bmatrix} for \operatorname{Row} B and \operatorname{Row} C. From here, we derive the existence of matrix factorizations analogous to Eq. (4):

(5)   \begin{equation*} B = \begin{bmatrix} P_{B,BC} & P_{B,\overline{B}} \end{bmatrix} \begin{bmatrix} Q_{BC}^\top \\ Q_{\overline{B}}^\top \end{bmatrix},\quad C = \begin{bmatrix} P_{C,BC} & P_{C,\overline{C}} \end{bmatrix} \begin{bmatrix} Q_{BC}^\top \\ Q_{\overline{C}}^\top \end{bmatrix}. \end{equation*}

For the next part, we shall take advantage of a powerful fact: if we have bases P and Q for the column and row spaces of a matrix K, there exists a nonsingular matrix R for which K=PRQ^\top. Applying this fact to the bases \begin{bmatrix} P_{B,BC} & P_{B,\overline{B}} \end{bmatrix} and \begin{bmatrix} Q_{AB,B} & Q_{\overline{B},B} \end{bmatrix} for B‘s column and row spaces, we get the existence of a matrix R = \begin{bmatrix} R_{BC,AB} & R_{BC,\overline{B}} \\ R_{\overline{B},AB} & R_{\overline{B},\overline{B}} \end{bmatrix} such that

(6)   \begin{equation*} B = \begin{bmatrix} P_{B,BC} & P_{B,\overline{B}} \end{bmatrix}\begin{bmatrix} R_{BC,AB} & R_{BC,\overline{B}} \\ R_{\overline{B},AB} & R_{\overline{B},\overline{B}} \end{bmatrix} \begin{bmatrix} Q_{AB,B}^\top \\ Q_{\overline{B},B}^\top \end{bmatrix}. \end{equation*}

We now have all ingredients to describe the solution. Rather than trying to “derive” the solution in a rigorous way, let us try and discover the solution in a non-rigorous way and justify our work later. We’re hoping to find assignments X of the “?” such that \begin{bmatrix} A & B \\ X & C \end{bmatrix} achieves the minimum possible rank r_{\rm opt} defined in Eq. (3). To do so, let’s try and find a rank factorization of the completed matrix and then infer what values X will take. First off, let’s build a rank factorization for \begin{bmatrix} A & B \end{bmatrix} using the building blocks we’ve already built

(7)   \begin{equation*} \begin{bmatrix} A & B \end{bmatrix} = \begin{bmatrix} P_{AB} & P_{\overline{B}} & P_{\overline{A}} \end{bmatrix} \begin{bmatrix} Q_{AB,A}^\top & Q_{AB,B}^\top \\ 0 & Q_{\overline{B},B}^\top \\ Q_{\overline{A},A}^\top & 0 \end{bmatrix}. \end{equation*}

Now we want to extend this to a rank factorization for the entire completed matrix. Let’s build up to this in stages, denoting by \star entries whose value we have yet to determine. For us to have a hope of representing the matrix C, we’ll need to somehow add Q^\top_{\overline{C}} into our rank factorization. Doing exactly this, we get the partially specified rank factorization

(8)   \begin{equation*} \begin{bmatrix} A & B \\ X & C \end{bmatrix} = \begin{bmatrix} P_{AB} & P_{\overline{B}} & P_{\overline{A}} & 0 \\ \star & \star & \star & \star \end{bmatrix} \begin{bmatrix} Q_{AB,A}^\top & Q_{AB,B}^\top \\ 0 & Q_{\overline{B},B}^\top \\ Q_{\overline{A},A}^\top & 0\\ \star & Q_{\overline{C}}^\top \end{bmatrix}. \end{equation*}

Now, to fill in the second block row of the left factor, we recall that C = P_{C,BC} Q_{BC}^\top + P_{C,\overline{C}} Q_{\overline{C}}^\top. From Eqs. (5) and (6), we deduce that Q_{BC}^\top = R_{BC,AB} Q_{AB,B}^\top + R_{BC,\overline{B}} Q_{\overline{B},B}^\top. Thus, we can fill in more entries:

(9)   \begin{equation*} \begin{bmatrix} A & B \\ X & C \end{bmatrix} = \begin{bmatrix} P_{AB} & P_{\overline{B}} & P_{\overline{A}} & 0 \\ P_{C,BC}R_{BC,AB} & P_{C,BC} R_{BC,\overline{B}} & \star & P_{C,\overline{C}} \end{bmatrix} \begin{bmatrix} Q_{AB,A}^\top & Q_{AB,B}^\top \\ 0 & Q_{\overline{B},B}^\top \\ Q_{\overline{A},A}^\top & 0\\ \star & Q_{\overline{C}}^\top \end{bmatrix}. \end{equation*}

With these entries filled in, the remaining \star‘s can be chosen arbitrarily. Assigning the names F_{\overline{A}} and F_{\overline{C}}^\top to these free blocks, we conclude that

(10)   \begin{equation*} \begin{bmatrix} A & B \\ X & C \end{bmatrix} = \begin{bmatrix} P_{AB} & P_{\overline{B}} & P_{\overline{A}} & 0 \\ P_{C,BC}R_{BC,AB} & P_{C,BC} R_{BC,\overline{B}} & F_{\overline{A}} & P_{C,\overline{C}} \end{bmatrix} \begin{bmatrix} Q_{AB,A}^\top & Q_{AB,B}^\top \\ 0 & Q_{\overline{B},B}^\top \\ Q_{\overline{A},A}^\top & 0\\ F_{\overline{C}}^\top & Q_{\overline{C}}^\top \end{bmatrix} \end{equation*}

and

(11)   \begin{equation*} X = P_{C,BC}R_{BC,AB}Q_{AB,A}^\top + F_{\overline{A}} Q_{\overline{A},A}^\top + P_{C,\overline{C}} F_{\overline{C}}^\top. \end{equation*}

From all the preceding analysis, we know that all X‘s of the form Eq. (11) solve the minimal rank completion problem, making the completed matrix achieve the minimal rank r_{\rm opt} defined in Eq. (3). With a little more elbow grease, one can also confirm that all such minimal completions X are of the form Eq. (11), proving that Eq. (11) indeed characterizes all solutions to the minimal rank completion problem. This completes the characterization and construction of the complete set of minimizers to the block 2\times 2 minimal rank completion problem.
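
If you would like to experiment with this problem numerically, here is a small sketch (in Python with NumPy; all variable names are mine, not from the paper) which checks the value of r_{\rm opt} on a random example. As a sanity check it uses the simple assignment X = CB^+A, where B^+ is the Moore–Penrose pseudoinverse; this particular choice is not the characterization Eq. (11) and is not the construction from our paper, but it does attain the minimal rank in experiments like this one.

```python
import numpy as np

rng = np.random.default_rng(0)

def low_rank(rows, cols, r):
    """Random matrix of rank r (with probability one)."""
    return rng.standard_normal((rows, r)) @ rng.standard_normal((r, cols))

# Block sizes: A is p x q, B is p x s, C is t x s, and the unknown X is t x q.
p, q, s, t = 7, 6, 5, 4
A, B, C = low_rank(p, q, 3), low_rank(p, s, 2), low_rank(t, s, 2)

rank = np.linalg.matrix_rank
r_opt = rank(np.hstack([A, B])) + rank(np.vstack([B, C])) - rank(B)

# Any filling obeys the lower bound Eq. (3)...
X_random = rng.standard_normal((t, q))
assert rank(np.block([[A, B], [X_random, C]])) >= r_opt

# ...and the simple choice X = C B^+ A attains it in this experiment.
X = C @ np.linalg.pinv(B) @ A
assert rank(np.block([[A, B], [X, C]])) == r_opt
```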

The Overlapping Block Minimal Rank Completion Problem

If you found this post interesting, be sure to check out our paper (or here on arXiv) for an explanation of a different way of thinking about the solution of the block 2\times 2 minimal rank completion problem and a solution to a more general “overlapping” variant. The treatment should be reasonably self-contained, and we hope the solution to this problem could prove a useful tool in tackling open problems in systems theory and the study of rank-structured matrices.

Big Ideas in Applied Math: The Fast Fourier Transform

The famous law of the instrument states that “when all you have is a hammer, every problem looks like a nail.” In general, this tendency is undesirable: most problems in life are not nails and could better be addressed by a more appropriate tool. However, one can also view the law of the instrument in a more positive framing: when presented with a powerful new tool, it is worth checking how many problems it can solve. The fast Fourier transform (FFT) is one of the most important hammers in an applied mathematician’s toolkit. And it has made many seemingly unrelated problems look like nails.

In this article, I want to consider three related questions:

  1. What is the FFT—what problem is it solving and how does it solve it fast?
  2. How can the ideas behind the FFT be used to solve other problems?
  3. How can the FFT be used as a building block in solving a seemingly unrelated problem?

The FFT is widely considered one of the most important numerical algorithms, and as such every sub-community of applied mathematics is inclined to see the most interesting applications of the FFT as those in their particular area. I am unapologetically a victim of this tendency myself, and thus will discuss an application of the FFT that I find particularly beautiful and surprising. In particular, this article won’t focus on the manifold applications of the FFT in signal processing, which I think have been far better covered by authors more familiar with that field.

The Discrete Fourier Transform

At its core, the FFT is a fast algorithm to compute, given n real or complex numbers f_0,\ldots,f_{n-1}, the n complex numbers \hat{f}_0,\ldots,\hat{f}_{n-1} defined by the formula1The factor of \frac{1}{\sqrt{n}} is not universal. It is common to omit the factor in Eq. (1) and replace the \tfrac{1}{\sqrt{n}} in Eq. (2) with a \tfrac{1}{n}. We prefer this convention as it makes the DFT a unitary transformation. When working with Fourier analysis, it is important to choose formulas for the (discrete) Fourier transform and the inverse (discrete) Fourier transform which form a pair in the sense that they are inverses of each other.

(1)   \begin{equation*} \hat{f}_k = \frac{1}{\sqrt{n}} \sum_{j=0}^{n-1} f_j e^{-(2\pi i/n)jk} \quad \mbox{for } k = 0,1,2,\ldots,n-1. \end{equation*}

The output \hat{f} = (\hat{f}_0,\ldots,\hat{f}_{n-1}) is called the discrete Fourier transform (DFT) of f = (f_0,\ldots,f_{n-1}). The FFT is just one possible algorithm to evaluate the DFT.
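
As a quick illustration of the convention used here, the following snippet (a sketch using NumPy) evaluates Eq. (1) by forming the DFT matrix directly; note that numpy.fft.fft omits the \tfrac{1}{\sqrt{n}} factor, so we rescale to compare.

```python
import numpy as np

def dft_direct(f):
    """Evaluate Eq. (1) directly: O(n^2) operations, used only as a reference."""
    n = len(f)
    k = np.arange(n)
    F = np.exp(-2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)
    return F @ f

f = np.random.randn(16)
# numpy.fft.fft uses the convention without the 1/sqrt(n) factor.
assert np.allclose(dft_direct(f), np.fft.fft(f) / np.sqrt(len(f)))
```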

The DFT has the following interpretation. Suppose that f is a periodic function defined on the integers with period n—that is, f(j + n) = f(j) for every integer j. Choose f_0,\ldots,f_{n-1} to be the values of f given by f_j = f(j) for j=0,1,2,\ldots,n-1. Then, in fact, \hat{f}_0,\ldots,\hat{f}_{n-1} give an expression for f as a so-called trigonometric polynomial2The name “trigonometric polynomial” is motivated by Euler’s formula, which shows that e^{(2\pi i/n)jk} = (\cos (\tfrac{2\pi}{n} j) + i\sin(\tfrac{2\pi}{n} j))^k, so the right-hand side of Eq. (2) is indeed a “polynomial” in the “variables” \sin(\tfrac{2\pi}{n} j) and \cos(\tfrac{2\pi}{n} j):

(2)   \begin{equation*} f(j) = \frac{1}{\sqrt{n}} \sum_{k=0}^{n-1} \hat{f}_k e^{(2\pi i/n)jk} \mbox{ for every integer } j. \end{equation*}

This shows that (1) converts function values f_0,f_1,\ldots,f_{n-1} of a periodic function f to coefficients \hat{f}_0,\hat{f}_1,\ldots,\hat{f}_{n-1} of a trigonometric polynomial representation of f, which can be called the Fourier series of f. Eq. (2), referred to as the inverse discrete Fourier transform, inverts this, converting coefficients \hat{f}_0,\hat{f}_1,\ldots,\hat{f}_{n-1} to function values f_0,f_1,\ldots,f_{n-1}.

Fourier series are an immensely powerful tool in applied mathematics. For example, if f represents a sound wave produced by a chord on a piano, its Fourier coefficients \hat{f} represent the intensity of each pitch comprising the chord. An audio engineer could, for example, compute a Fourier series for a piece of music and zero out small Fourier coefficients, thus reducing the amount of data needed to store a piece of music. This idea is indeed part of the way audio compression standards like MP3 work. In addition to many more related applications in signal processing, the Fourier series is also a natural way to solve differential equations, either by pencil and paper or by computer via so-called Fourier spectral methods. As these applications (and more to follow) show, the DFT is a very useful computation to perform. The FFT allows us to perform this calculation fast.

The Fast Fourier Transform

The first observation to make is that Eq. (1) is a linear transformation: if we think of Eq. (1) as describing a transformation f \mapsto \hat{f}, then we have that \widehat{\alpha f + \beta g} = \alpha \hat{f} + \beta \hat{g}. Recall the crucial fact from linear algebra that every linear transformation can be represented by a matrix-vector multiplication.3At least in finite dimensions; the story for infinite-dimensional vector spaces is more complicated. In my experience, one of the most effective algorithm design strategies in applied mathematics is, when presented with a linear transformation, to write its matrix down and poke and prod it to see if there are any patterns in the numbers which can be exploited to give a fast algorithm. Let’s try to do this with the DFT.

We have that \hat{f} = F_n f for some n\times n matrix F_n. (We will omit the subscript n when its value isn’t particularly important to the discussion.) Let us make the somewhat non-standard choice of describing rows and columns of F by zero-indexing, so that the first row of F is row 0 and the last is row n-1. Then we have that \hat{f}_k = \sum_{j=0}^{n-1} F_{kj} f_j. Comparing with Eq. (1), we see that F_{kj} = \tfrac{1}{\sqrt{n}} e^{(-2\pi i/n) jk}. Let us define \omega_n = e^{-2\pi i/n}. Thus, we can write the matrix F out as

(3)   \begin{equation*} F_n = \frac{1}{\sqrt{n}} \begin{bmatrix}\omega_n^0 & \omega_n^0 & \omega_n^0 & \cdots & \omega_n^0 \\\omega_n^{0} & \omega_n^1 & \omega_n^2 & \cdots & \omega_n^{n-1} \\\omega_n^0 & \omega_n^2 & \omega_n^4 & \cdots & \omega_n^{2(n-1)} \\ \omega_n^0 & \omega_n^3 & \omega_n^6 & \cdots & \omega_n^{3(n-1)}\\ \vdots & \vdots & \vdots & \ddots & \vdots \\\omega_n^0 & \omega_n^{n-1} & \omega_n^{2(n-1)} & \cdots & \omega_n^{(n-1)(n-1)}\end{bmatrix} \end{equation*}

This is a highly structured matrix. The patterns in this matrix are more easily seen for a particular value of n. We shall focus on n = 8 in this discussion, but what will follow will generalize in a straightforward way to n any power of two (and in less straightforward ways to arbitrary n—we will return to this point at the end).

Instantiating Eq. (3) with n = 8 (and writing \omega = \omega_8), we have

(4)   \begin{equation*} F_8 = \frac{1}{\sqrt{8}} \begin{bmatrix}\omega^{0} &\omega^{0} &\omega^{0} &\omega^{0} &\omega^{0} &\omega^{0} &\omega^{0} &\omega^{0} \\ \omega^{0} &\omega^{1} &\omega^{2} &\omega^{3} &\omega^{4} &\omega^{5} &\omega^{6} &\omega^{7} \\ \omega^{0} &\omega^{2} &\omega^{4} &\omega^{6} &\omega^{8} &\omega^{10} &\omega^{12} &\omega^{14} \\ \omega^{0} &\omega^{3} &\omega^{6} &\omega^{9} &\omega^{12} &\omega^{15} &\omega^{18} &\omega^{21} \\ \omega^{0} &\omega^{4} &\omega^{8} &\omega^{12} &\omega^{16} &\omega^{20} &\omega^{24} &\omega^{28} \\ \omega^{0} &\omega^{5} &\omega^{10} &\omega^{15} &\omega^{20} &\omega^{25} &\omega^{30} &\omega^{35} \\ \omega^{0} &\omega^{6} &\omega^{12} &\omega^{18} &\omega^{24} &\omega^{30} &\omega^{36} &\omega^{42} \\ \omega^{0} &\omega^{7} &\omega^{14} &\omega^{21} &\omega^{28} &\omega^{35} &\omega^{42} &\omega^{49} \\ \end{bmatrix} \end{equation*}

To fully exploit the patterns in this matrix, we note that \omega represents a clockwise rotation of the complex plane by an eighth of the way around the circle. So, for example, \omega^{21} is twenty-one eighths of a turn, which is the same as 21-16 = 5 eighths of a turn. Thus \omega^{21} = \omega^5 and more generally \omega^m = \omega^{m \operatorname{mod} 8}. This allows us to simplify as follows:

(5)   \begin{equation*} F_8 = \frac{1}{\sqrt{8}}\begin{bmatrix}1 &1 &1 &1 &1 &1 &1 &1 \\ 1 &\omega^{1} &\omega^{2} &\omega^{3} &\omega^{4} &\omega^{5} &\omega^{6} &\omega^{7} \\ 1 &\omega^{2} &\omega^{4} &\omega^{6} &1 &\omega^{2} &\omega^{4} &\omega^{6} \\ 1 &\omega^{3} &\omega^{6} &\omega^{1} &\omega^{4} &\omega^{7} &\omega^{2} &\omega^{5} \\ 1 &\omega^{4} &1 &\omega^{4} &1 &\omega^{4} &1 &\omega^{4} \\ 1 &\omega^{5} &\omega^{2} &\omega^{7} &\omega^{4} &\omega^{1} &\omega^{6} &\omega^{3} \\ 1 &\omega^{6} &\omega^{4} &\omega^{2} &1 &\omega^{6} &\omega^{4} &\omega^{2} \\ 1 &\omega^{7} &\omega^{6} &\omega^{5} &\omega^{4} &\omega^{3} &\omega^{2} &\omega^{1} \\ \end{bmatrix} \end{equation*}

Now notice that, since \omega represents a clockwise rotation of an eighth of the way around the circle, \omega^2 represents a quarter turn of the circle. This fact leads to the surprising observation we can actually find the DFT matrix F_4 for n = 4 hidden inside the DFT matrix F_8 for n = 8!

To see this, rearrange the columns of F_8 so that the even-indexed columns come first, followed by the odd-indexed columns. In matrix language this is represented by right-multiplying with an appropriate4In fact, this permutation has a special name: the perfect shuffle. permutation matrix \Pi:

(6)   \begin{equation*} F_8 \Pi = \frac{1}{\sqrt{8}}\begin{bmatrix} 1 &1 &1 &1 &1 &1 &1 &1 \\ 1 &\omega^{2} &\omega^{4} &\omega^{6} &\omega^{1} &\omega^{3} &\omega^{5} &\omega^{7} \\ 1 &\omega^{4} &1 &\omega^{4} &\omega^2 &\omega^{6} &\omega^{2} &\omega^{6} \\ 1 &\omega^{6} &\omega^{4} &\omega^{2} &\omega^{3} &\omega^{1} &\omega^{7} &\omega^{5} \\ 1 &1 &1 &1 &\omega^4 &\omega^{4} &\omega^4 &\omega^{4} \\ 1 &\omega^{2} &\omega^{4} &\omega^{6} &\omega^{5} &\omega^{7} &\omega^{1} &\omega^{3} \\ 1 &\omega^{4} &1 &\omega^{4} &\omega^{6} &\omega^{2} & \omega^6 &\omega^{2} \\ 1 &\omega^{6} &\omega^{4} &\omega^2 & \omega^7 & \omega^5 & \omega^3 & \omega^1 \end{bmatrix} \end{equation*}

The top-left 4\times 4 sub-block is precisely F_4 (up to scaling). In fact, defining the diagonal matrix \Omega_4 = \operatorname{diag}(1,\omega,\omega^2,\omega^3) (called the twiddle factor) and noting that \omega^4 = -1, we have

(7)   \begin{equation*} F_8 \Pi = \frac{1}{\sqrt{2}} \begin{bmatrix} F_4 & \Omega_4 F_4 \\ F_4 & -\Omega_4 F_4 \end{bmatrix}. \end{equation*}

The matrix F_8 is entirely built up of simple scalings of the smaller DFT matrix F_4! This suggests the following decomposition to compute F_8x:

(8)   \begin{equation*} F_8 x = (F_8 \Pi)(\Pi^\top x) = \frac{1}{\sqrt{2}} \begin{bmatrix} F_4 & \Omega_4 F_4 \\ F_4 & -\Omega_4 F_4 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} \frac{F_4 x_1 + \Omega_4 (F_4 x_2)}{\sqrt{2}} \\ \frac{F_4 x_1 - \Omega_4 (F_4 x_2)}{\sqrt{2}}\end{bmatrix}. \end{equation*}

Here x_1 represents the even-indexed entries of x and x_2 the odd-indexed entries. Thus, we see that we can evaluate F_8 x by evaluating the two expressions F_4x_1 and F_4x_2. We have broken our problem into two smaller problems, which we then recombine into a solution of our original problem.

How then, do we compute the smaller DFTs F_4x_1 and F_4x_2? We just use the same trick again, breaking, for example, the product F_4x_1 into further subcomputations F_2x_{11} and F_2x_{12}. Performing this process one more time, we need to evaluate expressions of the form F_1x_{111}, which are simply given by F_1 x_{111}= x_{111} since the matrix F_1 is just a 1\times 1 matrix whose single entry is 1.

This procedure is an example of a recursive algorithm: we designed an algorithm which solves a problem by breaking it down into one or more smaller problems, solve each of the smaller problems by using this same algorithm, and then reassemble the solutions of the smaller problems to solve our original problem. Eventually, we will break our problems into such small pieces that they can be solved directly, which is referred to as the base case of our recursion. (In our case, the base case is multiplication by F_1). Algorithms using this recursion in this way are referred to as divide-and-conquer algorithms.

Let us summarize this recursive procedure we’ve developed. We want to compute the DFT y = F_n x where n is a power of two. First, we recursively use the FFT to compute F_{n/2}x_1 and F_{n/2}x_2. Next, we combine these computations to evaluate y = F_n x by the formula

(9)   \begin{equation*} F_n x = \begin{bmatrix} \frac{F_{n/2} x_1 + \Omega_{n/2} (F_{n/2} x_2)}{\sqrt{2}} \\ \frac{F_{n/2} x_1 - \Omega_{n/2} (F_{n/2} x_2)}{\sqrt{2}}\end{bmatrix}. \end{equation*}

This procedure is the famous fast Fourier transform (FFT), whose modern incarnation was presented by Cooley and Tukey in 1965 with lineage that can be traced back to work by Gauss in the early 1800s. There are many variants of the FFT using similar ideas.
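
For concreteness, here is a minimal sketch of this recursive procedure in NumPy, using the unitary normalization of Eq. (1) so that each level contributes the factor of 1/\sqrt{2} from Eq. (9); it assumes the input length is a power of two and is written for clarity rather than speed.

```python
import numpy as np

def fft_unitary(x):
    """Radix-2 FFT for the unitary DFT of Eq. (1); len(x) must be a power of two."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    if n == 1:
        return x  # base case: F_1 is the 1x1 matrix [1]
    even = fft_unitary(x[0::2])   # F_{n/2} x_1
    odd = fft_unitary(x[1::2])    # F_{n/2} x_2
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)  # diagonal of Omega_{n/2}
    return np.concatenate([even + twiddle * odd, even - twiddle * odd]) / np.sqrt(2)

# Check against a dense evaluation of Eq. (1)
n = 8
x = np.random.randn(n)
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n) / np.sqrt(n)
assert np.allclose(fft_unitary(x), F @ x)
```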

Let us see why the FFT is considered “fast” by analyzing its operation count. As is common for divide-and-conquer algorithms, the number of operations for computing F_n x using the FFT can be determined by solving a certain recurrence relation. Let T(n) be the number of operations required by the FFT. Then the cost of computing F_n x consists of

  • proportional-to-n operations (or \mathcal{O}(n) operations, in computer science language5\mathcal{O}(\cdot) refers to big-O notation. Saying an algorithm takes \mathcal{O}(n\log n) operations is stating that, more or less, the algorithm takes less than some multiple of n\log n operations to complete.) to:
    • add, subtract, and scale vectors and
    • multiply by the diagonal matrix \Omega_{n/2} = \operatorname{diag}(1,\omega_{n},\omega_n^2,\ldots,\omega_{n}^{n/2-1}) and
  • two recursive computations of F_{n/2}x_j for j= 1,2, each of which requires T(n/2) operations.

This gives us the recurrence relation

(10)   \begin{equation*} T(n) = 2T(n/2) + \mathcal{O}(n). \end{equation*}

Solving recurrences is a delicate art in general, but a wide class of recurrences are immediately solved by the flexible master theorem for recurrences. Appealing to this result, we deduce that the FFT requires T(n) = \mathcal{O}(n\log n) operations. This is a dramatic improvement over the \mathcal{O}(n^2) operations needed to compute F_n x directly using Eq. (1). This dramatic improvement in speed is what makes the FFT “fast”.

Extending the FFT Idea

The FFT is a brilliant algorithm. It exploits the structure of the discrete Fourier transform problem Eq. (1) for dramatically lower operation counts. And as we shall see a taste of, the FFT is useful in a surprisingly broad range of applications. Given the success of the FFT, we are naturally led to the question: can we learn from our success with the FFT to develop fast algorithms for other problems?

I think the FFT speaks to the power of a simple problem-solving strategy for numerical algorithm design6As mentioned earlier, the FFT also exemplifies a typical design pattern in general (not necessarily numerical) algorithm design, the divide-and-conquer strategy. In the divide-and-conquer strategy, find a clever way of dividing a problem into multiple subproblems, conquering (solving) each, and then recombining the solutions to the subproblems into a solution of the larger problem. The challenge with such problems is often finding a way of doing the recombination step, which usually relies on some clever insight. Other instances of divide-and-conquer algorithms include merge sort and Karatsuba’s integer multiplication algorithm.: whenever you have a linear transformation, write it as a matrix-vector product; whenever you have a matrix, write it down and see if there are any patterns.7Often, rearranging the matrix will be necessary to see any patterns. We often like to present mathematics in which each step of a derivation follows almost effortlessly from the last, grounded in a firm basis of elegant mathematical intuition. Often, however, noticing patterns by staring at symbols on a page can be more effective than reasoning grounded in intuition. Once the pattern has been discovered, intuition and elegance sometimes will follow quickly behind.

The most natural generalization of the FFT is the fast inverse discrete Fourier transform, providing a fast algorithm to compute the inverse discrete Fourier transform Eq. (2). The inverse FFT is quite an easy generalization of the FFT presented in the past section; it is a good exercise to see if you can mimic the development in the previous section to come up with this generalization yourself. The FFT can also be generalized to other discrete trigonometric transforms and 2D and 3D discrete Fourier transforms.

I want to consider a problem more tangentially related to the FFT, the evaluation of expressions of the form y = (A \otimes B)x, where A is an m_1\times n_1 matrix, B is an m_2\times n_2 matrix, x is a vector of length n_1n_2, and \otimes denotes the Kronecker product. For the uninitiated, the Kronecker product of A and B is an m_1m_2\times n_1n_2 matrix defined as the block matrix

(11)   \begin{equation*} A\otimes B = \begin{bmatrix} A_{11} B & A_{12} B & \cdots & A_{1n_1} B\\ A_{21}B & A_{22} B & \cdots & A_{2n_1} B \\ \vdots & \vdots & \ddots & \vdots \\ A_{m_1 1}  B & A_{m_1 2} B & \cdots & A_{m_1 n_1} B\end{bmatrix}. \end{equation*}

We could just form this m_1 m_2\times n_1n_2 matrix and compute the matrix-vector product y = (A\otimes B)x directly, but this takes a hefty \mathcal{O}(m_1m_2n_1n_2) operations.8Equally or perhaps more problematically, this also takes \mathcal{O}(m_1m_2n_1n_2) space. We can do better.

The insight is much the same as with the FFT: scaled copies of the matrix B are embedded in A\otimes B. In the FFT, we needed to rearrange the columns of the DFT matrix F_n to see this; for the Kronecker product, this pattern is evident in the natural ordering. To exploit this fact, chunk the vectors x and y into pieces x_1,x_2,\ldots,x_{n_1} and y_1,y_2,\ldots,y_{m_1} of length n_2 and m_2 respectively so that our matrix vector product can be written as9This way of writing this expression is referred to as a conformal partitioning to indicate that one can multiply the block matrices using the ordinary matrix product formula treating the block entries as if they were simple numbers.

(12)   \begin{equation*} \underbrace{\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_{m_1} \end{bmatrix}}_{=y} = \underbrace{\begin{bmatrix} A_{11} B & A_{12} B & \cdots & A_{1n_1} B\\ A_{21}B & A_{22} B & \cdots & A_{2n_1} B \\ \vdots & \vdots & \ddots & \vdots \\ A_{m_1 1}  B & A_{m_1 2} B & \cdots & A_{m_1 n_1} B\end{bmatrix}}_{=A\otimes B} \underbrace{\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n_1} \end{bmatrix}}_{=x}. \end{equation*}

To compute this product efficiently, we proceed in two steps. First, we compute the products Bx_1,Bx_2,\ldots,Bx_{n_1} which takes time \mathcal{O}(m_2n_2\cdot n_1) in total. Next, we compute each component y_1,\ldots,y_{m_1} by using the formula

(13)   \begin{equation*} y_j = A_{j1} (Bx_1) + A_{j2} (Bx_2) + \cdots + A_{jn_1} (Bx_{n_1}), \end{equation*}

which takes a total of \mathcal{O}(m_1 n_1 \cdot m_2) operations to compute all the y_j‘s. This leads to a total operation count of \mathcal{O}(m_2n_1(m_1+n_2)) for computing the matrix-vector product y = (A\otimes B)x, much better than our earlier operation count of \mathcal{O}(m_1m_2n_1n_2).10There is another way of interpreting this algorithm. If we interpret x and y as the vectorization of n_2\times n_1 and m_2 \times m_1 matrices X and Y, then we have Y = BXA^\top. The algorithm we presented is equivalent to evaluating this matrix triple product in the order Y = (BX)A^\top. This shows that this algorithm could be further accelerated using Strassen-style fast matrix multiplication algorithms.
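
A short sketch of this two-step procedure (equivalently, the Y = (BX)A^\top interpretation from the footnote) may make the bookkeeping clearer; the function name and chunking below are mine.

```python
import numpy as np

def kron_matvec(A, B, x):
    """Compute (A kron B) x via Eqs. (12)-(13) without forming the Kronecker product."""
    m1, n1 = A.shape
    m2, n2 = B.shape
    X = x.reshape(n1, n2).T      # columns of X are the chunks x_1, ..., x_{n1}
    Y = B @ X @ A.T              # step 1: B x_j for each j; step 2: combine as in Eq. (13)
    return Y.T.reshape(m1 * m2)  # stack the chunks y_1, ..., y_{m1}

A = np.random.randn(3, 4)
B = np.random.randn(5, 6)
x = np.random.randn(4 * 6)
assert np.allclose(kron_matvec(A, B, x), np.kron(A, B) @ x)
```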

While this idea might seem quite far from the FFT, if one applies this idea iteratively, one can use this approach to rapidly evaluate a close cousin of the DFT called the Hadamard-Walsh transform. Using the Kronecker product, the Hadamard-Walsh transform \hat{f} of a vector f is defined to be

(14)   \begin{equation*} \hat{f} = \left( \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \otimes \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \otimes \cdots \otimes \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \right) f. \end{equation*}

If one applies the Kronecker product trick we developed repeatedly, this gives an algorithm to evaluate the Hadamard-Walsh transform of a vector f of length n in \mathcal{O}(n \log n) operations, just like the FFT.
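
One way to realize this repeated Kronecker trick concretely is the standard in-place butterfly scheme sketched below (a toy implementation, using the unitary 1/\sqrt{2} normalization matching Eq. (14)); it uses \mathcal{O}(n\log n) operations, and we verify it against the dense Kronecker product for n = 8.

```python
import numpy as np

def fwht(f):
    """Hadamard-Walsh transform of Eq. (14) in O(n log n); len(f) must be a power of two."""
    f = np.asarray(f, dtype=float).copy()
    n, h = len(f), 1
    while h < n:
        for start in range(0, n, 2 * h):
            a = f[start:start + h].copy()
            b = f[start + h:start + 2 * h].copy()
            f[start:start + h] = (a + b) / np.sqrt(2)
            f[start + h:start + 2 * h] = (a - b) / np.sqrt(2)
        h *= 2
    return f

H2 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
H = np.kron(np.kron(H2, H2), H2)       # dense 8 x 8 transform of Eq. (14)
f = np.random.randn(8)
assert np.allclose(fwht(f), H @ f)
```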

The Hadamard-Walsh transform can be thought of as a generalization of the discrete Fourier transform to Boolean functions, which play an integral role in computer science. The applications of the Hadamard-Walsh transform are numerous and varied, in everything from voting systems to quantum computing. This is really just the tip of the iceberg. The ideas behind the FFT (and related ideas from the fast multipole method) allow for the rapid evaluation of a large number of transformations, some of which are connected by deep and general theories.

Resisting the temptation to delve into these interesting subjects in any more depth, we return to our main idea: when presented with a linear transformation, write it as a matrix-vector product; whenever you have a matrix, write it down and see if there are any patterns. The FFT exploits one such pattern, noticing that (after a reordering) a matrix contains many scaled copies of the same matrix. Rapidly evaluating expressions of the form y = (A\otimes B)x involves an even simpler application of the same idea. But there are many other patterns that can be exploited: sparsity, (approximate) low rank, off-diagonal blocks approximately of low rank, and displacement structure are other examples. Very often in applied math, our problems have additional structure that can be exploited to solve problems much faster, and sometimes finding that structure is as easy as just trying to look for it.

An Application of the FFT

A discussion of the FFT would be incomplete without exploring at least one reason why you’d want to compute the discrete Fourier transform. To focus our attention, let us consider another linear algebraic calculation which appears to have no relation to the FFT on its face: computing a matrix-vector product with a Toeplitz matrix. A matrix T is said to be Toeplitz if it has the following structure:

(15)   \begin{equation*} T = \begin{bmatrix} t_0 & t_1 & t_2 & \cdots & t_{n-1} \\ t_{-1} & t_0 & t_1 & \cdots & t_{n-2} \\ t_{-2} & t_{-1} & t_0 & \cdots & t_{n-3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ t_{-(n-1)} & t_{-(n-2)} & t_{-(n-3)} & \cdots & t_0 \end{bmatrix}. \end{equation*}

Toeplitz matrices and their relatives appear widely across applications of applied mathematics including control and systems theory, time series, numerical partial differential equations, and signal processing.

We seek to compute the matrix-vector product y = Tx. Let us begin by considering a special case of a Toeplitz matrix, a circulant matrix. A circulant matrix C has the form

(16)   \begin{equation*} \begin{bmatrix} c_0 & c_{n-1} & c_{n-2} & \cdots & c_1 \\ c_1 & c_0 & c_{n-1} & \cdots & c_2\\ c_2 & c_1 & c_0 & \cdots & c_3 \\ \vdots & \vdots & \vdots &\ddots & \vdots \\ c_{n-1} & c_{n-2} & c_{n-3} & \cdots & c_0 \end{bmatrix}. \end{equation*}

By direct computation, the matrix-vector product y = Cx is given by

(17)   \begin{equation*} y_k = \sum_{j=0}^{n-1}x_j c_{\operatorname{mod}(k-j,n)}. \end{equation*}

A surprising and non-obvious fact is that the circulant matrix C is diagonalized by the discrete Fourier transform. Specifically, we have C = F_n^* \operatorname{diag}(\sqrt{n} F_n c) F_n where c=(c_0,\ldots,c_{n-1}). This gives a fast algorithm to compute y = Cx in time \mathcal{O}(n\log n): compute the DFTs of x and c and multiply them together entrywise, take the inverse Fourier transform, and scale by \sqrt{n}.
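
A brief numerical check of this fact, under the convention of NumPy's unnormalized FFT (in which the factors of \sqrt{n} cancel and the fast product reduces to an inverse FFT of the entrywise product of two FFTs):

```python
import numpy as np

n = 16
c = np.random.randn(n)
x = np.random.randn(n)

# The circulant matrix of Eq. (16): C[k, j] = c[(k - j) mod n], cf. Eq. (17).
C = np.array([[c[(k - j) % n] for j in range(n)] for k in range(n)])

# Fast O(n log n) circulant matvec via the DFT diagonalization.
y_fast = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real
assert np.allclose(y_fast, C @ x)
```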

There is a connection with signal processing and differential equations that may help to shed light on why this technique works for those familiar with those areas. In the signal processing context, the matrix-vector product y = Cx can be interpreted as the discrete convolution of x with c (see Eq. (17)), which is a natural extension of the convolution f* g of two functions f and g on the real line. It is an important fact that the Fourier transform of a convolution is the same as multiplication of the Fourier transforms: \widehat{f * g} = \hat{f} \cdot \hat{g} (up to a possible normalizing constant).11A related identity also holds for the Laplace transform. The fact that the DFT diagonalizes a circulant matrix is just the analog of this fact for the discrete Fourier transform and the discrete convolution.

This fast algorithm for circulant matrix-vector products is already extremely useful. One can naturally reframe the problems of multiplying integers and polynomials as discrete convolutions, which can then be computed rapidly by applying the \mathcal{O}(n\log n) algorithm for fast circulant matrix-vector products. This video gives a great introduction to the FFT with this as its motivating application.

Let’s summarize where we’re at. We are interested in computing the Toeplitz matrix-vector product y = Tx. We don’t know how to do this for a general Toeplitz matrix yet, but we can do it for a special Toeplitz matrix called a circulant matrix C. By use of the FFT, we can compute the circulant matrix-vector product y = Cx in \mathcal{O}(n\log n) operations.

We can now leverage what we’ve done with circulant matrices to accelerate Toeplitz matrix-vector products. The trick is very simple: embedding. We construct a big circulant matrix which contains the Toeplitz matrix as a sub-matrix and then use multiplications by the bigger matrix to compute multiplications by the smaller matrix.

Consider the following circulant matrix, which contains T, as defined in Eq. (15), as a sub-matrix in its top-left corner:

(18)   \begin{equation*} C_T = \begin{bmatrix} t_0 & t_1  & \cdots & t_{n-1} & 0 & 0 & \cdots \\ t_{-1} & t_0 & \cdots & t_{n-2} & t_{n-1} & 0 & \cdots\\ \vdots & \vdots & \ddots & \vdots & \vdots &\vdots &\ddots\\ t_{-(n-1)} & t_{-(n-2)}  & \cdots & t_0 & t_1 & t_2&\cdots\\ 0 & t_{-(n-1)}&\cdots&t_{-1} & t_0 & t_1& \cdots\\ 0 & 0 & \cdots & t_{-2} & t_{-1} & t_0& \cdots\\ \vdots& \vdots & \ddots & \vdots & \vdots & \vdots &\ddots\\ 0 & 0 &\cdots&&&& \cdots\\ t_{n-1} & 0 & \cdots&&&&\cdots\\ t_{n-2} & t_{n-1} & \cdots &&&&\cdots\\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots &\ddots\\ t_1 & t_2 & \cdots & 0 & 0 & 0 & \cdots \end{bmatrix} \end{equation*}

This matrix is hard to write out, but essentially we pad the Toeplitz matrix with extra zeros to embed it into a circulant matrix. The “c” vector for this larger circulant matrix is obtained from the parameters t_0,t_{\pm 1},\ldots,t_{\pm (n-1)} of the Toeplitz matrix Eq. (15) by c = (t_0,t_{-1},\ldots,t_{-(n-1)},0,\ldots,0,t_{n-1},t_{n-2},\ldots,t_1).

Here comes another clever observation: we can choose the number of padding zeros used cleverly to make the size of C_T exactly equal to a power of two. This is useful because it allows us to compute matrix-vector products w=C_Tz with the power-of-two FFT described above, which we know is fast.

Finally, let’s close the loop and use fast multiplications with C_T to compute fast multiplications with T. We wish to compute the product y = Tx fast. To do this, we extend the vector x into a larger vector z by padding with zeros to get

(19)   \begin{equation*} C_T z = \begin{bmatrix} T & \star \\ \star & \star \end{bmatrix} \begin{bmatrix} x \\ 0 \end{bmatrix} = \begin{bmatrix} y \\ \star \end{bmatrix}, \end{equation*}

where we use \star to denote matrix or vector entries which are immaterial to us. We compute Tx by using our fast algorithm to compute C_T z = w and then discarding everything but the first entries of w to obtain y. If you’re careful to analyze how much padding we need to make this work, we see that this algorithm also takes only \mathcal{O}(n \log n) operations. Thus, we’ve completed our goal: we can compute Toeplitz matrix-vector products in a fast \mathcal{O}(n\log n) operations.
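
Here is a compact sketch of the whole embedding procedure (the helper name and the col/row parameterization of T are mine); it pads up to a power of two as discussed above, applies the circulant multiply with FFTs, and keeps only the first n entries as in Eq. (19).

```python
import numpy as np

def toeplitz_matvec(col, row, x):
    """Fast Toeplitz matvec via circulant embedding, Eqs. (18)-(19).

    col = (t_0, t_{-1}, ..., t_{-(n-1)}) is the first column of T and
    row = (t_0, t_1, ..., t_{n-1}) is its first row.
    """
    n = len(x)
    N = 1 << (2 * n - 1).bit_length()       # power of two, at least 2n - 1
    c = np.zeros(N, dtype=complex)
    c[:n] = col                             # t_0, t_{-1}, ..., t_{-(n-1)}
    c[N - (n - 1):] = row[1:][::-1]         # ..., t_{n-1}, t_{n-2}, ..., t_1
    z = np.zeros(N, dtype=complex)
    z[:n] = x                               # zero-pad x as in Eq. (19)
    w = np.fft.ifft(np.fft.fft(c) * np.fft.fft(z))
    return w[:n]                            # discard everything but the first n entries

n = 10
col = np.random.randn(n)
row = np.concatenate([col[:1], np.random.randn(n - 1)])
T = np.array([[row[j - i] if j >= i else col[i - j] for j in range(n)] for i in range(n)])
x = np.random.randn(n)
assert np.allclose(toeplitz_matvec(col, row, x), T @ x)
```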

Finally, let us bring this full circle and see a delightfully self-referential use of this algorithm: we can use the FFT-accelerated fast Toeplitz matrix-vector multiply to compute the DFT itself. Recall that the FFT algorithm we presented above was particularized to n which were powers of 2. There are natural generalizations of the FFT along the lines of what we did above to more general n which are highly composite and possess many small prime factors. But what if we want to evaluate the DFT for n which is a large prime?

Recall that the DFT matrix F_n has jkth entry (F_n)_{jk} = \tfrac{1}{\sqrt{n}} \omega_n^{jk}. We now employ a clever trick. Let D be a diagonal matrix with the jjth entry equal to \omega_n^{-\tfrac{1}{2} j^2}. Then, defining T = D F_n D, we have that T_{jk} = \tfrac{1}{\sqrt{n}} \omega_n^{-\tfrac{1}{2} j^2 + jk - \tfrac{1}{2} k^2} = \tfrac{1}{\sqrt{n}} \omega_n^{-\tfrac{1}{2}(j-k)^2}, which means T is a Toeplitz matrix! (Writing out the matrix T entrywise may be helpful to see this.)

Thus, we can compute the DFT \hat{f} = F_n f for any size n by evaluating the DFT as \hat{f} = D^{-1}(T(D^{-1} f)), where the product Tx is computed using the fast Toeplitz matrix-vector product. Since our fast Toeplitz matrix-vector product only requires us to evaluate power-of-two DFTs, this technique allows us to evaluate DFTs of arbitrary size n in only \mathcal{O}(n\log n) operations.
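
As a dense sanity check of the identity underlying this trick (checking only the algebra, not the fast evaluation), one can form T = DF_nD explicitly and verify both that it is Toeplitz and that \hat{f} = D^{-1}(T(D^{-1}f)); in a fast implementation the product with T would instead be evaluated by the circulant embedding above.

```python
import numpy as np

n = 7                                            # a prime length
idx = np.arange(n)
F = np.exp(-2j * np.pi * np.outer(idx, idx) / n) / np.sqrt(n)
D = np.diag(np.exp(1j * np.pi * idx**2 / n))     # D_jj = omega_n^{-j^2/2} = e^{pi i j^2/n}
T = D @ F @ D

# T_{jk} = omega_n^{-(j-k)^2/2} / sqrt(n) depends only on j - k, so T is Toeplitz.
assert np.allclose(T, np.exp(1j * np.pi * np.subtract.outer(idx, idx) ** 2 / n) / np.sqrt(n))

f = np.random.randn(n)
D_inv = np.conj(D)                               # the diagonal entries have modulus one
assert np.allclose(F @ f, D_inv @ (T @ (D_inv @ f)))
```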

Upshot: The discrete Fourier transform (DFT) is an important computation which occurs all across applied mathematics. The fast Fourier transform (FFT) reduces the operation count of evaluating the DFT of a vector of length n to proportional to n \log n, down from proportional to n^2 for direct evaluation. The FFT is an example of a broader matrix algorithm design strategy of looking for patterns in the numbers in a matrix and exploiting these patterns to reduce computation. The FFT can often have surprising applications, such as allowing for rapid computations with Toeplitz matrices.

The Better Way to Convert an SVD into a Symmetric Eigenvalue Problem

A singular value decomposition of an m\times n matrix B is a factorization of the form B = U\Sigma V^\top, where U and V are square, orthogonal matrices and \Sigma is a diagonal matrix with (i,i)th entry \sigma_i \ge 0.1Everything carries over essentially unchanged for complex-valued matrices B with U and V being unitary matrices and every (\cdot)^\top being replaced by (\cdot)^* for (\cdot)^* the Hermitian transpose. The diagonal entries of \Sigma are referred to as the singular values of B and are conventionally ordered \sigma_{\rm max} = \sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{\min(m,n)} = \sigma_{\rm min}. The columns of the matrices U and V are referred to as the left- and right-singular vectors of B and satisfy the relations Bv_i = \sigma_i u_i and B^\top u_i = \sigma_i v_i.

One can obtain the singular values and right and left singular vectors of B from the eigenvalues and eigenvectors of B^\top B and BB^\top. This follows from the calculations B^\top B = V\Sigma^2 V^\top and BB^\top = U\Sigma^2 U^\top. In other words, the nonzero singular values of B are the square roots of the nonzero eigenvalues of B^\top B and BB^\top. If one merely solves one of these problems, computing \Sigma along with U or V, one can obtain the other matrix V or U by computing U = BV \Sigma^{-1} or V = B^\top U \Sigma^{-1}. (These formulas are valid for invertible square matrices B, but similar formulas hold for singular or rectangular B to compute the singular vectors with nonzero singular values.)

This approach is often undesirable for several reasons. Here are a few I’m aware of:

  1. Accuracy: Roughly speaking, in double-precision arithmetic, accurate stable numerical methods can resolve differences on the order of 16 orders of magnitude. This means an accurately computed SVD of B can resolve the roughly 16 orders of magnitude of decaying singular values, with singular values smaller than that difficult to compute accurately. By computing B^\top B, we square all of our singular values, so resolving 16 orders of magnitude of the eigenvalues of B^\top B means we only resolve 8 orders of magnitude of the singular values of B.2Relatedly, the two-norm condition number \kappa(B) := \sigma_{\rm max}(B) / \sigma_{\rm min}(B) of B^\top B is the square of that of B. The dynamic range of our numerical computations has been cut in half!
  2. Loss of orthogonality: While U = BV \Sigma^{-1} and V = B^\top U \Sigma^{-1} are valid formulas in exact arithmetic, they fare poorly when implemented numerically. Specifically, the numerically computed values U_{\rm numerical} and V_{\rm numerical} may not be orthogonal matrices with, for example, U_{\rm numerical}^\top U_{\rm numerical} not even close to the identity matrix. One can, of course, orthogonalize the computed U or V, but this doesn’t fix the underlying problem that U or V have not been computed accurately.
  3. Loss of structure: If B possesses additional structure (e.g. sparsity), this structure may be lost or reduced by computing the product B^\top B.
  4. Nonlinearity: Even if we’re not actually computing the SVD numerically but doing analysis with pencil and paper, finding the SVD of B from B^\top B has the disadvantage of performing a nonlinear transformation on B. This prevents us from utilizing additive perturbation theorems for sums of symmetric matrices in our analysis.3For instance, one cannot prove Weyl’s perturbation theorem for singular values by considering B^\top B and applying Weyl’s perturbation theorem for symmetric eigenvalues.

There are times where these problems are insignificant and this approach is sensible: we shall return to this point in a bit. However, these problems should disqualify this approach from being the de facto way we reduce SVD computation to a symmetric eigenvalue problem. This is especially true since we have a better way.

The better way is by constructing the so-called Hermitian dilation4As Stewart and Sun detail in Section 4 of Chapter 1 of their monograph Matrix Perturbation Theory, the connections between the Hermitian dilation and the SVD go back to the discovery of the SVD itself, as it is used in Jordan’s construction of the SVD in 1874. (The SVD was also independently discovered by Beltrami the year previous.) Stewart and Sun refer to this matrix as the Jordan-Wielandt matrix associated with B, as they attribute the widespread use of the matrix today to the work of Wielandt. We shall stick to the term Hermitian dilation to refer to this matrix. of B, which is defined to be the matrix

(1)   \begin{equation*} \mathcal{H}(B) = \begin{bmatrix} 0 & B \\ B^\top & 0 \end{bmatrix}. \end{equation*}

One can show that the nonzero eigenvalues of \mathcal{H}(B) are precisely plus-or-minus the singular values of B. More specifically, we have

(2)   \begin{equation*} \mathcal{H}(B) \begin{bmatrix} u_i \\ \pm v_i \end{bmatrix} = \pm \sigma_i \begin{bmatrix} u_i \\ \pm v_i \end{bmatrix}. \end{equation*}

All of the remaining eigenvalues of \mathcal{H}(B) not of this form are zero.5This follows by noting \operatorname{rank}(\mathcal{H}(B)) = 2\operatorname{rank}(B) and thus \pm \sigma_i for i = 1,2,\ldots,\operatorname{rank}(B) account for all the nonzero eigenvalues of \mathcal{H}(B). Thus, the singular value decomposition of B is entirely encoded in the eigenvalue decomposition of \mathcal{H}(B).
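
A tiny numerical illustration of this encoding (a sketch only; the matrix sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 4
B = rng.standard_normal((m, n))

# The Hermitian dilation of Eq. (1).
H = np.block([[np.zeros((m, m)), B], [B.T, np.zeros((n, n))]])
eigvals = np.linalg.eigvalsh(H)

# The positive eigenvalues of H(B) are the nonzero singular values of B, per Eq. (2).
sing_from_H = np.sort(eigvals[eigvals > 1e-10])[::-1]
assert np.allclose(sing_from_H, np.linalg.svd(B, compute_uv=False))
```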

This approach of using the Hermitian dilation \mathcal{H}(B) to compute the SVD of B fixes all the issues identified with the “B^\top B” approach. We are able to accurately resolve a full 16 orders of magnitude of singular values. The computed singular vectors are accurate and numerically orthogonal provided we use an accurate method for the symmetric eigenvalue problem. The Hermitian dilation \mathcal{H}(B) preserves important structural characteristics in B like sparsity. For purposes of theoretical analysis, the mapping B \mapsto \mathcal{H}(B) is linear.6The linearity of the Hermitian dilation gives direct extensions of most results about the symmetric eigenvalues to singular values; see Exercise 22.

Often one can work with the Hermitian dilation only implicitly: the matrix \mathcal{H}(B) need not actually be stored in memory with all its extra zeros. The programmer designs and implements an algorithm with \mathcal{H}(B) in mind, but deals with the matrix B directly for their computations. In a pinch, however, forming \mathcal{H}(B) directly in software and utilizing symmetric eigenvalue routines directly is often not too much less efficient than a dedicated SVD routine and can cut down on programmer effort significantly.

As with all things in life, there’s no free lunch here. There are a couple of downsides to the Hermitian dilation approach. First, \mathcal{H}(B) is, except for the trivial case B = 0, an indefinite symmetric matrix. By contrast, B^\top B and BB^\top are positive semidefinite, which can be helpful in some contexts.7This is relevant if, say, we want to find the small singular values by inverse iteration. Positive definite linear systems are easier to solve by either direct or iterative methods. Further, if n\ll m (respectively, m \ll n), then B^\top B (respectively, BB^\top) is tiny compared to \mathcal{H}(B), so it might be considerably cheaper to compute an eigenvalue decomposition of B^\top B (or BB^\top) than \mathcal{H}(B).

Despite the somewhat salacious title of this article, the B^\top B and Hermitian dilation approaches both have their role, and the purpose of this article is not to say the B^\top B approach should be thrown in the dustbin. However, in my experience, I frequently hear the B^\top B approach stated as the definitive way of converting an SVD into an eigenvalue problem, with the Hermitian dilation approach not even mentioned. This, in my opinion, is backwards. For accuracy reasons alone, the Hermitian dilation should be the go-to tool for turning SVDs into symmetric eigenvalue problems, with the B^\top B approach only used when the problem is known to have singular values which don’t span many orders of magnitude or B is tall and skinny and the computational cost savings of the B^\top B approach are vital.

Big Ideas in Applied Math: Galerkin Approximation

My first experience with the numerical solution of partial differential equations (PDEs) was with finite difference methods. I found finite difference methods to be somewhat fiddly: it is quite an exercise in patience to, for example, work out the appropriate fifth-order finite difference approximation to a second order differential operator on an irregularly spaced grid and even more of a pain to prove that the scheme is convergent. I found that I liked the finite element method a lot better1Finite element methods certainly have their own fiddly-nesses (as anyone who has worked with a serious finite element code can no doubt attest to). as there was a unifying underlying functional analytic theory, Galerkin approximation, which showed how, in a sense, the finite element method computed the best possible approximate solution to the PDE among a family of potential solutions. However, I came to feel later that Galerkin approximation was, in a sense, the more fundamental concept, with the finite element method being one particular instantiation (with spectral methods, boundary element methods, and the conjugate gradient method being others). In this post, I hope to give a general introduction to Galerkin approximation as computing the best possible approximate solution to a problem within a certain finite-dimensional space of possibilities.

Systems of Linear Equations

Let us begin with a linear algebraic example, which is unburdened by some of the technicalities of partial differential equations. Suppose we want to solve a very large system of linear equations Ax = b, where the matrix A is symmetric and positive definite (SPD). Suppose that A is N\times N where N is so large that we don’t even want to store all N components of the solution x on our computer. What can we possibly do?

One solution is to consider only solutions x lying in a subspace \mathcal{X} of the set of all possible solutions \mathbb{R}^N. If this subspace has a basis x_1,x_2,\ldots,x_M \in \mathcal{X}, then any vector x \in \mathcal{X} can be represented as x = a_1x_1 + \cdots + a_Mx_M and one only has to store the M < N numbers a_1,\ldots,a_M. In general, the true solution x will not belong to the subspace \mathcal{X} and we must settle for an approximate solution \hat{x} \in \mathcal{X}.

The next step is to convert the system of linear equations Ax = b into a form which is more amenable to approximate solution on a subspace \mathcal{X}. Note that the equation Ax = b encodes N different linear equations a_i^\top x = b_i where a_i^\top is the ith row of A and b_i is the ith element of b. Note that the ith equation is equivalent to the condition e_i^\top A x = e_i^\top b, where e_i is the vector with zeros in all entries except for the ith entry which is a one. More generally, by multiplying the equation Ax = b by an arbitrary test row vector y^\top, we get y^\top Ax = y^\top b for all y \in \mathbb{R}^N. We refer to this as a variational formulation of the linear system of equations Ax = b. In fact, one can easily show that the variational problem is equivalent to the system of linear equations:

(1)   \begin{equation*} Ax = b \:\mbox{ if, and only if, }\: y^\top Ax = y^\top b \mbox{ for every } y \in \mathbb{R}^N. \end{equation*}

Since we are seeking an approximate solution from the subspace \mathcal{X}, it is only natural that we also restrict our test vectors y to lie in the subspace \mathcal{X}. Thus, we seek an approximate solution \hat{x} to the system of equations Ax = b as the solution of the variational problem

(2)   \begin{equation*} y^\top A\hat{x} = y^\top b \mbox{ for every } y \in\mathcal{X}. \end{equation*}

One can relatively easily show this problem possesses a unique solution \hat{x}.2Here is a linear algebraic proof. As we shall see below, the same conclusion will also follow from the general Lax-Milgram theorem. Let P be a matrix whose columns form a basis for \mathcal{X}. Then every y\in \mathcal{X} can be written as y = Pu for some u \in \mathbb{R}^M. Thus, writing \hat{x} = Pw, we have that y^\top A\hat{x} = u^\top P^\top A P w = u^\top P^\top b for every u\in \mathbb{R}^M. But this is just a variational formulation of the equation (P^\top A P)w = P^\top b. The matrix P^\top A P is SPD since v^\top(P^\top A P)v = (Pv)^\top A (Pv) > 0 for v\ne 0 since A is SPD. Thus (P^\top A P)w = P^\top b has a unique solution w = (P^\top A P)^{-1}P^\top b. Thus \hat{x} = Pw = P(P^\top AP)^{-1}P^\top b is the unique solution to the variational problem Eq. (2). In what sense is \hat{x} a good approximate solution for Ax = b? To answer this question, we need to introduce a special way of measuring the error of an approximate solution to Ax = b. We define the A-inner product of vectors x and y to be \langle x, y \rangle_A := y^\top Ax and the associated A-norm \|x\|_A = \sqrt{\langle x, x\rangle_A} = \sqrt{x^\top A x}.3Note that the A-norm can be seen as a weighted Euclidean norm, where the components of the vector x in the direction of the eigenvectors of A are weighted by their corresponding eigenvalue. Concretely, if x = a_1 q_1 + \cdots + a_N q_N where q_j is an eigenvector of A with eigenvalue \lambda_j (Aq_j = \lambda_j q_j), then we have \|x\|_A^2 = \lambda_1 a_1^2 + \cdots + \lambda_N a_N^2. All of the properties satisfied by the familiar Euclidean inner product and norm carry over to the new A-inner product and norm (e.g., the Pythagorean theorem). Indeed, for those familiar, one can show \langle \cdot, \cdot \rangle_A satisfies all the axioms for an inner product space.

We shall now show that the error x - \hat{x} between x and its Galerkin approximation \hat{x} is A-orthogonal to the space \mathcal{X} in the sense that \langle y, x - \hat{x}\rangle_A = 0 for all y \in \mathcal{X}. This follows from the straightforward calculation, for y \in \mathcal{X},

(3)   \begin{equation*} \langle y, x - \hat{x} \rangle_A = y^\top A (x - \hat{x}) = y^\top A x - y^\top A \hat{x} = y^\top b - y^\top b = 0, \end{equation*}

where y^\top A x = y^\top b since x solves the variational problem Eq. (1) and y^\top A \hat{x} = y^\top b since \hat{x} solves the variational problem Eq. (2).

The fact that the error x - \hat{x} is A-orthogonal to \mathcal{X} can be used to show that \hat{x} is, in a sense, the best approximate solution to Ax = b in the subspace \mathcal{X}. First note that, for any approximate solution z \in \mathcal{X} to Ax = b, the vector \hat{x} - z \in \mathcal{X} is A-orthogonal to x - \hat{x}. Thus, by the Pythagorean theorem,

(4)   \begin{equation*} \|x - z\|^2_A = \|(x - \hat{x}) + (\hat{x}-z)\|_A^2 = \|x-\hat{x}\|_A^2 + \|\hat{x} - z\|_A^2 \ge \|x - \hat{x} \|_A^2. \end{equation*}

Thus, the Galerkin approximation \hat{x} is the best approximate solution to Ax = b in the subspace \mathcal{X} with respect to the A-norm: \|x - z\|_A \ge \|x - \hat{x} \|_A for every z \in \mathcal{X}. Consequently, if one picks a subspace \mathcal{X} which the solution x almost lies in,4In the sense that \inf_{z \in \mathcal{X}} \|x - z\| is small. then \hat{x} will be a good approximate solution to Ax = b, irrespective of the size of the subspace \mathcal{X}.
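To make the linear-algebraic version of Galerkin approximation concrete, here is a minimal numerical sketch. The SPD matrix A, right-hand side b, and subspace basis P below are arbitrary illustrative choices; the sketch forms \hat{x} = P(P^\top A P)^{-1}P^\top b as in the footnote above and checks both the A-orthogonality of the error and the best-approximation property.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 100, 10

# An arbitrary SPD matrix A and right-hand side b, for illustration only.
G = rng.standard_normal((N, N))
A = G @ G.T + N * np.eye(N)
b = rng.standard_normal(N)
x = np.linalg.solve(A, b)            # exact solution, kept only for comparison

# Columns of P span the approximation subspace X (a random choice here).
P = rng.standard_normal((N, M))

# Galerkin approximation: solve the small M x M projected system.
w = np.linalg.solve(P.T @ A @ P, P.T @ b)
x_hat = P @ w

# The error x - x_hat is A-orthogonal to X (entries close to machine precision) ...
print(np.max(np.abs(P.T @ A @ (x - x_hat))))

# ... and x_hat beats any other element of X in the A-norm.
A_norm = lambda v: np.sqrt(v @ A @ v)
z = P @ rng.standard_normal(M)
print(A_norm(x - x_hat) <= A_norm(x - z))    # True
```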

Variational Formulations of Differential Equations

As I hope I’ve conveyed in the previous section, Galerkin approximation is not a technique that only works for finite element methods or even just PDEs. However, differential and integral equations are one of the most important applications of Galerkin approximation since the space of all possible solutions to a differential or integral equation is infinite-dimensional: approximation in a finite-dimensional space is absolutely critical. In this section, I want to give a brief introduction to how one can develop variational formulations of differential equations amenable to Galerkin approximation. For simplicity of presentation, I shall focus on a one-dimensional problem described by an ordinary differential equation (ODE) boundary value problem. All of this generalizes wholesale to partial differential equations in multiple dimensions, though there are some additional technical and notational difficulties (some of which I will address in footnotes). Variational formulation of differential equations is a topic with important technical subtleties which I will end up brushing past. Rigorous references are Chapters 5 and 6 from Evans’ Partial Differential Equations or Chapters 0-2 from Brenner and Scott’s The Mathematical Theory of Finite Element Methods.

As our model problem for which we seek a variational formulation, we will focus on the one-dimensional Poisson equation, which appears in the study of electrostatics, gravitation, diffusion, heat flow, and fluid mechanics. The unknown u is a real-valued function on an interval which we take to be [0,1].5In higher dimensions, one can consider an arbitrary domain \Omega \subseteq \mathbb{R}^d with, for example, a Lipschitz boundary. We assume Dirichlet boundary conditions, that u is equal to zero on the boundary: u(0) = u(1) = 0.6In higher dimensions, one has u = 0 on the boundary \partial \Omega of the region \Omega. Poisson’s equation then reads7In higher dimensions, -\Delta u = f on \Omega and u = 0 on \partial\Omega, where \Delta is the Laplacian operator.

(5)   \begin{equation*} -u''(x) = f(x) \mbox{ for every } x \in (0,1), \quad u(0) = u(1) = 0. \end{equation*}

We wish to develop a variational formulation of this differential equation, similar to how we developed a variational formulation of the linear system of equations in the previous section. To develop our variational formulation, we take inspiration from physics. If u(x) represents, say, the temperature at a point x, we are never able to measure u(x) exactly. Rather, we can measure the temperature in a region around x with a thermometer. No matter how carefully we engineer our thermometer, our thermometer tip will have some volume occupying a region R in space. The temperature u_{\rm meas} measured by our thermometer will be the average temperature in the region R or, more generally, a weighted average u_{\rm meas} = \int_0^1 u(x) v(x) \, dx where v(\cdot) is a weighting function which is zero outside the region R. Now let’s use our thermometer to “measure” our differential equation:

(6)   \begin{equation*} \int_0^1-v(x) u''(x) \, dx = \int_0^1 v(x)f(x) \, dx. \end{equation*}

This integral expression is some kind of variational formulation of our differential equation, as it is an equation involving the solution to our differential equation u which must hold for every averaging function v. (The precise meaning of every will be forthcoming.) It will benefit us greatly to make this expression more “symmetric” with respect to u and v. To do this, we shall integrate by parts:8Integrating by parts is harder in higher dimensions. My personal advice for integrating by parts in higher dimensions is to remember that integration by parts is ultimately a result of the product rule. As such, to integrate by parts, we first write an expression involving our integrand using the product rule of some differential operator and then integrate both sides. In this case, notice that \nabla \cdot (v \nabla u) = v\Delta u + \nabla v \cdot \nabla u. Rearranging and integrating, we see that \int_\Omega -v\Delta u \, dx = \int_\Omega \nabla v \cdot \nabla u \, dx - \int_\Omega \nabla \cdot (v\nabla u) \, dx. We then apply the divergence theorem to the last term to get \int_\Omega \nabla \cdot (v\nabla u) \, dx = \int_{\partial \Omega} v \nabla u \cdot n \, dS(x), where n represents an outward facing unit normal to the boundary \partial \Omega and dS(x) represents integration on the surface \partial \Omega. If v is zero on \partial \Omega, we conclude \int_\Omega \nabla v \cdot \nabla u \, dx  = \int_\Omega v f \, dx for all nice functions v on \Omega satisfying v = 0 on \partial \Omega.

(7)   \begin{equation*} \int_0^1-v(x)u''(x) \, dx = \int_0^1 v'(x)u'(x) \, dx + v(0) u'(0) - v(1)u'(1). \end{equation*}

In particular, if v is zero on the boundary, v(0) = v(1) = 0, then the two boundary terms vanish and we’re left with the variational equation

(8)   \begin{equation*} \int_0^1 v'(x)u'(x) \, dx = \int_0^1 v(x) f(x) \, dx \mbox{ for all \textit{nice} functions $v$ on $[0,1]$ with } v(0) = v(1) = 0. \end{equation*}

Compare the variational formulation of the Poisson equation Eq. (8) to the variational formulation of the system of linear equations Ax = b in Eq. (1). The solution vector x in the differential equation context is a function u satisfying the boundary condition of u being zero on the boundary u(0) = u(1) = 0. The right-hand side b is replaced by a function f on the interval [0,1]. The test vector y is replaced by a test function v on the interval [0,1]. The matrix product expression y^\top A x is replaced by the integral \int_0^1 v'(x)u'(x) \, dx. The product y^\top b is replaced by the integral \int_0^1 v(x) f(x) \, dx. As we shall soon see, there is a unifying theory which treats both of these contexts simultaneously.

Before this unifying theory, we must address the question of which functions v we will consider in our variational formulation. One can show that all of the calculations we did in this section hold if v is a continuously differentiable function on [0,1] which is zero at the endpoints 0 and 1 and u is a twice continuously differentiable function on [0,1]. Because of technical functional analytic considerations, we shall actually want to expand the class of functions in our variational formulation to even more functions v. Specifically, we shall consider all functions v which are (A) square-integrable (\int_0^1|v(x)|^2 \,dx is finite), (B) possess a square-integrable derivative9More specifically, we only insist that v possess a square-integrable weak derivative. v' (\int_0^1|v'(x)|^2 \,dx is finite), and (C) are zero on the boundary. We refer to this class of functions as the Sobolev space H_0^1((0,1)).10The class of functions satisfying (A) and (B) but not necessarily (C) is the Sobolev space H^1((0,1)). For an arbitrary function in H^1((0,1)), the existence of a well-defined restriction to the boundary points 0 and 1 is actually nontrivial to show, requiring the existence of a trace operator. Chapter 5 of Evans’ Partial Differential Equations is a good introduction to Sobolev spaces. The Sobolev spaces H_0^1((0,1)) and H^1((0,1)) naturally extend to spaces H_0^1(\Omega) and H^1(\Omega) for an arbitrary domain \Omega \subseteq \mathbb{R}^d with a nice boundary.

Now this is where things get really strange. Note that it is possible for a function u to satisfy the variational formulation Eq. (8) but for u not to satisfy the Poisson equation Eq. (5). A simple example is when f possesses a discontinuity (say, a step discontinuity where f is 0 and then jumps to 1). Then no twice continuously differentiable u will satisfy Eq. (5) at every point in (0,1), and yet a solution u to the variational problem Eq. (8) exists! The variational formulation actually allows us to give a reasonable definition of “solving the differential equation” when a classical solution to -u'' = f does not exist. Our only requirement for the variational problem is that u, itself, belongs to the space H_0^1((0,1)). A solution to the variational problem Eq. (8) is called a weak solution to the differential equation Eq. (5) because, as we have argued, a solution to Eq. (8) need not solve Eq. (5) in the classical sense.11One can show that any classical solution to Eq. (5) solves Eq. (8). Given certain conditions on f, one can go the other way, showing that weak solutions are indeed bona fide classical solutions. This is the subject of regularity theory.

The Lax-Milgram Theorem

Let us now build up an abstract language which allows us to use Galerkin approximation both for linear systems of equations and PDEs (as well as other contexts). If one compares the expressions y^\top A x from the linear systems context and \int_0^1 v'(x)u'(x) \, dx from the differential equation context, one recognizes that both of these expressions are so-called bilinear forms: they depend on two arguments (x and y or u and v) and are linear in each argument when the other is held fixed. For example, if one defines a(x,y) = y^\top A x, one has a(x,\alpha_1 y_1 + \alpha_2 y_2) = \alpha_1 a(x,y_1) + \alpha_2 a(x,y_2). Similarly, if one defines a(u,v) = \int_0^1 v'(x)u'(x) \, dx, then a(\alpha_1 u_1 + \alpha_2 u_2,v) = \alpha_1 a(u_1,v) + \alpha_2 a(u_2, v).

Implicitly swimming in the background is some space of vectors or functions on which this bilinear form a(\cdot,\cdot) is defined. In the linear system of equations context, this space is \mathbb{R}^N, the space of N-dimensional vectors, and in the differential equation context, this space is H_0^1((0,1)) as defined in the previous section.12The connection between vectors and functions is even more natural if one considers a function u : [0,1] \to \mathbb{R} as a vector of infinite length, with one entry for each real number x \in [0,1]. Call this space \mathcal{V}. We shall assume that \mathcal{V} is a special type of linear space called a Hilbert space, an inner product space (with inner product \langle \cdot, \cdot \rangle_\mathcal{V}) where every Cauchy sequence converges to an element in \mathcal{V} (in the inner product-induced norm).13Note that every inner product space has a unique completion to a Hilbert space. For example, if one considers the space C_{0}^\infty((0,1)) of C^\infty smooth functions which vanish near the boundary of (0,1) with the inner product \langle u, v \rangle_{H^1_0((0,1))} = \int_0^1 u(x)v(x) + u'(x)v'(x) \, dx, the completion is H_0^1((0,1)). A natural extension to higher dimensions holds. The Cauchy sequence convergence property, also known as metric completeness, is important because we shall often deal with a sequence of elements u_1,u_2,\ldots \in \mathcal{V} for which we will need to establish convergence to an element u \in \mathcal{V}. (Think of u_1,u_2,\ldots as a sequence of Galerkin approximations to a solution u.)

With these formalities, an abstract variational problem takes the form

(9)   \begin{equation*} \mbox{Find $u \in \mathcal{V}$ such that }a(u,v) = \ell(v) \mbox{ for all } v \in \mathcal{V}, \end{equation*}

where a(\cdot,\cdot) is a bilinear form on \mathcal{V} and \ell(\cdot) is a linear form on \mathcal{V} (a linear map \ell: \mathcal{V} \to \mathbb{R}). There is a beautiful and general theorem called the Lax-Milgram theorem which establishes existence and uniqueness of solutions to a problem like Eq. (9).

Theorem (Lax-Milgram): Let a(\cdot,\cdot) and \ell(\cdot) satisfy the following properties:

  1. (Boundedness of a) There exists a constant C \ge 0 such that, for every u,v \in \mathcal{V}, |a(u,v)| \le C \|u\|_{\mathcal{V}}\|v\|_{\mathcal{V}}.
  2. (Coercivity) There exists a positive constant c > 0 such that a(u,u) \ge c \|u\|_{\mathcal{V}}^2 for every u \in \mathcal{V}.
  3. (Boundedness of \ell) There exists a constant K such that |\ell(v)| \le K \|v\|_{\mathcal{V}} for every v\in \mathcal{V}.

Then the variational problem Eq. (9) possesses a unique solution.

For our cases, a will also be symmetric: a(u,v) = a(v,u) for all u,v \in \mathcal{V}. While the Lax-Milgram theorem holds without symmetry, let us continue our discussion with this additional symmetry assumption. Note that, taken together, properties (1-2) say that the a-inner product, defined as \langle u, v \rangle_a = a(u,v), is comparable to the standard inner product \langle u, v\rangle_{\mathcal{V}} of u and v, never more than a constant factor larger or smaller,14That is, one has that the a-norm and the \mathcal{V}-norm are equivalent in the sense that \sqrt{c}\|v\|_{\mathcal{V}} \le \|v\|_a \le \sqrt{C} \|v\|_{\mathcal{V}}. so the norms \|\cdot\|_\mathcal{V} and \|\cdot\|_a define the same topology.

Let us now see how the Lax-Milgram theorem applies to our two examples. A reader who wants a more “big picture” perspective can comfortably skip to the next section; those who want to see Lax-Milgram in action should read the discussion below.

Applying the Lax-Milgram Theorem
Begin with the linear system of equations, where \mathcal{V} = \mathbb{R}^N with inner product \langle x, y \rangle_{\mathbb{R}^N} = y^\top x, a(x,y) = y^\top Ax, and \ell(y) = y^\top b. Note that we have the inequality \lambda_{\rm min} x^\top x \le x^\top A x \le \lambda_{\rm max} x^\top x.15This is an easy consequence of the Courant-Fischer theorem. More generally, note that, since A is symmetric, A has an orthonormal basis of eigenvectors q_1,\ldots,q_N with eigenvalues \lambda_{\rm max} = \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_N = \lambda_{\rm min}. Then x = x_1q_1 + \cdots + x_Nq_N and x^\top Ax = \lambda_1 x_1^2 + \cdots + \lambda_N x_N^2. The inequalities follow from noting the Parseval relation \|x\|^2 = x^\top x = x_1^2 + \cdots + x_N^2 and noting that x^\top Ax / x^\top x is a convex combination of the eigenvalues of A. In particular, we have that \|x\|_A = \sqrt{x^\top A x} \le \sqrt{\lambda_{\rm max}} \|x\|. Property (1) then follows from the Cauchy-Schwarz inequality applied to the A-inner product: |a(x,y)| = |\langle x, y\rangle_A| \le \|x\|_A \|y\|_A \le \lambda_{\rm max} \|x\|\|y\|. Property (2) is simply the established inequality a(x,x) = x^\top Ax \ge \lambda_{\min} x^\top x. Property (3) also follows from the Cauchy-Schwarz inequality: |\ell(y)| = |y^\top b| \le \|b\|\|y\|. Thus, by Lax-Milgram, the variational problem y^\top Ax = y^\top b for every y \in \mathbb{R}^N has a unique solution x. Note that the linear systems example shows why the coercivity property (2) is necessary. If A is positive semi-definite but not positive-definite, then there exists an eigenvector v of A with eigenvalue 0. Then v^\top Av = 0 \not\ge c v^\top v for any positive constant c and A is singular, so the variational formulation of Ax = b has no solution for some choices of the vector b.

Applying the Lax-Milgram theorem to differential equations can require powerful inequalities. In this case, the \mathcal{V} = H_0^1((0,1))-inner product is given by \langle u, v \rangle_{H_0^1((0,1))} = \int_0^1 v(x)u(x) + v'(x)u'(x) \, dx, a(u,v) = \int_0^1 v'(x)u'(x) \, dx, and \ell(v) = \int_0^1 v(x)f(x) \, dx. Condition (1) follows from an application of the Cauchy-Schwarz inequality for integrals:16 In higher dimensions, we need even more Cauchy-Schwarz! First, we note that the absolute value of the integral is less than the integral of the absolute value: |a(u,v)| \le \int_\Omega |\nabla v \cdot \nabla u| \, dx. Second, we apply the Cauchy-Schwarz inequality for the vectors \nabla u and \nabla v to get |\nabla v \cdot \nabla u| \le \|\nabla v\| \|\nabla u\|, where, e.g., \|\nabla v\| is the Euclidean norm of the vector \nabla v. This gives |a(u,v)| \le \int_\Omega \|\nabla v\| \|\nabla u\| \, dx. Next, we apply the Cauchy-Schwarz inequality for integrals to get |a(u,v)| \le \left(\int_\Omega \|\nabla v\|^2 \, dx\right)^{1/2} \left(\int_\Omega \|\nabla u\|^2 \, dx\right)^{1/2}. Finally, we note that \int_\Omega \|\nabla v\|^2 \, dx \le \int_\Omega v^2 + \|\nabla v\|^2 \, dx = \langle v, v\rangle_{H_0^1(\Omega)} = \|v\|_{H_0^1(\Omega)}^2 and thus obtain |a(u,v)| \le \|v\|_{H_0^1(\Omega)}\|u\|_{H_0^1(\Omega)}. This is the desired inequality with constant one.

(10)   \begin{equation*} \begin{split} |a(u,v)| &\le \int_0^1 |v'(x)||u'(x)| \, dx \\ &\le \left(\int_0^1 |v'(x)|^2 \, dx\right)^{1/2} \left(\int_0^1 |u'(x)|^2 \, dx\right)^{1/2} \\ &\le \|v\|_{H_0^1((0,1))}\|u\|_{H_0^1((0,1))}. \end{split} \end{equation*}

Let’s go line-by-line. First, we note that the absolute value of the integral is less than the integral of the absolute value. Next, we apply the Cauchy-Schwarz inequality for integrals. Finally, we note that \int_0^1|v'(x)|^2 \, dx \le \int_0^1|v(x)|^2 + |v'(x)|^2 \, dx = \langle v, v\rangle_{H_0^1((0,1))} = \|v\|_{H_0^1((0,1))}^2. This establishes Property (1) with constant C = 1. As we already see one third of the way into verifying the hypotheses of Lax-Milgram, establishing these inequalities can require several steps. Ultimately, however, strong knowledge of just a few core inequalities (e.g. Cauchy-Schwarz) may be all that’s needed.

Proving coercivity (Property (2)) actually requires a very special inequality, Poincaré’s inequality.17In higher dimensions, Poincaré’s inequality takes the form \int_\Omega \|\nabla u\|^2 \, dx \ge k\int_\Omega u^2 \, dx for a constant k depending only on the domain \Omega. In its simplest incarnation, the inequality states that there exists a constant k such that, for all functions u \in H_0^1((0,1)),18A simple proof of Poincaré’s inequality for continuously differentiable u with u(0) = u(1) = 0 goes as follows. Note that, by the fundamental theorem of calculus, u(x) = u(0) + \int_0^x u'(y) \, dy = \int_0^x u'(y) \, dy. Applying the Cauchy-Schwarz inequality for integrals gives |u(x)|\le \left(\int_0^x 1^2 \, dy\right)^{1/2}\left(\int_0^x |u'(y)|^2 \, dy\right)^{1/2} \le \sqrt{x} \left(\int_0^x |u'(y)|^2 \, dy\right)^{1/2} \le \left(\int_0^1 |u'(y)|^2 \, dy\right)^{1/2} since 0\le x\le 1. Thus |u(x)|^2 \le \int_0^1 |u'(y)|^2 \, dy for all x \in [0,1]; integrating over x gives \int_0^1\int_0^1|u'(y)|^2 \, dy\, dx = \int_0^1|u'(x)|^2 \, dx \ge \int_0^1|u(x)|^2 \, dx. This proves Poincaré’s inequality with the constant k = 1.

(11)   \begin{equation*} \int_0^1|u'(x)|^2 \, dx \ge k\int_0^1|u(x)|^2 \, dx. \end{equation*}

With this inequality in tow, property (2) follows after another lengthy string of inequalities:19The same estimate holds in higher dimensions, with the appropriate generalization of Poincaré’s inequality.

(12)   \begin{equation*} \begin{split} a(u,u) &= \int_0^1 |u'(x)|^2 \, dx \\ &= \frac{1}{2} \int_0^1 |u'(x)|^2 \, dx + \frac{1}{2} \int_0^1 |u'(x)|^2 \, dx \\ &\ge \frac{1}{2} \int_0^1 |u'(x)|^2 \, dx + \frac{k}{2} \int_0^1 |u(x)|^2 \, dx \\ &\ge \min \left(\frac{1}{2},\frac{k}{2}\right) \left( \int_0^1 |u'(x)|^2 \, dx + \int_0^1|u(x)|^2 \, dx\right) \\ &= \min \left(\frac{1}{2},\frac{k}{2}\right) \|u\|_{H_0^1((0,1))}^2. \end{split} \end{equation*}

For Property (3) to hold, the function f must be square-integrable. With this hypothesis, Property (3) is much easier than Properties (1-2) and we leave it as an exercise for the interested reader (or to a footnote20The proof is similar in one dimension or higher dimensions, so we state it for an arbitrary domain \Omega for brevity. By Cauchy-Schwarz, we have that \ell(v) = \int_\Omega vf \, dx \le \left( \int_{\Omega} f^2 \, dx \right)^{1/2}\left( \int_{\Omega} v^2 \, dx \right)^{1/2} \le \left( \int_{\Omega} f^2 \, dx \right)^{1/2} \|v\|_{H_0^1(\Omega)}. for the uninterested reader).

This may seem like a lot of work, but the result we have achieved is stunning. We have proven (modulo a lot of omitted details) that the Poisson equation -u'' = f has a unique weak solution as long as f is square-integrable!21And in the footnotes, we have upgraded this proof to existence of a unique weak solution to the Poisson equation -\Delta u = f on a domain \Omega. What is remarkable about this proof is that it uses the Lax-Milgram theorem and some inequalities alone: no specialized knowledge about the physics underlying the Poisson equation was necessary. Going through the details of Lax-Milgram has been a somewhat lengthy affair for an introductory post, but hopefully this discussion has illuminated the power of functional analytic tools (like Lax-Milgram) in studying differential equations. Now, with a healthy dose of theory in hand, let us return to Galerkin approximation.

General Galerkin Approximation

With our general theory set up, Galerkin approximation for a general variational problem is the same as it was for a system of linear equations. First, we pick an approximation space \mathcal{X} which is a subspace of \mathcal{V}. We then have the Galerkin variational problem

(13)   \begin{equation*} \mbox{Find $\hat{u} \in \mathcal{X}$ such that } a(\hat{u},v) = \ell(v) \mbox{ for every } v \in \mathcal{X}. \end{equation*}

Provided a and \ell satisfy the conditions of the Lax-Milgram theorem, there is a unique solution \hat{u} to the problem Eq. (13). Moreover, the special property of Galerkin approximation holds: the error u-\hat{u} is a-orthogonal to the subspace \mathcal{X}. Consequently, \hat{u} is the best approximate solution to the variational problem Eq. (9) in the a-norm. To see the a-orthogonality, we have that, for any v \in \mathcal{X},

(14)   \begin{equation*} \langle u-\hat{u}, v\rangle_a = a(u-\hat{u},v) = a(u,v) - a(\hat{u},v) = \ell(v) - \ell(v) = 0, \end{equation*}

where we use the variational equation Eq. (9) for a(u,v) = \ell(v) and Eq. (13) for a(\hat{u},v) = \ell(v). Note the similarities with Eq. (3). Thus, using the Pythagorean theorem for the a-norm, for any other approximation solution w \in \mathcal{X}, we have22Compare Eq. (4).

(15)   \begin{equation*} \|u - w\|^2_a = \|(u - \hat{u}) + (\hat{u}-w)\|_a^2 = \|u-\hat{u}\|_a^2 + \|\hat{u} - w\|_a^2 \ge \|u - \hat{u} \|_a^2. \end{equation*}

Put simply, \hat{u} is the best approximation to u in the a-norm.23Using the fact that the norms \|\cdot\|_{\mathcal{V}} and \|\cdot\|_a are equivalent in virtue of Properties (1-2), one can also show that \hat{u} is within a constant factor C/c of the best approximation in the norm \|\cdot\|_{\mathcal{V}}. This is known as Céa’s Lemma.

Galerkin approximation is powerful because it allows us to approximate an infinite-dimensional problem by a finite-dimensional one. If we let \phi_1,\ldots,\phi_M be a basis for the space \mathcal{X}, then the approximate solution \hat{u} can be represented as \hat{u} = x_1 \phi_1 + \cdots + x_M \phi_M. Since \phi_1,\ldots,\phi_M form a basis of \mathcal{X}, to check that the Galerkin variational problem Eq. (13) holds for all v \in \mathcal{X} it is sufficient to check that it holds for v = \phi_1, v=\phi_2,\ldots,v=\phi_M.24An arbitrary v\in \mathcal{X} can be written as v = y_1 \phi_1 + \cdots + y_M\phi_M, so a(\hat{u},v) = a\left(\hat{u}, \sum_{j=1}^M y_j \phi_j \right) = \sum_{j=1}^M y_j a(\hat{u},\phi_j) = \sum_{j=1}^M y_j \ell(\phi_j) = \ell\left(\sum_{j=1}^M y_j \phi_j\right) = \ell(v). Thus, plugging in \hat{u} = \sum_{j=1}^M x_j \phi_j and v = \phi_i into Eq. (13), we get (using bilinearity of a)

(16)   \begin{equation*} a(\hat{u},v) = a\left(\sum_{j=1}^M x_j \phi_j, \phi_i\right) = \sum_{j=1}^M a(\phi_j,\phi_i) x_j = \ell(v) = \ell(\phi_i), \quad i =1,2,\ldots,M. \end{equation*}

If we define a_{ij} = a(\phi_j,\phi_i) and b_i = \ell(\phi_i), then this gives us a matrix equation Ax = b for the unknowns x_1,\ldots,x_M parametrizing \hat{u}. Thus, we can compute our Galerkin approximation by solving a linear system of equations.
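To make this recipe concrete, here is a minimal sketch (not a production finite element code) of the Galerkin procedure for the weak form Eq. (8), using piecewise-linear “hat” basis functions on a uniform grid. The particular right-hand side f, the grid size, and the simple one-point quadrature for the load vector are all illustrative choices, not part of the method itself.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

M = 200                         # number of interior nodes / hat basis functions
h = 1.0 / (M + 1)
nodes = np.linspace(h, 1 - h, M)

f = lambda x: np.pi**2 * np.sin(np.pi * x)    # exact solution is u(x) = sin(pi x)

# Stiffness matrix a_ij = \int phi_i' phi_j' dx: tridiagonal, entries 2/h and -1/h.
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(M, M), format="csr") / h

# Load vector b_i = \int f phi_i dx, approximated by one-point quadrature: h f(x_i).
b = h * f(nodes)

coeffs = spsolve(A, b)          # coefficients x_1, ..., x_M of the hat functions

# At the nodes, the Galerkin solution should be close to the exact solution.
print(np.max(np.abs(coeffs - np.sin(np.pi * nodes))))
```

Note that because each hat function overlaps only its two neighbors, the matrix A here is tridiagonal, a preview of the sparsity considerations discussed next.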

We’ve covered a lot of ground so let’s summarize. Galerkin approximation is a technique which allows us to approximately solve a large- or infinite-dimensional problem by searching for an approximate solution in a smaller finite-dimensional space \mathcal{X} of our choosing. This Galerkin approximation is the best approximate solution to our original problem in the a-norm. By choosing a basis \phi_1,\ldots,\phi_M for our approximation space \mathcal{X}, we reduce the problem of computing a Galerkin approximation to a linear system of equations.

Design of a Galerkin approximation scheme for a variational problem thus boils down to choosing the approximation space \mathcal{X} and a basis \phi_1,\ldots,\phi_M. Picking \mathcal{X} to be a space of piecewise polynomial functions (splines) gives the finite element method. Picking \mathcal{X} to be a space spanned by a collection of trigonometric functions gives a Fourier spectral method. One can use a space spanned by wavelets as well. The Galerkin framework is extremely general: give it a subspace \mathcal{X} and it will give you a linear system of equations to solve to give you the best approximate solution in \mathcal{X}.

Two design considerations factor into the choice of space \mathcal{X} and basis \phi_1,\ldots,\phi_M. First, one wants to pick a space \mathcal{X} in which the solution u almost lies. This is the rationale behind spectral methods. Smooth functions are very well-approximated by short truncated Fourier expansions, so, if the solution u is smooth, spectral methods will converge very quickly. Finite element methods, which often use low-order piecewise polynomial functions, converge much more slowly to a smooth u. The second design consideration is the ease of solving the system Ax = b resulting from the Galerkin approximation. If the basis functions \phi_1,\ldots,\phi_M are local in the sense that most pairs of basis functions \phi_i and \phi_j aren’t nonzero at the same point x (more formally, \phi_i and \phi_j have disjoint supports for most i and j), the system Ax = b will be sparse and thus usually much easier to solve. Traditional spectral methods usually result in harder-to-solve dense linear systems of equations.25There are clever ways of making spectral methods which lead to sparse matrices. Conversely, if one uses high-order piecewise polynomials in a finite element approximation, one can get convergence properties similar to a spectral method. These are called spectral element methods. It should be noted that both spectral and finite element methods lead to ill-conditioned matrices A, making integral equation-based approaches preferable if one needs high accuracy.26For example, only one researcher using a finite-element method was able to meet Trefethen’s challenge to solve the Poisson equation to eight digits of accuracy on an L-shaped domain (see Section 6 of this paper). Getting that solution required using a finite-element method of order 15! Integral equations, themselves, are often solved using Galerkin approximation, leading to so-called boundary element methods.
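As a point of contrast with the finite element sketch above, here is a spectral Galerkin sketch for the same model problem -u'' = f, u(0) = u(1) = 0, using the sine basis \phi_k(x) = \sin(k\pi x). For this particular constant-coefficient problem the sine basis even diagonalizes the bilinear form (a(\phi_j,\phi_k) = \int_0^1 \phi_j'(x)\phi_k'(x) \, dx equals k^2\pi^2/2 when j = k and zero otherwise), so the Galerkin system is trivial to solve; for variable-coefficient problems a spectral stiffness matrix is generally dense, as noted above. The right-hand side f and the number of modes are arbitrary choices.

```python
import numpy as np
from scipy.integrate import quad

M = 20
f = lambda x: np.exp(x)         # an arbitrary smooth right-hand side

# Load vector b_k = \int_0^1 f(x) sin(k pi x) dx, by numerical quadrature.
b = np.array([quad(lambda x, k=k: f(x) * np.sin(k * np.pi * x), 0, 1)[0]
              for k in range(1, M + 1)])

# The stiffness matrix is diagonal with entries k^2 pi^2 / 2, so "solving" is division.
coeffs = b / (np.arange(1, M + 1) ** 2 * np.pi**2 / 2)

u_hat = lambda x: np.sum(coeffs * np.sin(np.arange(1, M + 1) * np.pi * x))

# Compare against the exact solution u(x) = 1 + (e - 1) x - e^x at a sample point.
x0 = 0.3
print(abs(u_hat(x0) - (1 + (np.e - 1) * x0 - np.exp(x0))))   # a few digits of accuracy
```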

Upshot: Galerkin approximation is a powerful and extremely flexible methodology for approximately solving large- or infinite-dimensional problems by finding the best approximate solution in a smaller finite-dimensional subspace. To use a Galerkin approximation, one must convert their problem to a variational formulation and pick a basis for the approximation space. After doing this, computing the Galerkin approximation reduces down to solving a system of linear equations with dimension equal to the dimension of the approximation space.

Big Ideas in Applied Math: Sparse Matrices

Sparse matrices are an indispensable tool for anyone in computational science. I expect there are a very large number of simulation programs written for scientific research across the country which could be made at least ten to a hundred times faster just by using sparse matrices! In this post, we’ll give a brief overview of what a sparse matrix is and how we can use sparse matrices to solve problems fast.

A matrix is sparse if most of its entries are zero. There is no precise threshold for what “most” means; Kolda suggests that a matrix have at least 90% of its entries be zero for it to be considered sparse. The number of nonzero entries in a sparse matrix A is denoted by \operatorname{nnz}(A). A matrix that is not sparse is said to be dense.

Sparse matrices are truly everywhere. They occur in finite difference, finite element, and finite volume discretizations of partial differential equations. They occur in power systems. They occur in signal processing. They occur in social networks. They occur in intermediate stages in computations with dense rank-structured matrices. They occur in data analysis (along with their higher-order tensor cousins).

Why are sparse matrices so common? In a word, locality. If the ijth entry a_{ij} of a matrix A is nonzero, then this means that row i and column j are related in some way to each other according to the matrix A. In many situations, a “thing” is only related to a handful of other “things”; in heat diffusion, for example, the temperature at a point may only depend on the temperatures of nearby points. Thus, if such a locality assumption holds, every row will only have a small number of nonzero entries and the matrix overall will be sparse.

Storing and Multiplying Sparse Matrices

A sparse matrix can be stored efficiently by only storing its nonzero entries, along with the row and column in which these entries occur. By doing this, a sparse matrix can be stored in \mathcal{O}(\operatorname{nnz}(A)) space rather than the standard \mathcal{O}(N^2) for an N\times N matrix A.1Here, \mathcal{O}(\cdot) refers to big-O notation. For the efficiency of many algorithms, it will be very beneficial to store the entries row-by-row or column-by-column using compressed sparse row and column (CSR and CSC) formats; most established scientific programming software environments support sparse matrices stored in one or both of these formats. For efficiency, it is best to enumerate all of the nonzero entries for the entire sparse matrix and then form the sparse matrix using a compressed format all at once. Adding additional entries one at a time to a sparse matrix in a compressed format requires reshuffling the entire data structure for each new nonzero entry.
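As a small illustration (using SciPy, one of the environments with built-in sparse support), the sketch below collects all nonzero entries as (row, column, value) triplets and then builds a compressed-format matrix in one shot; the particular matrix, a 1D second-difference matrix, is just an example.

```python
import scipy.sparse as sp

N = 100_000
rows, cols, vals = [], [], []
for i in range(N):
    rows.append(i); cols.append(i); vals.append(2.0)
    if i + 1 < N:
        rows.append(i); cols.append(i + 1); vals.append(-1.0)
        rows.append(i + 1); cols.append(i); vals.append(-1.0)

# Build once from the triplet (COO) representation, then convert to CSR.
A = sp.coo_matrix((vals, (rows, cols)), shape=(N, N)).tocsr()
print(A.nnz)        # about 3N stored entries, rather than N^2
```

Inserting entries one at a time into an already-compressed matrix, by contrast, triggers exactly the reshuffling described above.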

There exist straightforward algorithms to multiply a sparse matrix A stored in a compressed format with a vector x to compute the product b = Ax. Initialize the vector b to zero and iterate over the nonzero entries a_{ij} of A, each time adding a_{ij}x_j to b_i. It is easy to see this algorithm runs in \mathcal{O}(\operatorname{nnz}(A)) time.2More precisely, this algorithm takes \mathcal{O}(\operatorname{nnz}(A) + N) time since it requires \mathcal{O}(N) operations to initialize the vector b even if A has no nonzero entries. We shall ignore this subtlety in the remainder of this article and assume that \operatorname{nnz}(A) = \Omega(N), which is true of most sparse matrices occurring in practice. The fact that sparse matrix-vector products can be computed quickly makes so-called Krylov subspace iterative methods popular for solving linear algebraic problems involving sparse matrices, as these techniques only interact with the matrix A by computing matrix-vector products x \mapsto Ax (or matrix-transpose-vector products y \mapsto A^\top y).
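Here is a sketch of that matrix-vector product written directly against the CSR arrays (SciPy exposes them as data, indices, and indptr); in practice one would simply call A @ x, but the loop makes the \mathcal{O}(\operatorname{nnz}(A)) cost explicit.

```python
import numpy as np
import scipy.sparse as sp

def csr_matvec(A, x):
    """Compute b = A x by touching only the stored nonzeros of a CSR matrix."""
    b = np.zeros(A.shape[0])
    for i in range(A.shape[0]):
        # The nonzeros of row i occupy data[indptr[i]:indptr[i+1]].
        for idx in range(A.indptr[i], A.indptr[i + 1]):
            b[i] += A.data[idx] * x[A.indices[idx]]
    return b

A = sp.random(2000, 2000, density=0.001, format="csr", random_state=0)
x = np.ones(2000)
print(np.allclose(csr_matvec(A, x), A @ x))    # True
```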

Lest the reader think that every operation with a sparse matrix is necessarily fast, the product of two sparse matrices A and B need not be sparse and the time complexity need not be \mathcal{O}(\operatorname{nnz}(A) + \operatorname{nnz}(B)). A counterexample is

(1)   \begin{equation*} A = B^\top = \begin{bmatrix} a_1 & 0 & \cdots & 0 \\ a_2 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_N & 0 & \cdots & 0 \end{bmatrix} \in \mathbb{R}^{N\times N} \end{equation*}

for a_1,\ldots,a_N \ne 0. We have that \operatorname{nnz}(A) = \operatorname{nnz}(B) = N but

(2)   \begin{equation*} AB = \begin{bmatrix} a_1^2 & a_1 a_2 & \cdots & a_1a_N \\ a_2a_1 & a_2^2 & \cdots & a_2a_N \\ \vdots & \vdots & \ddots & \vdots \\ a_Na_1 & a_Na_2 & \cdots & a_N^2 \end{bmatrix} \end{equation*}

which has \operatorname{nnz}(AB) = N^2 nonzero elements and requires \mathcal{O}(N^2) operations to compute. However, if one does the multiplication in the other order, one has \operatorname{nnz}(BA) = 1 and the multiplication can be done in \mathcal{O}(N) operations. Thus, some sparse matrices can be multiplied fast and others can’t. This phenomenon of different speeds for different sparse matrices is very much also true for solving sparse linear systems of equations.
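The counterexample is easy to reproduce; the sketch below builds the single-dense-column matrix A (with arbitrary values a_i = i) and its transpose, and compares the nonzero counts of the two products.

```python
import numpy as np
import scipy.sparse as sp

N = 1000
a = np.arange(1, N + 1, dtype=float)        # arbitrary nonzero values a_1, ..., a_N
A = sp.csr_matrix((a, (np.arange(N), np.zeros(N, dtype=int))), shape=(N, N))
B = A.T.tocsr()

print(A.nnz, B.nnz)      # N and N
print((A @ B).nnz)       # N^2: the product is completely dense
print((B @ A).nnz)       # 1: the other order is as sparse as can be
```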

Solving Sparse Linear Systems

The question of how to solve a system of linear equations Ax = b where A is sparse is a very deep problem with fascinating connections to graph theory. For this article, we shall concern ourselves with so-called sparse direct methods, which solve Ax = b by means of computing a factorization of the sparse matrix A. These methods produce an exact solution to the system Ax = b if all computations are performed exactly and are generally considered more robust than inexact and iterative methods. As we shall see, there are fundamental limits on the speed of certain sparse direct methods, which make iterative methods very appealing for some problems.

Note from the outset that our presentation will be on illustrating the big ideas rather than presenting the careful step-by-step details needed to actually code a sparse direct method yourself. An excellent reference for the latter is Tim Davis’ wonderful book Direct Methods for Sparse Linear Systems.

Let us begin by reviewing how LU factorization works for general matrices. Suppose that the (1,1) entry of A is nonzero. Then, LU factorization proceeds by subtracting scaled multiples of the first row from the other rows to zero out the first column. If one keeps track of these scalings, then one can write this process as a matrix factorization, which we may demonstrate pictorially as

(3)   \begin{equation*} \underbrace{\begin{bmatrix} * & * & \cdots & * \\ * & * & \cdots & * \\ \vdots & \vdots & \ddots & \vdots \\ * & * & \cdots & *\end{bmatrix}}_{=A} = \begin{bmatrix} 1 & & &\\ * & 1 & & \\ \vdots & & \ddots & \\ * & & & 1\end{bmatrix} \begin{bmatrix} * & * & \cdots & * \\ & * & \cdots & * \\ & \vdots & \ddots & \vdots \\ & * & \cdots & *\end{bmatrix}. \end{equation*}

Here, *‘s denote nonzero entries and blanks denote zero entries. We then repeat the process on the (N-1)\times (N-1) submatrix in the bottom right (the so-called Schur complement). Continuing in this way, we eventually end up with a complete LU factorization

(4)   \begin{equation*} \underbrace{\begin{bmatrix} * & * & \cdots & * \\ * & * & \cdots & * \\ \vdots & \vdots & \ddots & \vdots \\ * & * & \cdots & *\end{bmatrix}}_{=A} = \underbrace{\begin{bmatrix} 1 & & &\\ * & 1 & & \\ \vdots & \vdots & \ddots & \\ * &* &\cdots & 1\end{bmatrix}}_{=L} \underbrace{\begin{bmatrix} * & * & \cdots & * \\ & * & \cdots & * \\ &  & \ddots & \vdots \\ & & & *\end{bmatrix}}_{=U}. \end{equation*}

In the case that A is symmetric positive definite (SPD), one has that U = DL^\top for D a diagonal matrix consisting of the diagonal entries of U. This factorization A = LDL^\top is a Cholesky factorization of A.3Often, the Cholesky factorization is written as A = TT^\top for T=LD^{1/2} or A = R^\top R for R = D^{-1/2}U. These different forms all contain the same basic information, so we shall stick with the LDL^\top formulation in this post. For general non-SPD matrices, one needs to incorporate partial pivoting for Gaussian elimination to produce accurate results.4See the excellent monograph Accuracy and Stability of Numerical Algorithms for a comprehensive treatment of this topic.

Let’s try the same procedure for a sparse matrix. Consider a sparse matrix with the following sparsity pattern:

(5)   \begin{equation*} A = \begin{bmatrix} * & * & * & & * \\ * & * & & * & \\ * & & * & & \\ & * & & * & \\ * & & &  & * \end{bmatrix}. \end{equation*}

When we eliminate the (1,1) entry, we get the following factorization:

(6)   \begin{equation*} \underbrace{\begin{bmatrix} * & * & * & & * \\ * & * & & * & \\ * & & * & & \\ & * & & * & \\ * & & &  & * \end{bmatrix}}_{=A} = \begin{bmatrix} 1 & & & & \\ * & 1& & &  \\ * & & 1 & & \\  & & & 1 & \\ * & & &  & 1 \end{bmatrix} \begin{bmatrix} * & * & * & & * \\ & * & \bullet & * & \bullet \\ & \bullet & * & & \bullet \\ & * & & * & \\ & \bullet & \bullet &  & * \end{bmatrix} \end{equation*}

Note that the Schur complement has new additional nonzero entries (marked with a \bullet) not in the original sparse matrix A. The Schur complement of A is denser than A was; there are new fill-in entries. The worst-case scenario for fill-in is the arrowhead matrix:

(7)   \begin{equation*} \underbrace{\begin{bmatrix} * & * & * & \cdots & * \\ * & * & & & \\ * & & * & & \\ \vdots & & & \ddots & \\ * & & &  & * \end{bmatrix}}_{=A} = \begin{bmatrix} 1 & & & & \\ * & 1& & &  \\ * & & 1 & & \\  \vdots & & & \ddots & \\ * & & & & 1 \end{bmatrix} \begin{bmatrix} * & * & * & * & * \\ & * & \bullet & \cdots & \bullet \\ & \bullet & * & \cdots & \bullet \\ & \vdots & \vdots & \ddots & \vdots \\ & \bullet & \bullet &  \cdots & * \end{bmatrix} \end{equation*}

After one step of Gaussian elimination, we went from a matrix with \mathcal{O}(N) nonzeros to a fully dense Schur complement! However, the arrowhead matrix also demonstrates a promising strategy. Simply construct a permutation matrix which reorders the first entry to be the last5For instance, the circular shift permutation P = \begin{bmatrix} & 1 & & & \\ & & 1 & & \\ & & & \ddots & \\ & & & & 1 \\ 1\end{bmatrix}. and then perform Gaussian elimination on the symmetrically permuted matrix PAP^\top instead. In fact, the entire LU factorization can be computed without fill-in:

(8)   \begin{equation*} \underbrace{\begin{bmatrix} * & & & & * \\ & * & & & * \\ & & * & & * \\  & & & \ddots  & \vdots \\ * & * & * & \cdots & * \end{bmatrix}}_{=PAP^\top} = \begin{bmatrix} 1 & & & & \\ & 1& & &  \\ & & 1 & & \\  & & & \ddots & \\ * & * & * & \cdots & 1 \end{bmatrix} \begin{bmatrix} * & &  & & * \\ & * &  & & * \\ & & * &  & * \\ &  & & \ddots & \vdots \\ &  &  &  & * \end{bmatrix}. \end{equation*}

This example shows the tremendous importance of reordering of the rows and columns when computing a sparse LU factorization.
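The effect is easy to see numerically. The sketch below builds a small SPD arrowhead matrix (the values are arbitrary; only the sparsity pattern matters), computes a dense Cholesky factor in the natural ordering and in the reversed ordering, and counts the nonzeros in each factor.

```python
import numpy as np

N = 200
A = N * np.eye(N)        # an SPD arrowhead matrix: heavy diagonal plus a dense
A[0, :] = 1.0            # first row and column
A[:, 0] = 1.0
A[0, 0] = N

L_natural = np.linalg.cholesky(A)                       # dense row/column first: fill!
perm = np.arange(N)[::-1]                               # move the dense row/column last
L_reordered = np.linalg.cholesky(A[np.ix_(perm, perm)]) # no fill

nnz = lambda L: np.count_nonzero(np.abs(L) > 1e-12)
print(nnz(L_natural), nnz(L_reordered))   # roughly N^2/2 versus roughly 2N
```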

The Best Reordering

As mentioned above, when computing an LU factorization of a dense matrix, one generally has to reorder the rows (and/or columns) of the matrix to compute the solution accurately. Thus, when computing the LU factorization of a sparse matrix, one has to balance the need to reorder for accuracy and to reorder to reduce fill-in. For these reasons, for the remainder of this post, we shall focus on computing Cholesky factorizations of SPD sparse matrices, where reordering for accuracy is not necessary.6For ill-conditioned and positive semi-definite matrices, one may want to reorder a Cholesky factorization so the result is rank-revealing. This review article has a good discussion of pivoted Cholesky factorization. For most applications, one can successfully compute an accurate Cholesky factorization without any specific accuracy-focused reordering strategy. Since we want the matrix to remain SPD, we must restrict ourselves to symmetric reordering strategies where A is reordered to PAP^\top where P is a permutation matrix.

Our question is deceptively simple: what reordering produces the least fill-in? In matrix language, what permutation P minimizes \operatorname{nnz}(L) where LDL^\top = PAP^\top is the Cholesky factorization of PAP^\top?

Note that, assuming no entries in the Gaussian elimination process exactly cancel, then the Cholesky factorization depends only on the sparsity pattern of A (the locations of the zeros and nonzeros) and not on the actual numeric values of A‘s entries. This sparsity structure is naturally represented by a graph \mathcal{G}(A) whose nodes are the indices \{1,\ldots,N\} with an edge between i \ne j if, and only if, a_{ij} \ne 0.

Now let’s see what happens when we do Gaussian elimination from a graph point-of-view. When we eliminate the (1,1) entry of the matrix, this results in all nodes of the graph adjacent to node 1 becoming connected to each other.7Graph theoretically, we add a clique containing the nodes adjacent to node 1.

This shows why the arrowhead example is so bad. By eliminating a vertex connected to every other node in the graph, the eliminated graph becomes a complete graph.

Reordering the matrix corresponds to choosing in what order the vertices of the graph are eliminated. Choosing the elimination order is then a puzzle game: eliminate all the vertices of the graph in the order that produces the fewest fill-in edges.8This “graph game” formulation of sparse Gaussian elimination is based on how I learned it from John Gilbert. His slides are an excellent resource for all things sparse matrices!

Finding the best elimination ordering for a sparse matrix (graph) is a good news/bad news situation. For the good news, many graphs possess a perfect elimination ordering, in which no fill-in is produced at all. There is a simple algorithm to determine whether a graph (sparse matrix) possesses a perfect elimination ordering and if so, what it is.9The algorithm is basically just a breadth-first search. Some important classes of graphs can be eliminated perfectly (for instance, trees). More generally, the class of all graphs which can be eliminated perfectly is precisely the set of chordal graphs, which are well-studied in graph theory.

Now for the bad news. The problem of finding the best elimination ordering (with the least fill-in) for a graph is NP-Hard. This means, assuming the widely conjectured result that {\rm P} \ne {\rm NP}, that finding the best elimination ordering is a harder computational problem than simply performing Gaussian elimination in an arbitrary ordering at the worst-case \mathcal{O}(N^3) complexity! One should not be too pessimistic about this result, however, since (assuming {\rm P} \ne {\rm NP}) all it says is that there exists no polynomial time algorithm guaranteed to produce the absolutely best possible elimination ordering when presented with any graph (sparse matrix). If one is willing to give up on any one of these requirements (polynomial time, guaranteed optimality, arbitrary input graphs), further progress may be possible. For instance, there exist several good heuristics which find reasonably good elimination orderings for graphs (sparse matrices) in linear \mathcal{O}(\operatorname{nnz}(A)) (or nearly linear) time.
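Here is a quick look at one such heuristic through SciPy's SuperLU interface (an LU factorization rather than Cholesky, but the fill-in story is the same in spirit): we factor a 2D grid Laplacian with the natural ordering and with the COLAMD fill-reducing ordering and compare the number of nonzeros in the computed factors.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 64                                  # the grid is n x n, so N = n^2 = 4096
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = sp.kronsum(T, T, format="csc")      # 5-point Laplacian on the 2D grid

for ordering in ["NATURAL", "COLAMD"]:
    lu = splu(A, permc_spec=ordering)
    print(ordering, lu.L.nnz + lu.U.nnz)    # the heuristic ordering has far less fill
```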

Can Sparse Matrices be Eliminated in Linear Time?

Let us think about the best reordering question in a different way. So far, we have asked the question “Can we find the best ordering for a sparse matrix?” But another question is equally important: “How efficiently can we solve a sparse matrix, even with the best possible ordering?”

One might optimistically hope that every sparse matrix possesses an elimination ordering such that its Cholesky factorization can be computed in linear time (in the number of nonzeros), meaning that the amount of time needed to solve Ax = b is proportional to the amount of data needed to store the sparse matrix A.

When one tests a proposition like this, one should consider the extreme cases. If the matrix A is dense, then it requires \mathcal{O}(N^3) operations to do Gaussian elimination,10This is neglecting the possibility of acceleration by Strassen-type fast matrix multiplication algorithms. For simplicity, we shall ignore these fast multiplication techniques for the remainder of this post and assume dense Ax = b can be solved no faster than \mathcal{O}(N^3) operations. but A only has \operatorname{nnz}(A) = N^2 nonzero entries. Thus, our proposition cannot hold in unmodified form.

An even more concerning counterexample is given by a matrix A whose graph \mathcal{G}(A) is a \mathcal{O}(\sqrt{N}) \times \mathcal{O}(\sqrt{N}) 2D grid graph.

Sparse matrices with this sparsity pattern (or related ones) appear all the time in discretized partial differential equations in two dimensions. Moreover, they are truly sparse, only having \operatorname{nnz}(A) = \mathcal{O}(N) nonzero entries. Unfortunately, no linear time elimination ordering exists. We have the following theorem:

Theorem: Let A be a sparse matrix with \mathcal{G}(A) being a \sqrt{N}\times \sqrt{N} 2D grid graph. In any elimination ordering, the Cholesky factorization PAP^\top = LDL^\top requires \Omega(N^{3/2}) operations and satisfies \operatorname{nnz}(L)= \Omega(N\log N).11Big-Omega notation is a cousin of Big-O notation. One should read f(N) = \Omega(g(N)) as “f(N) is no less than a constant multiple of g(N), asymptotically”.

The proof is contained in Theorems 10 and 11 (and the ensuing paragraph) of the classic paper by Lipton, Rose, and Tarjan. Natural generalizations to d-dimensional grid graphs give bounds of \Omega(N^{3(d-1)/d}) time and \operatorname{nnz}(L)=\Omega(N^{2(d-1)/d}) for d > 2. In particular, for 2D finite difference and finite element discretizations, sparse Cholesky factorization takes \Omega(N^{3/2}) operations and produces a Cholesky factor with \operatorname{nnz}(L)= \Omega(N\log N) in the best possible ordering. In 3D, sparse Cholesky factorization takes \Omega(N^{2}) operations and produces a Cholesky factor with \operatorname{nnz}(L)= \Omega(N^{4/3}) in the best possible ordering.

Fortunately, at least these complexity bounds are attainable: there is an ordering which produces a sparse Cholesky factorization with PAP^\top = LDL^\top requiring \Theta(N^{3/2}) operations and with \operatorname{nnz}(L)= \Theta(N\log N) nonzero entries in the Cholesky factor.12Big-Theta notation just means f(N) = \Theta(g(N)) if f(N) = \mathcal{O}(g(N)) and f(N) = \Omega(g(N)) One such asymptotically optimal ordering is the nested dissection ordering, one of the heuristics alluded to in the previous section. The nested dissection ordering proceeds as follows:

  1. Find a separator S consisting of a small number of vertices in the graph \mathcal{G}(A) such that when S is removed from \mathcal{G}(A), \mathcal{G}(A) is broken into a small number of edge-disjoint and roughly evenly sized pieces \mathcal{G}_1,\ldots,\mathcal{G}_k.13In particular, \mathcal{G}(A) is the disjoint union \mathcal{G}(A) = S \cup \mathcal{G}_1 \cup \cdots \cup \mathcal{G}_k and there are no edges between \mathcal{G}_i and \mathcal{G}_j for i\ne j.
  2. Recursively use nested dissection to eliminate each component \mathcal{G}_1,\ldots, \mathcal{G}_k individually.
  3. Eliminate S in any order.

For example, for the 2D grid graph, if we choose S to be a cross through the center of the 2D grid graph, we have a separator of size |S| = \Theta(\sqrt{N}) dividing the graph into 4 roughly \sqrt{N}/2\times \sqrt{N}/2 pieces.

Let us give a brief analysis of this nested dissection ordering. First, consider the sparsity of the Cholesky factor \operatorname{nnz}(L). Let S(N) denote the number of nonzeros in L for an elimination of the \sqrt{N}\times \sqrt{N} 2D grid graph using the nested dissection ordering. Then step 2 of nested dissection requires us to recursively eliminate four \sqrt{N/4} \times \sqrt{N/4} 2D grid graphs. After doing this, for step 3, all of the vertices of the separator might be connected to each other, so the separator graph will potentially have as many as \mathcal{O}(|S|^2) = \mathcal{O}(N) edges, which result in nonzero entries in L. Thus, combining the fill-in from both steps, we get

(9)   \begin{equation*} S(N) = 4S\left(\frac{N}{4}\right) + \mathcal{O}(N). \end{equation*}

Solving this recurrence using the master theorem for recurrences gives \operatorname{nnz}(L) = S(N) = \mathcal{O}(N\log N). If one instead wants the time T(N) required to compute the Cholesky factorization, note that for step 3, in the worst case, all of the vertices of the separator might be connected to each other, leading to a dense \sqrt{N}\times \sqrt{N} matrix. Since factoring a dense \sqrt{N}\times \sqrt{N} matrix requires \mathcal{O}((\sqrt{N})^3) = \mathcal{O}(N^{3/2}) operations, we get the recurrence

(10)   \begin{equation*} T(N) = 4T\left(\frac{N}{4}\right) + \mathcal{O}(N^{3/2}), \end{equation*}

which solves to T(N) = \mathcal{O}(N^{3/2}).
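These two recurrences are easy to check numerically; the sketch below iterates them (with arbitrary constants and base cases, which only affect the leading constant) and confirms the \mathcal{O}(N\log N) and \mathcal{O}(N^{3/2}) growth rates.

```python
import numpy as np

def S(N):                     # fill recurrence: S(N) = 4 S(N/4) + c N
    return 1.0 if N <= 1 else 4 * S(N / 4) + N

def T(N):                     # work recurrence: T(N) = 4 T(N/4) + c N^{3/2}
    return 1.0 if N <= 1 else 4 * T(N / 4) + N ** 1.5

for N in [4**6, 4**8, 4**10]:
    # The ratios stay bounded, consistent with S(N) ~ N log N and T(N) ~ N^{3/2}.
    print(N, S(N) / (N * np.log(N)), T(N) / N ** 1.5)
```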

Conclusions

As we’ve seen, sparse direct methods (as exemplified here by sparse Cholesky) possess fundamental scalability challenges for solving large problems. For the important class of 2D and 3D discretized partial differential equations, the time to solve Ax = b scales like \Theta(N^{3/2}) and \Theta(N^2), respectively. For truly large-scale problems, these limitations may be prohibitive for using such methods.

This really is the beginning of the story for sparse matrices, however, not the end. The scalability challenges of classical sparse direct methods have spawned many exciting alternative approaches, each of which combats the scalability challenges of sparse direct methods for a different class of sparse matrices in a different way.

Upshot: Sparse matrices occur everywhere in applied mathematics, and many operations on them can be done very fast. However, the speed of computing an LU factorization of a sparse matrix depends significantly on the arrangement of its nonzero entries. Many sparse matrices can be factored quickly, but some require significant time to factor in any reordering.

Big Ideas in Applied Math: Smoothness and Degree of Approximation

At its core, computational mathematics is about representing the infinite by the finite. Even attempting to store a single arbitrary real number requires an infinite amount of memory to store its infinitely many potentially nonrepeating digits.1It is a well-known result that a real number has a repeating decimal expansion if, and only if, it is rational. When dealing with problems in calculus, however, the problem is even more severe as we want to compute with functions on the real line. In effect, a function is an uncountably long list of real numbers, one for each value of the function’s domain. We certainly cannot store an infinite list of numbers with infinitely many digits on a finite computer!

Thus, our only hope to compute with functions is to devise some sort of finite representation for them. The most natural representation is to describe a function by a list of real numbers (and then store approximations of these numbers on our computer). For instance, we may approximate the function by a polynomial a_1 + a_2 x + \cdots + a_{M} x^{M-1} of degree M-1 and then store the M coefficients a_1,\ldots,a_{M}. From this list of numbers, we can then reconstitute an approximate version of the function. For instance, in our polynomial example, our approximate version of the function is just a_1 + a_2 x + \cdots + a_{M} x^{M-1}. Naturally, there is a tradeoff between the length of our list of numbers and how accurate our approximation is. This post is about that tradeoff.

The big picture idea is that the “smoother” a function is, the easier it will be to approximate it with a small number of parameters. Informally, we have the following rule of thumb: if a function f on a one-dimensional domain possesses s nice derivatives, then f can be approximated by an M-parameter approximation with error decaying at least as fast as 1/M^s. This basic result appears in many variants in approximation theory with different precise definitions of the terms “s nice derivatives” and “M-parameter approximation”. Let us work out the details of the approximation problem in one concrete setting using Fourier series.

Approximation by Fourier Series

Consider a complex-valued and 2\pi-periodic function f defined on the real line. (Note that, by a standard transformation, there are close connections between approximation of 2\pi-periodic functions on the whole real line and functions defined on a compact interval [a,b].2Specifically, suppose that h is a function defined on [a,b]. Then define a function g on [-1,1] by g(x) = h(((b-a)x + (b+a))/2). This linear rescaling of the domain is very simple and easy to understand. Now define a 2\pi-periodic function f on \mathbb{R} by f(\theta) = g(\cos(\theta)). There are very close connections between approximation of f and g. For example, Fourier cosine expansions of f are equivalent to Chebyshev polynomial expansions of g. For more on this subject, Trefethen’s Approximation Theory and Approximation Practice is an excellent reference.) If f is square-integrable, then f possesses a Fourier expansion

(1)   \begin{equation*} f(\theta) = \sum_{k=-\infty}^\infty \hat{f}_k e^{{\rm i} k\theta}, \quad \hat{f}_k = \frac{1}{2\pi} \int_{-\pi}^\pi f(\theta) e^{-{\rm i}k\theta} \, d\theta. \end{equation*}

The infinite series converges in the L^2 sense, meaning that \lim_{m\to\infty} \|f - f_m\|_{L^2} = 0, where f_m is the truncated Fourier series f_m(\theta) = \sum_{k=-m}^m \hat{f}_k e^{{\rm i} k\theta} and \|\cdot\|_{L^2} is the L^2 norm \|f\|_{L^2} = \sqrt{\int_{-\pi}^\pi |f(\theta)|^2 \, d\theta}.3One may show with considerable analysis that the Fourier series converges in other senses, for example almost everywhere convergence. We also have the Plancherel theorem, which states that

(2)   \begin{equation*} \|f\|_{L^2}^2 = \int_{-\pi}^\pi |f(\theta)|^2\, d\theta = 2\pi \sum_{k=-\infty}^{\infty} |\hat{f}_k|^2. \end{equation*}

Note that the convergence of the Fourier series is fundamentally a statement about approximation. The fact that the Fourier series converges means that the truncated Fourier series f_m of 2m+1 terms acts as an arbitrarily close approximation to f (measured with the L^2 norm). However, the number M=2m+1 of terms we need to store might be quite large if a large value of m is needed for \|f - f_m\|_{L^2} to be small. As we shall soon see, though, if the function f is “smooth”, \|f - f_m\|_{L^2} will be small for even moderate values of m.

Smoothness

Often, in analysis, we refer to a function as smooth when it possesses derivatives of all orders. In one dimension, this means that the jth derivative f^{(j)} exists for every integer j \ge 0. In this post, we shall speak of smoothness as a more graded notion: loosely, a function g is smoother than a function f if g possesses more derivatives than f or the magnitude of g‘s derivatives are smaller. This conception of smoothness accords more with the plain-English definition of smoothness: the graph of a very mildly varying function g with a discontinuity in its 33rd derivative looks much smoother to the human eye than the graph of a highly oscillatory and jagged function f that nonetheless possesses derivatives of all orders.4One might refer to the precise concept of possessing derivatives of all orders as C^\infty smoothness.

For a function defined in terms of a Fourier series, it is natural to compute its derivative by formally differentiating the Fourier series term-by-term:

(3)   \begin{equation*} f'(\theta) = \sum_{k=-\infty}^\infty ({\rm i}k) \hat{f}_k e^{{\rm i} k\theta}. \end{equation*}

This formal Fourier series converges if, and only if, the L^2 norm of this putative derivative f', as computed with the Plancherel theorem, is finite: \|f'\|_{L^2}^2 = 2\pi \sum_{k=-\infty}^\infty |k|^2 |\hat{f}_k|^2 < \infty. This “derivative” f' may not be a derivative of f in the classical sense. For instance, using this definition, the absolute value function g(x) = |x| possesses a derivative g'(x) = +1 for x > 0 and g'(x) = -1 for x < 0. For the derivative of f to exist in the sense of Eq. (3), f need not be differentiable at every point, but the derivative, defined wherever it exists, must be a square-integrable function. We shall call the derivative f' given by Eq. (3) a weak derivative of f.

If a function f has s square-integrable weak derivatives f^{(j)}(\theta) = \sum_{k=-\infty}^\infty ({\rm i}k)^j \hat{f}_k e^{{\rm i}k\theta} for 1\le j \le s, then we say that f belongs to the Sobolev space H^s. The Sobolev space H^s is equipped with the norm

(4)   \begin{equation*} \|f\|_{H^s} = \sqrt{\sum_{j=0}^s \|f^{(j)}\|_{L^2}^2}. \end{equation*}

The Sobolev norm \|\cdot\|_{H^s} is a quantitative measure of qualitative smoothness. The smaller the Sobolev norm \|f\|_{H^s} of f, the smaller the derivatives of f are. As we will see, we can use this to bound the approximation error.

Smoothness and Degree of Approximation

If f is approximated by f_m, the error of approximation is given by

(5)   \begin{equation*} f(\theta) - f_m(\theta) = \sum_{|k|>m} \hat{f}_k e^{{\rm i}k\theta}, \quad \|f - f_m\|_{L^2} = \sqrt{ 2\pi \sum_{|k|>m} |\hat{f}_k|^2 }. \end{equation*}

Suppose that f \in H^s (that is, f has s square integrable derivatives). Then we may deduce the inequality

(6)   \begin{equation*} \begin{split} \|f - f_m\|_{L^2} &= \sqrt{ 2\pi \sum_{|k|>m} |\hat{f}_k|^2 } \\ &= \sqrt{ 2\pi \sum_{|k|>m} |k|^{-2s}|k|^{2s}|\hat{f}_k|^2 }\\ &\le m^{-s} \sqrt{ 2\pi \sum_{|k|>m} |k|^{2s}|\hat{f}_k|^2 } \\ &\le m^{-s}\|f\|_{H^s}. \end{split} \end{equation*}

The first “\le” follows from the fact that |k|^{-2s} < m^{-2s} for |k|>m; the second follows since 2\pi \sum_{|k|>m} |k|^{2s}|\hat{f}_k|^2 \le \|f^{(s)}\|_{L^2}^2 \le \|f\|_{H^s}^2. This very important result is a precise instantiation of our rule of thumb from earlier: if f possesses s nice (i.e. square-integrable) derivatives, then the (L^2) approximation error for an M=2m+1-term Fourier approximation decays at least as fast as 1/M^s.
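To make this concrete, here is a minimal numerical sketch (in Python with NumPy; not from the original discussion) that checks the bound of Eq. (6) for the 2\pi-periodic triangle wave f(\theta) = |\theta| on [-\pi,\pi), which has s = 1 square-integrable weak derivative. The sample count and the values of m are arbitrary choices for illustration.

```python
import numpy as np

# Check the bound ||f - f_m||_{L^2} <= m^{-s} ||f||_{H^s}  (Eq. (6), with s = 1)
# for the 2*pi-periodic triangle wave f(theta) = |theta| on [-pi, pi),
# whose weak derivative f'(theta) = sign(theta) is square-integrable.

N = 2**16                                        # number of equispaced samples (arbitrary, large)
theta = -np.pi + 2 * np.pi * np.arange(N) / N
f = np.abs(theta)

# Fourier coefficients f_hat_k (up to an irrelevant phase): trapezoidal rule = FFT / N.
fhat = np.fft.fft(f) / N
k = np.fft.fftfreq(N, d=1.0 / N)                 # integer wavenumbers -N/2, ..., N/2 - 1

# Sobolev norm via Plancherel: ||f||_{H^1}^2 = 2*pi * sum_k (1 + k^2) |f_hat_k|^2.
H1_norm = np.sqrt(2 * np.pi * np.sum((1 + k**2) * np.abs(fhat) ** 2))

for m in [4, 16, 64, 256, 1024]:
    tail = np.abs(k) > m                         # modes discarded by the truncation f_m
    error = np.sqrt(2 * np.pi * np.sum(np.abs(fhat[tail]) ** 2))   # ||f - f_m||_{L^2}, Eq. (5)
    print(f"m = {m:5d}   error = {error:.2e}   bound = {H1_norm / m:.2e}")
```

The printed errors sit below the bound m^{-1}\|f\|_{H^1}; for this particular f they decay like m^{-3/2}, faster than the guaranteed m^{-1} rate, which is consistent with Eq. (6) being only an upper bound on the error.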

Higher Dimensions

The results for one dimension can easily be extended to functions f defined on d-dimensional space which are 2\pi-periodic in every argument.5e.g. f(\theta_1,\ldots, \theta_j+2\pi,\ldots,\theta_d) = f(\theta_1,\ldots,\theta_j,\ldots,\theta_d) for every 1\le j \le d. For physics-based scientific simulation, we are often interested in d=2 or d=3, but for more modern problems in data science, we might be interested in very large dimensions d.

Letting \mathbb{Z}^d denote the set of all d-tuples of integers, one can show that one has the d-dimensional Fourier series

(7)   \begin{equation*} f(\theta) = \sum_{k \in \mathbb{Z}^d} \hat{f}_k e^{{\rm i}k\cdot \theta}, \quad \hat{f}_k = \frac{1}{(2\pi)^d} \int_{[-\pi,\pi]^d} f(\theta) e^{-{\rm i}k\cdot \theta} \, d\theta. \end{equation*}

Here, we let k\cdot \theta denote the Euclidean inner product of the d-dimensional vectors k and \theta, k\cdot \theta = k_1\theta_1 + \cdots + k_d\theta_d. A natural generalization of the Plancherel theorem holds as well. Let \max |k| denote the maximum of |k_1|,\ldots,|k_d|. Then, we have the truncated Fourier series f_m(\theta) = \sum_{\max |k| \le m} \hat{f}_k e^{{\rm i}k\cdot \theta}. Using the same calculations as in the previous section, we deduce a very similar approximation property

(8)   \begin{equation*} \|f - f_m\|_{L^2} = \sqrt{ (2\pi)^d \sum_{\max|k|>m} |\hat{f}_k|^2 }  \le m^{-s}\|f\|_{H^s}. \end{equation*}

There’s a pretty big catch though. The approximate function f_m possesses M = (2m+1)^d terms! In order to include each of the first m Fourier modes in each of d dimensions, the number of terms M in our Fourier approximation must grow exponentially in d! In particular, the approximation error satisfies a bound

(9)   \begin{equation*} \|f - f_m\|_{L^2}  \le \left(\frac{M^{1/d}-1}{2}\right)^{-s}\|f\|_{H^s} \le C(s,d)M^{-s/d} \|f\|_{H^s}, \end{equation*}

where C(s,d) \ge 0 is a constant depending only on s and d.

This is the so-called curse of dimensionality: to approximate a function in d dimensions, we need exponentially many terms in the dimension d. In higher-dimensions, our rule of thumb needs to be modified: if a function f on a d-dimensional domain possesses s nice derivatives, then f can be approximated by a M-parameter approximation with error decaying at least as fast as 1/M^{s/d}.
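As a rough back-of-the-envelope illustration (a sketch that ignores constants and the details of the d-dimensional Sobolev norm), the following snippet counts how many Fourier coefficients M = (2m+1)^d the bound of Eq. (8) suggests we need for a given relative error:

```python
import math

def terms_needed(eps: float, s: int, d: int) -> int:
    """Number of Fourier coefficients M = (2m+1)^d sufficient, per Eq. (8), for an
    L^2 error of at most eps * ||f||_{H^s} when f has s square-integrable derivatives."""
    m = math.ceil(eps ** (-1.0 / s))   # smallest m with m^{-s} <= eps
    return (2 * m + 1) ** d

# Target a one percent relative error for a function with s = 2 derivatives.
for d in [1, 2, 3, 5, 10]:
    print(f"d = {d:2d}:  M = {terms_needed(eps=1e-2, s=2, d=d):.2e}")
```

Already at d = 10, the required number of coefficients (roughly 10^{13}) is far too large to store, even though the function has two derivatives.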

The Theory of Nonlinear Widths: The Speed Limit of Approximation Theory

So far, we have shown that if one approximates a function f on a d-dimensional space by truncating its Fourier series to M terms, the approximation error decays at least as fast as 1/M^{s/d}. Can we do better than this, particularly in high dimensions, where the error decay can be very slow if s \ll d?

One must be careful about how one phrases this question. Suppose I ask “what is the best way of approximating a function f?” A subversive answer is that we may approximate f by a single-parameter approximation of the form \hat{f} = af with a=1! Consequently, for every function there is a one-parameter approximation procedure that approximates it perfectly. The problem with this one-parameter approximation is obvious: it is terrible at approximating most functions different from f. Thus, the question “what is the best way of approximating a particular function?” is ill-posed. We must instead ask the question “what is the best way of approximating an entire class of functions?” For us, the class of functions shall be those which are sufficiently smooth: specifically, we shall consider the class of functions whose Sobolev norm satisfies a bound \|f\|_{H^s} \le B. Call this class W.

As outlined at the beginning, an approximation procedure usually begins by taking the function f and writing down a list of M numbers a_1,\ldots,a_M. Then, from this list of numbers we reconstruct a function \hat{f} which serves as an approximation to f. Formally, this can be viewed as a mathematical function \Phi which takes f \in W to a tuple (a_1,\ldots,a_M) \in \mathbb{C}^M followed by a function \Lambda which takes (a_1,\ldots,a_M) and outputs a continuous 2\pi-periodic function \hat{f} \in C_{2\pi}(\mathbb{R}).

(10)   \begin{equation*} f \stackrel{\Phi}{\longmapsto}(a_1,\ldots,a_M)\stackrel{\Lambda}{\longmapsto} \hat{f}, \quad \Phi : W \to \mathbb{C}^M, \quad \Lambda : \mathbb{C}^M \to C_{2\pi}(\mathbb{R}). \end{equation*}

Remarkably, there is a mathematical theory which gives sharp bounds on the expressive power of any approximation procedure of this type. This theory of nonlinear widths serves as a sort of speed limit in approximation theory: no method of approximation can be any better than the theory of nonlinear widths says it can. The statement is somewhat technical, and we advise the reader to look up a precise statement of the result before using it in any serious work. Roughly, the theory of nonlinear widths states that for any continuous approximation procedure6That is, an approximation procedure for which \Phi : W \to \mathbb{C}^M is a continuous function, where W is equipped with the topology defined by the norm \|\cdot\|_{H^s}. that is able to approximate every function in W with L^2 approximation error no more than \epsilon, the number of parameters M must be at least some constant multiple of \epsilon^{-d/s}. Equivalently, the worst-case approximation error for a function in W with an M-parameter continuous approximation is at least some multiple of 1/M^{s/d}.

In particular, the theory of nonlinear widths states that the approximation properties of truncated Fourier series are as good as those of any method for approximating functions in the class W, as they exactly meet the “speed limit” given by the theory of nonlinear widths. Thus, approximating by truncated Fourier series is, in a certain very precise sense, as good as any other approximation technique you can think of for approximating arbitrary functions from W: splines, rational functions, wavelets, and artificial neural networks must all obey the same speed limit. Make no mistake, these other methods have definite advantages, but degree of approximation for the class W is not one of them. Also, note that the theory of nonlinear widths shows that the curse of dimensionality is not merely an artifact of Fourier series; it affects all high-dimensional approximation techniques.

For the interested reader, see the following footnotes for two important ways one may perform approximations better than the theory of nonlinear widths would seem to allow.7The theory of nonlinear widths holds for continuous methods of approximation. This means that discontinuous approximation procedures may circumvent its bounds. Indeed, such discontinuous approximation procedures exist and can be constructed using probabilistic techniques. These methods are of questionable use in practice since discontinuous approximation procedures, by their nature, are extremely sensitive to the perturbations which are ubiquitous when performing computations on computers.8The theory of nonlinear widths holds for means of approximating the entire class W. More efficient methods may exist for meaningful subclasses of W. For instance, Mhaskar and Poggio show that functions f satisfying a compositional property can be effectively approximated by multilayer artificial neural networks.

Upshot: The smoother a function is, the better it can be approximated. Specifically, one can approximate a function on d dimensions with s nice derivatives by an M-parameter approximation with error decaying at least as fast as 1/M^{s/d}. In the case of 2\pi-periodic functions, such an approximation can easily be obtained by truncating the function’s Fourier series. This error decay rate is the best one can hope for when approximating all functions of this type.

Big Ideas in Applied Math: The Schur Complement

Given the diversity of applications of mathematics, the field of applied mathematics lacks a universally accepted set of core concepts which most experts would agree all self-proclaimed applied mathematicians should know. Further, mathematical writing is often so carefully worded that many important ideas can be obscured by precisely stated theorems or buried several steps into a long proof.

In this series of blog posts, I hope to share my personal experience with some techniques in applied mathematics which I’ve seen pop up many times. My goal is to isolate a single particularly interesting idea and provide a simple explanation of how it works and why it can be useful. In doing this, I hope to collect my own thoughts on these topics and write an introduction to these ideas of the sort I wish I had when I was first learning this material.

Given my fondness for linear algebra, I felt an appropriate first topic for this series would be the Schur Complement. Given matrices A, B, C, and D of sizes n\times n, n\times m, m\times n, and m\times m with A invertible, the Schur complement is defined to be the matrix D - CA^{-1}B.

The Schur complement naturally arises in block Gaussian elimination. In vanilla Gaussian elimination, one begins by using the (1,1)-entry of a matrix to “zero out” its column. Block Gaussian elimination extends this idea by using the n\times n submatrix occupying the top-left portion of a matrix to “zero out” all of the first n columns together. Formally, given the matrix \begin{bmatrix} A & B \\ C & D \end{bmatrix}, one can check by carrying out the multiplication that the following factorization holds:

(1)   \begin{equation*} \begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} I_n & 0_{n\times m} \\ CA^{-1} & I_m\end{bmatrix} \begin{bmatrix} A & B \\ 0_{m\times n} & D - CA^{-1}B \end{bmatrix}. \end{equation*}

Here, we let I_j denote an identity matrix of size j\times j and 0_{j\times k} the j\times k zero matrix. We use the notation of block (or partitioned) matrices where, in this case, an (n+m)\times (n+m) matrix is written out as a 2\times 2 “block” matrix whose entries are themselves matrices of the appropriate sizes, so that all matrices occurring in one block row (or column) have the same number of rows (or columns). Two block matrices which are blocked in a compatible way can be multiplied just like two regular matrices, taking care to respect the noncommutativity of matrix multiplication.
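As a quick sanity check, one can verify the factorization Eq. (1) numerically. The following NumPy sketch (with arbitrary random blocks; the diagonal shift just keeps A safely invertible) is one way to do so.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3
A = rng.standard_normal((n, n)) + n * np.eye(n)   # shifted so that A is comfortably invertible
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m))

S = D - C @ np.linalg.solve(A, B)                 # Schur complement D - C A^{-1} B

lower = np.block([[np.eye(n), np.zeros((n, m))],
                  [C @ np.linalg.inv(A), np.eye(m)]])
upper = np.block([[A, B],
                  [np.zeros((m, n)), S]])

print(np.allclose(lower @ upper, np.block([[A, B], [C, D]])))   # True: Eq. (1) holds
```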

The Schur complement also appears naturally in the expression for the inverse of \begin{bmatrix} A & B \\ C & D\end{bmatrix}. One can verify that for a block upper triangular matrix M = \begin{bmatrix} M_{11} & M_{12} \\ 0_{m\times n} & M_{22}\end{bmatrix}, we have the inverse formula

(2)   \begin{equation*} M^{-1} = \begin{bmatrix} M_{11} & M_{12} \\ 0_{m\times n} & M_{22}\end{bmatrix}^{-1} = \begin{bmatrix} M_{11}^{-1} & -M_{11}^{-1} M_{12}M_{22}^{-1} \\ 0_{m\times n} & M_{22}^{-1}\end{bmatrix}. \end{equation*}

(This can be verified by carrying out the block multiplication MM^{-1} for the proposed formula for M^{-1} and verifying that one obtains the identity matrix.) A similar formula holds for block lower triangular matrices. From here, we can deduce a formula for the inverse of \begin{bmatrix} A & B \\ C & D\end{bmatrix}. Let S = D - CA^{-1}B be the Schur complement. Then

(3)   \begin{equation*} \begin{split} \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} &= \begin{bmatrix} A & B \\ 0_{m\times n} & S \end{bmatrix}^{-1} \begin{bmatrix} I_n & 0_{n\times m} \\ CA^{-1} & I_m\end{bmatrix}^{-1} \\ &= \begin{bmatrix} A^{-1} & -A^{-1}BS^{-1} \\ 0_{m\times n} & S^{-1} \end{bmatrix} \begin{bmatrix} I_n & 0_{n\times m} \\ -CA^{-1} & I_m\end{bmatrix} \\ &= \begin{bmatrix} A^{-1} + A^{-1}BS^{-1}CA^{-1} & -A^{-1}BS^{-1} \\ -S^{-1}CA^{-1} & S^{-1} \end{bmatrix}. \end{split} \end{equation*}

This remarkable formula gives the inverse of \begin{bmatrix} A & B \\ C & D\end{bmatrix} in terms of A^{-1}, S^{-1}, B, and C. In particular, the (2,2)-block entry of \begin{bmatrix} A & B \\ C & D\end{bmatrix}^{-1} is simply the inverse of the Schur complement.
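One can likewise check Eq. (3) numerically, including the observation that the (2,2) block of the inverse is S^{-1} (again, a sketch with arbitrary random blocks):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 3
A = rng.standard_normal((n, n)) + n * np.eye(n)
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m)) + m * np.eye(m)

Ainv = np.linalg.inv(A)
S = D - C @ Ainv @ B                              # Schur complement
Sinv = np.linalg.inv(S)

# The block formula of Eq. (3) for the inverse of [[A, B], [C, D]].
block_inv = np.block([[Ainv + Ainv @ B @ Sinv @ C @ Ainv, -Ainv @ B @ Sinv],
                      [-Sinv @ C @ Ainv, Sinv]])

M = np.block([[A, B], [C, D]])
print(np.allclose(block_inv, np.linalg.inv(M)))     # True: Eq. (3) holds
print(np.allclose(np.linalg.inv(M)[n:, n:], Sinv))  # (2,2) block is the inverse Schur complement
```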

Here, we have seen that if one starts with a large matrix and performs block Gaussian elimination, one ends up with a smaller matrix called the Schur complement whose inverse appears in the inverse of the original matrix. Very often, however, it benefits us to run this trick in reverse: we begin with a small matrix, which we recognize to be the Schur complement of a larger matrix. In general, dealing with a larger matrix is more difficult than a smaller one, but very often this larger matrix will have special properties which allow us to compute the inverse of the original matrix more efficiently.

One beautiful application of this idea is the Sherman-Morrison-Woodbury matrix identity. Suppose we want to find the inverse of the matrix A - BD^{-1}C. Notice that this is the Schur complement of the matrix \begin{bmatrix} D & C \\ B & A \end{bmatrix}, which is the same as \begin{bmatrix} A & B \\ C & D \end{bmatrix} after reordering its rows and columns.1Specifically, switch the first n rows with the last m rows and do the same with the columns. This defines a permutation matrix P = \begin{bmatrix} 0_{m\times n} & I_m \\ I_n & 0_{n\times m} \end{bmatrix} such that \begin{bmatrix} D & C \\ B & A \end{bmatrix} = P\begin{bmatrix} A & B \\ C & D \end{bmatrix}P^\top. Alternately, and perhaps more cleanly, one may define two Schur complements of the block matrix M = \begin{bmatrix} A & B \\ C & D \end{bmatrix}: one by “eliminating A”, M/A = D - CA^{-1}B, and the other by “eliminating D”, M/D = A - BD^{-1}C. Following the calculation in Eq. (3), just like the inverse of the Schur complement D - CA^{-1}B appears in the (2,2) entry of \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1}, the inverse of the alternate Schur complement A - BD^{-1}C can be shown to appear in the (1,1) entry of \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1}. Thus, comparing with Eq. (3), we deduce the Sherman-Morrison-Woodbury matrix identity:

(4)   \begin{equation*} (A - BD^{-1}C)^{-1} = A^{-1} + A^{-1}B(D-CA^{-1}B)^{-1}CA^{-1}. \end{equation*}

To see how this formula can be useful in practice, suppose that we have a fast way of solving the system of linear equations Ax = b. Perhaps A is a simple matrix like a diagonal matrix or we have already pre-computed an LU factorization for A. Consider the problem of solving the rank-one updated problem (A+uv^\top)x = b. Using the Sherman-Morrison-Woodbury identity with B=u, D=-1, and C = v^\top, we have that

(5)   \begin{equation*} x = (A+uv^\top)^{-1}b = A^{-1}b + A^{-1}u (-1-v^\top A^{-1}u)^{-1}v^\top A^{-1}b. \end{equation*}

Careful observation of this formula shows how we can compute x (solving (A+uv^\top)x = b) by only solving two linear systems: Ax_1 = b for x_1 = A^{-1}b and Ax_2 = u for x_2 = A^{-1}u.2Further savings are possible if one has already computed x_1, which may be the case in many applications.
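Here is a minimal sketch of this procedure in Python using SciPy’s LU routines; the matrix A and the vectors u, v, b are arbitrary stand-ins for whatever arises in an application.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)   # stand-in for a matrix we can factor once
u, v, b = (rng.standard_normal(n) for _ in range(3))

lu, piv = lu_factor(A)                            # pre-computed factorization of A

# Solve (A + u v^T) x = b via Eq. (5): only two solves with A are needed.
x1 = lu_solve((lu, piv), b)                       # x1 = A^{-1} b
x2 = lu_solve((lu, piv), u)                       # x2 = A^{-1} u
x = x1 + x2 * ((v @ x1) / (-1.0 - v @ x2))        # Eq. (5)

# Compare against a dense solve of the updated system.
print(np.allclose(x, np.linalg.solve(A + np.outer(u, v), b)))   # True
```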

Here’s another variant of the same idea. Suppose we want to solve the linear system of equations (D + uv^\top)x = b, where D is a diagonal matrix. Then we can immediately write down the lifted system of linear equations

(6)   \begin{equation*} \underbrace{\begin{bmatrix} -1 & v^\top \\ u & D \end{bmatrix}}_{:=M}\begin{bmatrix} y \\ x \end{bmatrix} = \begin{bmatrix} 0 \\ b \end{bmatrix}. \end{equation*}

One can easily see that D+uv^\top is the Schur complement of the matrix M (with respect to the (1,1) block). This system of linear equations is sparse in the sense that most of its entries are zero, and it can be efficiently solved by sparse Gaussian elimination, for which high-quality software exists. Easy generalizations of this idea can be used to effectively solve many “sparse + low-rank” problems.
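Below is a small sketch of the lifted approach using SciPy’s sparse machinery; the diagonal D and the vectors u, v, b are placeholders, and the answer is checked against Eq. (5) applied with A = D.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(0)
n = 1000
d = 1.0 + rng.random(n)                           # diagonal of D, kept away from zero
u, v, b = (rng.standard_normal(n) for _ in range(3))

# Lifted (arrowhead) system of Eq. (6): [[-1, v^T], [u, D]] [y; x] = [0; b].
M = sp.bmat([[sp.csr_matrix(np.array([[-1.0]])), sp.csr_matrix(v[None, :])],
             [sp.csr_matrix(u[:, None]), sp.diags(d)]], format="csc")
rhs = np.concatenate(([0.0], b))

x = spsolve(M, rhs)[1:]                           # discard the auxiliary unknown y = v^T x

# Check against the rank-one update formula Eq. (5) with A = D (a diagonal solve).
Dinv_b, Dinv_u = b / d, u / d
x_ref = Dinv_b + Dinv_u * ((v @ Dinv_b) / (-1.0 - v @ Dinv_u))
print(np.allclose(x, x_ref))                      # True
```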

Another example of the power of the Schur complement is in least-squares problems. Consider the problem of minimizing \|Ax - b\|, where A is an m\times n matrix with full column rank and \|\cdot\| is the Euclidean norm of a vector, \|x\|^2 = x^\top x. It is well known that the solution x satisfies the normal equations A^\top A x = A^\top b. However, if the matrix A is even moderately ill-conditioned, the matrix A^\top A will be much more ill-conditioned (its condition number will be squared), leading to a loss of accuracy. It is for this reason that it is preferable to solve the least-squares problem with QR factorization. However, if QR factorization isn’t available, we can use the Schur complement trick instead. Notice that A^\top A is the Schur complement of the matrix \begin{bmatrix} -I_{m} & A \\ A^\top & 0_{n\times n} \end{bmatrix}. Thus, we can solve the normal equations by instead solving the (potentially, see footnote) much better-conditioned system3More precisely, one should scale the identity in the (1,1) block of this system to be on the order of the size of the entries in A. The conditioning is sensitive to the scaling. If one selects a scale s = 1/\sqrt{2}\cdot \sigma_{\rm min}(A) proportional to the smallest singular value \sigma_{\rm min}(A) of A and constructs M = \begin{bmatrix} -sI_m & A \\ A^\top & 0_{n\times n}\end{bmatrix}, then one can show that the two-norm condition number of M is no more than twice that of A. If one picks s to be roughly equal to the largest singular value \sigma_{\rm max}(A), then the two-norm condition number of M is roughly the square of the condition number of A. The accuracy of this approach may be less sensitive to the scaling parameter than this condition number analysis suggests.

(7)   \begin{equation*} \begin{bmatrix} -I_{m} & A \\ A^\top & 0_{n\times n} \end{bmatrix} \begin{bmatrix} r \\ x \end{bmatrix} = \begin{bmatrix} b \\ 0_n \end{bmatrix}. \end{equation*}

In addition to (often) being much better-conditioned, this system is also highly interpretable. Multiplying out the first block row gives the equation -r + Ax = b which simplifies to r = Ax - b. The unknown r is nothing but the least-squares residual. The second block row gives A^\top r = 0_n, which encodes the condition that the residual is orthogonal to the range of the matrix A. Thus, by lifting the normal equations to a large system of equations by means of the Schur complement trick, one derives an interpretable way of solving the least-squares problem by solving a linear system of equations, no QR factorization or ill-conditioned normal equations needed.
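The following sketch solves a small least-squares problem through the augmented system Eq. (7) and compares the result with a standard least-squares solver. For simplicity, the scaling of the identity block discussed in the footnote is omitted, so this is purely a correctness check rather than a demonstration of the conditioning benefit.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 10
A = rng.standard_normal((m, n))                   # full column rank with probability one
b = rng.standard_normal(m)

# Augmented system of Eq. (7): [[-I, A], [A^T, 0]] [r; x] = [b; 0].
M = np.block([[-np.eye(m), A],
              [A.T, np.zeros((n, n))]])
rhs = np.concatenate([b, np.zeros(n)])

sol = np.linalg.solve(M, rhs)
r, x = sol[:m], sol[m:]

print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))   # same least-squares solution
print(np.allclose(r, A @ x - b))                              # r is the residual, as claimed
```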

The Schur complement trick continues to be useful in areas of more contemporary interest. For example, it plays a central role in the theory of sequentially semiseparable matrices, which is a precursor to many recent developments in rank-structured linear solvers. I have used the Schur complement trick myself several times in my work on graph-induced rank-structures.

Upshot: The Schur complement appears naturally when one does (block) Gaussian elimination on a matrix. One can also run this process in reverse: if one recognizes a matrix expression (involving a product of matrices potentially added to another matrix) as being a Schur complement of a larger matrix, one can often get considerable dividends by writing this larger matrix down. Examples include a proof of the Sherman-Morrison-Woodbury matrix identity (see Eqs. (3-4)), techniques for solving a low-rank update of a linear system of equations (see Eqs. (5-6)), and a stable way of solving least-squares problems without the need to use QR factorization (see Eq. (7)).

Additional resource: My classmate Chris Yeh has a great introduction to Schur complements, focusing more on positive semidefinite and rank-deficient matrices.


Edits: This blog post was edited to clarify the conditioning of the augmented linear system Eq. (7) and to include a reference to Chris’ post on Schur complements.

Book Review: Matrix Theory by Fuzhen Zhang

Linear algebra and matrix theory is a difficult subject to learn, with the landscape of textbooks balkanized into matrix-oriented introductory texts aimed at first- and second-year engineering and science students, vector space-oriented intermediate texts geared towards one of the first proof-based mathematics courses for a mathematics student, and more advanced texts like Bhatia’s Matrix Analysis, which are demanding and challenging mathematical journeys intended for graduate students and researchers. For the student interested in numerical analysis and scientific computation, there can be a huge gap from, say, Gilbert Strang’s Introduction to Linear Algebra or Sheldon Axler’s Linear Algebra Done Right to Rajendra Bhatia’s Matrix Analysis.

Matrix Theory: Basic Results and Techniques by Fuzhen Zhang beautifully fills this gap.

I discovered the book by chance and was pleasantly surprised to find a book packed with useful insights. The book’s subtitle, basic results and techniques, is appropriate, as the book places a large emphasis on developing the reader’s “matrix kung-fu”, as the author describes it. In a particularly striking example early in the book, Zhang furnishes four proofs that AB and BA have the same nonzero eigenvalues. This is an act of pedagogical brilliance that more textbooks would be wise to replicate. By presenting multiple proofs, the author shows the reader that there are often many paths to the same result. At the same time, Zhang shows the reader the benefit of having a wide toolbox by demonstrating how one of the four methods is unable to give any information about the multiplicity of the eigenvalues, while the others can. Zhang’s focus on techniques is careful and illuminating, demonstrating many examples where a proof technique is useful as well as examples where the technique falls short in solving a problem. Rather than merely teaching results, Matrix Theory teaches its reader tools with which to derive results.

One of the tools the book emphasizes right from the start is block (or partitioned) matrices. To me, reasoning about matrices and linear operators using 2\times 2 blocks is an absolutely indispensable tool, which I only learned after having taken three courses on linear algebra. The emphasis on block matrices is maintained appropriately throughout the book, beginning with the second chapter, which is entirely dedicated to this useful approach.

The book has excellent coverage, stretching from introductory results through canonical forms to Kronecker and Hadamard products to sections on the most important classes of matrices (e.g. nilpotent, tridiagonal, circulant, Vandermonde, Hadamard, doubly stochastic, nonnegative, unitary, contraction, positive (semi)definite, Hermitian, and normal matrices) to majorization and matrix inequalities. The excellent exposition is complemented by a strong suite of appropriately difficult exercises.

The book is a terrific reference on the tips and tricks of working with matrices and is excellent preparation for a more advanced monograph on matrix theory. I cannot endorse this underappreciated gem of a textbook enough.