Randomized Kaczmarz is Asymptotically Unbiased for Least Squares

I am delighted to share that my paper Randomized Kaczmarz with tail averaging, joint with Gil Goldshlager and Rob Webber, has been posted to arXiv. This paper in particular benefited greatly from discussions with friends, colleagues, and experts, and I’d like to thank everyone who gave us feedback on it.

In this post, I want to provide a different and complementary perspective on the results of this paper and present some more elementary results and derivations that didn’t make it into the main paper.

The randomized Kaczmarz (RK) method is an iterative method for solving systems of linear equations Ax = b, whose dimensions will be n\times d throughout this post. Beginning from a trivial initial solution x_0=0, the method works by repeating the following two steps for t = 0,1,2,\ldots:

  1. Randomly sample a row i_t of A with probability

        \[\prob\{ i_t = j\} = \frac{\norm{a_j}^2}{\norm{A}_{\rm F}^2}.\]

    Throughout this post, a_j^\top denotes the jth row of A.
  2. Orthogonally project x_t onto the solution space of the equation a_{i_t}^\top x = b_{i_t}, obtaining x_{t+1} \coloneqq x_t + (b_{i_t} - a_{i_t}^\top x_t)/\norm{a_{i_t}}^2\cdot a_{i_t}.

The main selling point of RK is that it only interacts with the matrix A through row accesses, which makes the method ideal for very large problems where only a few rows of A can fit in memory at a time.
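
For concreteness, here is a minimal MATLAB sketch of the RK iteration described above. It assumes the whole matrix A and vector b are in memory (a genuinely large-scale implementation would instead stream individual rows); the function name and interface are for illustration only.

function x = rk(A, b, num_iters)
% Randomized Kaczmarz sketch: sample rows with probability ||a_j||^2 / ||A||_F^2
% and orthogonally project onto the sampled equation.
row_norms2 = sum(A.^2, 2);            % squared row norms ||a_j||^2
probs = row_norms2 / sum(row_norms2); % sampling probabilities
x = zeros(size(A,2), 1);              % trivial initial solution x_0 = 0
for t = 1:num_iters
    i = randsample(length(b), 1, true, probs);  % step 1: sample a row index
    a = A(i,:)';                                % a_i as a column vector
    x = x + (b(i) - a'*x) / row_norms2(i) * a;  % step 2: project onto a_i'*x = b_i
end
end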

When applied to a consistent system of linear equations (i.e., a system possessing a solution x_\star satisfying Ax_\star = b), RK is geometrically convergent:

(1)   \[\expect\left[ \norm{x_t - x_\star}^2 \right] \le (1 - \kappa_{\rm dem}^{-2})^t \norm{x_\star}^2. \]

Here,

(2)   \[\kappa_{\rm dem} = \frac{\norm{A}_{\rm F}}{\sigma_{\rm min}(A)} = \sqrt{\sum_i \left(\frac{\sigma_i(A)}{\sigma_{\rm min}(A)}\right)^2} \]

is known as the Demmel condition number, and \sigma_i(A) are the singular values of A.1Personally, I find it convenient to write the Demmel condition number as \kappa_{\rm dem} = \kappa \cdot \sqrt{\operatorname{srank}(A)}, where \kappa = \sigma_{\rm max}(A) / \sigma_{\rm min}(A) is the ordinary condition number and \operatorname{srank}(A) = \norm{A}_{\rm F}^2/\sigma_{\rm max}^2(A) is the stable rank of the matrix A. The stable rank is a smooth proxy for the rank or dimensionality of the matrix A. Using this parameter, the rate of convergence is roughly \e^{-t / (\kappa^2 \operatorname{srank}(A))}, so it takes roughly \kappa^2 \operatorname{srank}(A) row accesses to reduce the error by a constant factor; compare this to gradient descent, which requires \kappa^2 n row accesses.

However, for inconsistent problems, RK does not converge at all. Indeed, since each step of RK updates the iterate x_t so that a single equation a_{i_t}^\top x = b_{i_t} holds exactly, and since no solution x_\star satisfies all the equations simultaneously, the RK iterates continue to stochastically fluctuate no matter how long the algorithm is run.

However, while the RK iterates continue to randomly fluctuate when applied to an inconsistent system, their expected value does converge. In fact, it converges to the least-squares solution x_\star of the system,2If the matrix A is rank-deficient, then we define x_\star to be the minimum-norm least-squares solution x_\star = A^\dagger b, where {}^\dagger denotes the Moore–Penrose pseudoinverse. defined as

    \[x_\star = \operatorname{argmin}_{x \in \real^d} \norm{b - Ax}.\]

Put differently, the bias of the RK iterates x_t as estimators of the least-squares solution x_\star converges to zero, and the rate of convergence is geometric. Specifically, we have the following theorem:

Theorem 1 (Exponentially Decreasing Bias): The RK iterates x_t have an exponentially decreasing bias

(3)   \[\norm{\mathbb{E}[x_t] - x_\star}^2 \le (1 - \kappa_{\rm dem}^{-2})^{2t} \norm{x_\star}^2. \]

Observe that the rate of convergence (3) for the bias is twice as fast as the rate of convergence for the error in (1). This factor of two was previously observed by Gower and Richtarik in the context of consistent systems of equations.

The proof of Theorem 1 is straightforward, and we will present it at the bottom of this post. Before getting to the proof, we will discuss a couple of implications. First, we develop convergent versions of the RK algorithm using tail averaging. Second, we explore what happens when we implement RK with different sampling probabilities \prob \{i_t = j\}.

Tail Averaging

It may seem like Theorem 1 has little implication for practice. After all, just because the expected value of x_t becomes closer and closer to x_\star, it need not be the case that x_t is close to x_\star. However, we can improve the quality of the approximate solution x_t by averaging.

There are multiple different possible ways we could use averaging. A first idea would be to run RK multiple times, obtaining multiple solutions x^{(1)}_t,\ldots,x^{(m)}_t which could then be averaged together. This approach is inefficient as each solution x_t^{(i)} is computed separately.

A better strategy is tail averaging. Fix a burn-in time t_{\rm b}, chosen so that the bias \norm{\expect[x_{t_{\rm b}}] - x_\star} is small. For each s> t_{\rm b}, x_s is a nearly unbiased approximation to the least-squares solution x_\star. To reduce variance, we can average these estimators together

    \[\overline{x}_t = \frac{x_{t_{\rm b} +1} + x_{t_{\rm b}+2} + \cdots + x_t}{t-t_{\rm b}}.\]

The estimator \overline{x}_t is the tail-averaged randomized Kaczmarz (TARK) estimator. By Theorem 1, we know the TARK estimator has an exponentially small bias:

    \[\norm{\expect[\overline{x}_t] - x_\star}^2 \le (1 - \kappa_{\rm dem}^{-2})^{2(t_{\rm b}+1)} \norm{x_\star}^2.\]

In our paper, we also prove a bound for the mean-square error:

    \[\expect [\norm{\overline{x}_t - x_\star}^2] \le (1-\kappa_{\rm dem}^{-2})^{t_{\rm b}+1} \norm{x_\star}^2 + \frac{2\kappa_{\rm dem}^4}{t-t_{\rm b}} \frac{\norm{b-Ax_\star}^2}{\norm{A}_{\rm F}^2}.\]

We see that the rate of convergence is geometric in the burn-in time t_{\rm b} and occurs at an algebraic, Monte Carlo, rate in the final time t. While the Monte Carlo rate of convergence may be unappealing, it is known that this rate of convergence is optimal for any method that accesses row–entry pairs (a_i,b_i); see our paper for a new derivation of this fact.
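
To make tail averaging concrete, here is a minimal MATLAB sketch of TARK, maintaining the tail average as a running mean. As with the RK sketch above, it assumes A and b are in memory, and the interface is for illustration only.

function xbar = tark(A, b, burn_in, num_iters)
% Tail-averaged randomized Kaczmarz sketch: run RK and average the
% iterates produced after the burn-in time.
row_norms2 = sum(A.^2, 2);
probs = row_norms2 / sum(row_norms2);
x = zeros(size(A,2), 1);
xbar = zeros(size(A,2), 1);
for t = 1:num_iters
    i = randsample(length(b), 1, true, probs);
    a = A(i,:)';
    x = x + (b(i) - a'*x) / row_norms2(i) * a;
    if t > burn_in
        xbar = xbar + (x - xbar) / (t - burn_in); % running mean of the tail iterates
    end
end
end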

Tail averaging can be an effective method for squeezing a bit more accuracy out of the RK method for least-squares problems. The figure below shows the error of different RK methods applied to a random least-squares problem, including plain RK, RK with thread averaging (RKA), and RK with underrelaxation (RKU);3The number of threads is set to 10, and the underrelaxation parameter is 1/\sqrt{t}. We found this underrelaxation parameter to lead to a smaller error than the other popular underrelaxation parameter schedule 1/t. see the paper’s Github for code. For this problem, tail-averaged randomized Kaczmarz achieves the lowest error of all of the methods considered, being 6× smaller than RKA, 22× smaller than RK, and 10⁶× smaller than RKU.

Error versus iteration for different randomized Kaczmarz methods. Tail-averaged randomized Kaczmarz shows the lowest error

What Least-Squares Solution Are We Converging To?

A second implication of Theorem 1 comes from reanalyzing the RK algorithm if we change the sampling probabilities. Recall that the standard RK algorithm draws random row indices i_t using squared row norm sampling:

    \[\prob \{ i_t = j \} = \frac{\norm{a_j}^2}{\norm{A}_{\rm F}^2} \eqqcolon p^{\rm RK}_j.\]

We have notated the RK sampling probability for row j as p_j^{\rm RK}.

Using the standard RK sampling procedure can sometimes be difficult. To implement it directly, we must make a full pass through the matrix to compute the sampling probabilities.4If we have an upper bound on the squared row norms, there is an alternative procedure based on rejection sampling that avoids this precomputation step. It can be much more convenient to sample rows i_t uniformly at random: \prob \{ i_t = j \} = 1/n.

To do the following analysis in generality, consider performing RK using general sampling probabilities p_j:

    \[\prob \{i_t = j\} = p_j.\]

What happens to the bias of the RK iterates then?

Define a diagonal matrix

    \[D \coloneqq \diag\left( \sqrt{\frac{p_j}{p_j^{\rm RK}}} : j =1,\ldots,n\right).\]

One can check that the RK algorithm with non-standard sampling probabilities \{p_j\} is equivalent to the standard RK algorithm run on the diagonally reweighted least-squares problem

    \[x_{\rm weighted} = \argmin_{x\in\real^d} \norm{Db-(DA)x}.\]

In particular, applying TARK with uniform sampling probabilities, the tail-averaged iterates \overline{x}_t will converge to the solution x_{\rm weighted} of the weighted least-squares problem rather than to the original least-squares solution x_\star.
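
Here is a small MATLAB sketch, on a synthetic example of my own choosing, that computes both solutions directly and compares them; the diagonal matrix D is exactly the one defined above for uniform sampling probabilities.

% Compare the ordinary least-squares solution with the weighted least-squares
% solution targeted by RK with uniform sampling probabilities p_j = 1/n.
n = 1000; d = 20;
A = randn(n, d) .* (1:n)'/n;   % rows with widely varying norms
b = randn(n, 1);               % (inconsistent) right-hand side

p_rk = sum(A.^2, 2) / norm(A, 'fro')^2;  % standard RK probabilities
p_unif = ones(n, 1) / n;                 % uniform probabilities
D = spdiags(sqrt(p_unif ./ p_rk), 0, n, n);

x_star = A \ b;                % ordinary least-squares solution
x_weighted = (D*A) \ (D*b);    % solution targeted by uniform-sampling RK
fprintf('relative difference: %.2e\n', norm(x_weighted - x_star)/norm(x_star));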

I find this very interesting. In the standard RK method, the squared row norm sampling distribution is chosen to ensure rapid convergence of the RK iterates to the solution of a consistent system of linear equations. However, for a consistent system, the RK method will always converge to the same solution no matter what row sampling strategy is chosen (as long as every non-zero row has a positive probability of being picked). In the least-squares context, however, the conclusion is very different: the choice of row sampling distribution not only affects the rate of convergence, but also which solution is being converged to!

As the plot below demonstrates, the original least-squares solution and the reweighted least-squares solution can sometimes be quite different from each other. This plot shows the results of fitting a function with many kinks (shown as a black dashed curve) using a polynomial p(u) = \sum_{j=0}^{10} x_j u^j at 10000 equispaced points.5Note that, for this experiment, we represent the polynomial p(u) = \sum_{j=0}^{10} x_j u^j using its monomial coefficients x_j, which has issues with numerical stability. It would be better to use a representation using Chebyshev polynomials. We use this example only to illustrate the difference between the weighted and original least-squares solutions. We compare the unweighted least-squares solution (orange solid curve) to the weighted least-squares solution using uniform RK weights p_j = 1/n (blue dash-dotted curve). These two curves differ meaningfully, with the weighted least-squares solution having higher error at the ends of the interval but more accuracy in the middle. These differences can be explained by looking at the weights (the diagonal entries of D, grey dotted curve), which are lower at the ends of the interval than in the center.

Weighted and un-weighted least-squares solutions to a polynomial regression problem. (Tail-averaged) randomized Kaczmarz with uniform sampling converges to the weighted least-squares solution, which is more accurate in the middle of the interval (where the weights are high) than at the edges of the interval (where the weights are smaller)

Does this diagonal rescaling issue matter? Sometimes not. In many applications, the weighted and un-weighted least squares solutions will both be fine. Indeed, in the above example, neither the weighted nor un-weighted solutions are the “right” one; the weighted solution is more accurate in the interior of the domain and less accurate at the boundary. However, sometimes getting the true least-squares solution matters, or the amount of reweighting done by uniform sampling is too aggressive for a problem. In these cases, using the classical RK sampling probabilities may be necessary. Fortunately, rejection sampling can often be used to perform squared row norm sampling; see this blog post of mine for details.

Proof of Theorem 1

Let us now prove Theorem 1, the asymptotic unbiasedness of randomized Kaczmarz. We will assume throughout that A is full-rank; the rank-deficient case is similar but requires a bit more care.

Begin by rewriting the update rule in the following way

    \[x_{t+1} = \left( I - \frac{a_{i_t}^{\vphantom{\top}}a_{i_t}^\top}{\norm{a_{i_t}}^2} \right)x_t + \frac{b_{i_t}a_{i_t}}{\norm{a_{i_t}}^2}.\]

Now, letting \expect_{i_t} denote the average over the random index i_t, we compute

    \begin{align*}\expect_{i_t}[x_{t+1}] &= \sum_{j=1}^n \left[\left( I - \frac{a_j^{\vphantom{\top}} a_j^\top}{\norm{a_j}^2} \right)x_t + \frac{b_ja_j}{\norm{a_j}^2}\right] \prob\{i_t=j\}\\ &=\sum_{j=1}^n \left[\left( \frac{\norm{a_j}^2}{\norm{A}_{\rm F}^2}I - \frac{a_j^{\vphantom{\top}} a_j^\top}{\norm{A}_{\rm F}^2} \right)x_t + \frac{b_ja_j}{\norm{A}_{\rm F}^2}\right].\end{align*}

We can evaluate the sums \sum_{j=1}^n a_j^{\vphantom{\top}}a_j^\top = A^\top A and \sum_{j=1}^n b_j a_j = A^\top b directly. Therefore, we obtain

    \[\expect_{i_t}[x_{t+1}] = \left( I - \frac{A^\top A}{\norm{A}_{\rm F}^2}\right) x_t + \frac{A^\top b}{\norm{A}_{\rm F}^2}.\]

Thus, taking the (total) expectation of both sides, we get

    \[\expect[x_{t+1}] = \left( I - \frac{A^\top A}{\norm{A}_{\rm F}^2}\right) \expect[x_t] + \frac{A^\top b}{\norm{A}_{\rm F}^2}.\]

Iterating this equation and using the initial condition x_0 = 0, we obtain

(4)   \[\expect[x_t] = \left[\sum_{i=0}^{t-1} \left( I - \frac{A^\top A}{\norm{A}_{\rm F}^2}\right)^i \right]\cdot\frac{A^\top b}{\norm{A}_{\rm F}^2}.\]


The equation (4) expresses the expected RK iterate \expect[x_t] using a matrix geometric series. Recall that the infinite geometric series with a scalar ratio |y|<1 satisfies the formula

    \[\sum_{i=0}^\infty y^i = (1-y)^{-1}.\]

Substituting x=1-y, we get

    \[\sum_{i=0}^\infty (1-x)^i = x^{-1}\]

for any 0 < x < 2. With a little effort, one can check that the same formula

    \[\sum_{i=0}^\infty (I-X)^i = X^{-1}\]

holds for a square matrix X satisfying 0 < \sigma_{\rm min}(X) \le \norm{X} < 2. These conditions hold for the matrix X = A^\top A / \norm{A}_{\rm F}^2 since A is full-rank, yielding

(5)   \[\left(\frac{A^\top A}{\norm{A}_{\rm F}^2}\right)^{-1} = \sum_{i=0}^\infty \left( I - \frac{A^\top A}{\norm{A}_{\rm F}^2}\right)^i. \]

In anticipation of plugging this result into (4), we can break the infinite sum on the right-hand side into two pieces:

(6)   \[\left(\frac{A^\top A}{\norm{A}_{\rm F}^2}\right)^{-1} = \sum_{i=0}^{t-1} \left( I - \frac{A^\top A}{\norm{A}_{\rm F}^2}\right)^i + \sum_{i=t}^\infty \left( I - \frac{A^\top A}{\norm{A}_{\rm F}^2}\right)^i.\]

We are at the home stretch. Plugging (6) into (4) and using the normal equations x_\star = (A^\top A)^{-1} A^\top b, we obtain

    \[\expect[x_t] = x_\star - \left[\sum_{i=t}^\infty \left( I - \frac{A^\top A}{\norm{A}_{\rm F}^2}\right)^i \right]\cdot\frac{A^\top b}{\norm{A}_{\rm F}^2}.\]

Rearrange to obtain

    \[\expect[x_t] - x_\star = - \left(I - \frac{A^\top A}{\norm{A}_{\rm F}^2}\right)^t \left[\sum_{i=0}^\infty \left( I - \frac{A^\top A}{\norm{A}_{\rm F}^2}\right)^i \right]\cdot\frac{A^\top b}{\norm{A}_{\rm F}^2}.\]

Now apply (5) and the normal equations again to obtain

    \[\expect[x_t] - x_\star = - \left(I - \frac{A^\top A}{\norm{A}_{\rm F}^2}\right)^t x_\star.\]

Take norms and use the submultiplicative property of the spectral norm to obtain

(7)   \[\norm{\expect[x_t] - x_\star}^2 \le \norm{I - \frac{A^\top A}{\norm{A}_{\rm F}^2}}^{2t} \norm{x_\star}^2. \]

To complete the proof we must evaluate the norm of the matrix I - A^\top A / \norm{A}_{\rm F}^2. Let A = U \Sigma V^\top be a (thin) singular value decomposition, where \Sigma = \diag(\sigma_{\rm max}(A),\ldots,\sigma_{\rm min}(A)). Then

    \[I - \frac{A^\top A}{\norm{A}_{\rm F}^2} = I - V \cdot\frac{\Sigma^2}{\norm{A}_{\rm F}^2} \cdot V^\top = V \left( I - \frac{\Sigma^2}{\norm{A}_{\rm F}^2}\right)V^\top.\]

We recognize this as a matrix whose eigenvectors are the columns of V and whose eigenvalues are the diagonal entries of the matrix

    \[I - \Sigma^2/\norm{A}_{\rm F}^2 = \diag(1 - \sigma^2_{\rm max}(A)/\norm{A}_{\rm F}^2,\ldots,1-\sigma_{\rm min}^2(A)/\norm{A}_{\rm F}^2).\]

All eigenvalues are nonnegative and the largest eigenvalue is 1 - \sigma_{\rm min}^2(A)/\norm{A}_{\rm F}^2 = 1 - \kappa_{\rm dem}^{-2}. We have invoked the definition of the Demmel condition number (2). Therefore, plugging into (7), we obtain

    \[\norm{\expect[x_t] - x_\star}^2 \le \left(1-\kappa_{\rm dem}^{-2}\right)^{2t} \norm{x_\star}^2,\]

as promised.

Rejection Sampling

I am delighted to share that my paper Embrace rejection: Kernel matrix approximation with randomly pivoted Cholesky has been released on arXiv, joint with Joel Tropp and Rob Webber.

On the occasion of this release, I want to talk about rejection sampling, one of the most powerful ideas in probabilistic computing. My goal is to provide an accessible and general introduction to this technique, with a more “applied math” rather than statistical perspective. I will conclude with some discussion of how we use it in our paper.

Motivating Application: Randomized Kaczmarz

Consider the task of solving a linear system of equations Ax = b, where A \in \real^{n\times d} is a matrix and b \in \real^n is a vector. For simplicity, we assume that this system is consistent (i.e., there exists a solution x_\star such that Ax_\star = b). The randomized Kaczmarz algorithm is a mathematically elegant algorithm for solving this problem, due to Strohmer and Vershynin. Beginning with initial solution x \gets 0, randomized Kaczmarz repeatedly performs the following steps:

  1. Sample. Generate a random row index I according to the probability distribution \prob \{I = i\} = \norm{a_i}^2 / \norm{A}_{\rm F}^2. Here, and going forward, a_i^\top denotes the ith row of A and \norm{\cdot}_{\rm F} is the Frobenius norm.
  2. Update. Set x \gets x + (b_I - a_I^\top x)\cdot a_I / \norm{a_I}^2.

Convergence theory of the randomized Kaczmarz iteration is very well-studied; see, e.g., section 4.1 of this survey for an introduction.

For this post, we will consider a more basic question: How can we perform the sampling step 1? If n and d are not very large, this is an easy problem: Simply sweep through all the rows a_i and compute their norms \norm{a_i}^2. Then, sampling can be done using any algorithm for sampling from a weighted list of items. But what if n is very large? Suppose, even, that it is computationally expensive to read through the matrix, even once, to compute the row norms. What can we do now?

Here is one solution using rejection sampling, due to Needell, Ward, and Srebro. Suppose that we know a bound on the row norms of A:

(1)   \[\norm{a_i}^2 \le B \quad \text{for each } i = 1,\ldots,n.\]

For instance, if the entries of A take values in the range [-1,1], then (1) holds with B = d. To sample a row i with probability \norm{a_i}^2/\norm{A}_{\rm F}^2, we perform the following rejection sampling procedure:

  1. Propose. Draw a random row index 1\le J\le n uniformly at random (i.e., J is equally likely to be any row index between 1 and n.)
  2. Accept/reject. Make a random choice: With probability p_J = \norm{a_J}^2/B, accept and set I\gets J. With the remaining probability 1-p_J, reject and return to step 1.

Why does this procedure work? As an intuitive explanation, notice that each row i is equally likely to be proposed and has probability proportional to \norm{a_i}^2 of being accepted. Therefore, under this procedure, each row i has probability proportional to \norm{a_i}^2 of being selected, just as desired for randomized Kaczmarz. If this intuitive justification is unsatisfying, we’ll have a formal derivation of correctness below (in a more general context).
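
Here is a minimal MATLAB sketch of this rejection sampling procedure; the function name and interface are just for illustration.

function i = sample_row_rejection(A, B)
% Sample a row index i with probability ||a_i||^2 / ||A||_F^2 by rejection
% sampling, given a bound B >= ||a_i||^2 for every row i.
n = size(A, 1);
while true
    j = randi(n);                   % step 1: propose a uniformly random row
    if rand() < norm(A(j,:))^2 / B  % step 2: accept with probability ||a_j||^2 / B
        i = j;
        return
    end                             % otherwise reject and propose again
end
end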

I find rejection sampling to be super neat. In this case, it allows us to use simple ingredients (uniform row sampling and an upper bound on the row norms) to accomplish a more complicated task (sampling according to the squared row norms). We can sample from an interesting probability distribution (the squared row norm distribution) by thinning out samples from a simple distribution. This is a powerful idea that has many applications. (And, I believe, many more applications are waiting to be discovered.)

Rejection Sampling in General

Let’s now discuss rejection sampling in a bit more generality. Say I have a list of n things each of which has a weight w_i\ge 0. Our goal is to choose a random index I with probability proportional to w_i, i.e., \prob \{ I = i\} = w_i / \sum_{j=1}^n w_j. We will call w the target distribution. (Note that we do not require the weights w_i to be normalized to satisfy \sum_{i=1}^n w_i =1.)

Suppose that sampling from the target distribution is challenging, but we can sample from a proposal distribution \rho, i.e., we can efficiently generate random J such that \prob \{J = i\} = \rho_i / \sum_{j=1}^n \rho_j. Further, suppose that the proposal distribution dominates the target distribution in the sense that

    \[w_i \le \rho_i \quad \text{for each } i=1,\ldots,n.\]

Under this general setting, we can sample I from the target distribution using rejection sampling, similar to above:

  1. Propose. Draw a random row index J from the proposal distribution.
  2. Accept/reject. Make a random choice: With probability p_J = w_J/\rho_J, accept and set I\gets J. With the remaining probability 1-p_J, reject and return to step 1.

We recognize the squared row-norm sampling above as an example of this general strategy with w_i = \norm{a_i}^2 and \rho_i = B for all i.
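
In code, the general procedure looks nearly the same. The sketch below (with a hypothetical interface) takes a function handle target_weight so that the weights w_i are only ever evaluated at proposed indices, which is the whole point of the method.

function i = rejection_sample(target_weight, rho)
% Sample an index i with probability proportional to target_weight(i),
% given (unnormalized) proposal weights rho(1), ..., rho(n) satisfying
% target_weight(i) <= rho(i) for every i.
n = length(rho);
while true
    j = randsample(n, 1, true, rho);      % propose from the proposal distribution
    if rand() < target_weight(j) / rho(j) % accept with probability w_j / rho_j
        i = j;
        return
    end
end
end

For squared row-norm sampling, for example, one could call rejection_sample(@(j) norm(A(j,:))^2, B*ones(size(A,1),1)).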

Let us now convince ourselves more carefully that rejection sampling works. On any given execution of the rejection sampling loop, the probability of proposing index i is \rho_i / \sum_{j=1}^n \rho_j, after which the probability of accepting i is w_i / \rho_i. Therefore, the probability of outputting i at any given loop is

    \[\prob \{\text{$i$ accepted this loop}\} = \frac{\rho_i}{\sum_{j=1}^n \rho_j} \cdot \frac{w_i}{\rho_i} = \frac{w_i}{\sum_{j=1}^n \rho_j}.\]

The probability that any index i on that loop is accepted is thus

    \[\prob \{\text{any accepted this loop}\} = \sum_{i=1}^n \prob \{\text{$i$ accepted this loop}\} = \frac{\sum_{i=1}^n w_i}{\sum_{i=1}^n \rho_i}.\]

The rejection sampling loop accepts with fixed positive probability each execution, so it eventually accepts with 100% probability. Thus, the probability of accepting i conditional on some index being accepted is

    \[\prob \{\text{$i$ accepted this loop} \mid \text{any accepted this loop}\} = \frac{\prob \{\text{$i$ accepted this loop}\}}{\prob \{\text{any accepted this loop}\}} = \frac{w_i}{\sum_{j=1}^n w_j}.\]

Therefore, rejection sampling outputs a sample I from the target distribution w as promised.

As this analysis shows, the probability of accepting on any given execution of the rejection sampling loop is \sum_{i=1}^n w_i / \sum_{i=1}^n \rho_i, the ratio of the total mass of the target w_i to the total mass of the proposal \rho_i. Thus, rejection sampling will have a high acceptance rate if w_i \approx \rho_i and a low acceptance rate if w_i \ll \rho_i. The conclusion for practice is that rejection sampling is only computationally efficient if one has access to a good proposal distribution \rho_i, one that is both easy to sample from and close to the target, w_i\approx \rho_i.

Here, we have presented rejection sampling as a strategy for sampling from a discrete set of items i=1,\ldots,n, but this is not all that rejection sampling can do. Indeed, one can also use rejection sampling to sample a real-valued parameter x \in \real or a multivariate parameter x \in \real^n from a given (unnormalized) probability density function (or, if one likes, an unnormalized probability measure).

Before moving on, let us make one final observation. A very convenient feature of rejection sampling is that we only need the unnormalized probability weights w_i and \rho_i to implement the procedure, even though the total weights \sum_{j=1}^n w_j and \sum_{j=1}^n \rho_j are necessary to define the sampling probabilities \prob \{I = i\} = w_i / \sum_{j=1}^n w_j and \prob \{J = i\} = \rho_i / \sum_{j=1}^n \rho_j. This fact is essential for many applications of rejection sampling. For instance, in the randomized Kaczmarz application, rejection sampling would be much less useful if we needed the total weight \sum_{i=1}^n \norm{a_i}^2 = \norm{A}_{\rm F}^2, since computing \norm{A}_{\rm F}^2 requires a full pass over the data.

Application: Randomly pivoted Cholesky

Another application of rejection sampling is to accelerate the randomly pivoted Cholesky algorithm, the subject of my recent paper. Here, I’ll provide an overview of the main idea with a focus on the role of rejection sampling in the procedure. See the paper for more details. Warning: Even as simplified here, this section presents a significantly more involved application of rejection sampling than we saw previously with randomized Kaczmarz!

Randomly pivoted Cholesky (RPCholesky) is a simple, but powerful algorithm for computing a low-rank approximation to a symmetric positive semidefinite matrix A, and it can be used to accelerate kernel machine learning algorithms.

Conceptually, RPCholesky works as follows. Let A^{(0)} \coloneqq A denote the initial matrix and \hat{A}^{(0)} \coloneqq 0 denote a trivial initial approximation. For j=0,1,2,\ldots,k-1, RPCholesky performs the following steps:

  1. Random sampling. Draw a random index I_{j+1} with probability \prob \{ I_{j+1} = i\} = A^{(j)}_{ii} / \sum_{i=1}^n A^{(j)}_{ii}.
  2. Update approximation. Update \hat{A}^{(j+1)} \gets \hat{A}^{(j)} + a_{I_{j+1}}^{(j)}(a_{I_{j+1}}^{(j)})^\top / A_{I_{j+1}I_{j+1}}^{(j)}. Here, a_i^{(j)} denotes the ith column of A^{(j)}.
  3. Update matrix. Update A^{(j+1)} \gets A^{(j)} - a_{I_{j+1}}^{(j)}(a_{I_{j+1}}^{(j)})^\top / A_{I_{j+1}I_{j+1}}^{(j)}.

The result of this procedure is a rank-k approximation \hat{A}^{(k)}. With an optimized implementation, RPCholesky requires only O(k^2n) operations and evaluates just (k+1)n entries of the input matrix A.
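
Here is a minimal MATLAB sketch of this conceptual procedure, written for clarity rather than efficiency: it stores and updates the full residual matrix instead of evaluating only the entries that are needed.

function [Ahat, S] = rpcholesky_conceptual(A, k)
% Conceptual RPCholesky sketch: k steps of diagonally sampled pivoting.
% Returns a rank-k approximation Ahat and the selected pivot indices S.
n = size(A, 1);
Ahat = zeros(n);
S = zeros(1, k);
for j = 1:k
    d = max(diag(A), 0);              % residual diagonal (guard against roundoff)
    i = randsample(n, 1, true, d);    % sample pivot proportional to residual diagonal
    S(j) = i;
    col = A(:, i);
    Ahat = Ahat + col*col' / A(i,i);  % update the approximation
    A = A - col*col' / A(i,i);        % update the residual matrix
end
end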

The standard RPCholesky algorithm is already fast, but there is room for improvement. The standard RPCholesky method processes the matrix A one column a_i at a time, but modern computers are faster at processing blocks of columns. Using the power of rejection sampling, we can develop faster versions of RPCholesky that take advantage of blocked computations. These blocked implementations are important: While we could handle a million-by-million matrix using the standard RPCholesky method, blocked implementations can be up to 40\times faster. In this post, I’ll describe the simplest way of using rejection sampling to implement RPCholesky.

To set the stage, let’s first establish some notation and make a few observations. Let S_j = \{I_1,\ldots,I_j\} denote the first j selected random indices. With a little matrix algebra, one can show that the approximation \hat{A}^{(j)} produced by j steps of RPCholesky obeys the formula1For those interested, observe that \hat{A}^{(j)} is an example of a Nyström approximation. The connection between Nyström approximation and Cholesky decomposition is explained in this previous post of mine.

(1)   \[\hat{A}^{(j)} = A(:,S_j) A(S_j,S_j)^{-1}A(S_j,:). \]

Here, we are using MATLAB notation for indexing the matrix A. The residual matrix A^{(j)} is given by

    \[A^{(j)} = A - \hat{A}^{(j)} = A - A(:,S_j) A(S_j,S_j)^{-1}A(S_j,:).\]

Importantly for us, observe that we can evaluate the ith diagonal entry of A^{(j)} using the formula

(2)   \[A^{(j)}_{ii} = A_{ii} - A(i,S_j) A(S_j,S_j)^{-1} A(S_j,i).\]

Compared to operating on the full input matrix A, evaluating an entry A^{(j)}_{ii} is cheap, just requiring some arithmetic involving matrices and vectors of size j.

Here, we see a situation similar to randomized Kaczmarz. At each step of RPCholesky, we must sample a random index I_{j+1} from a target distribution w_i = A_{ii}^{(j)}, but it is expensive to evaluate all of the w_i’s. This is a natural setting for rejection sampling. As a proposal distribution, we may use the initial diagonal of A, \rho_i = A_{ii}. (One may verify that, as required, w_i = A_{ii}^{(j)} \le A_{ii} = \rho_i for all i.) This leads to a rejection sampling implementation for RPCholesky:

  1. Sample indices. For j=0,1,\ldots,k-1, sample random indices I_{j+1} from the target distribution w_i = A^{(j)}_{ii} using rejection sampling with proposal distribution \rho_i = A_{ii}. Entries A_{ii}^{(j)} are evaluated on an as-needed basis using (2).
  2. Build the approximation. Form the approximation \hat{A}^{(k)} all at once using (1).

By using rejection sampling, we have gone from an algorithm which handles the matrix one column at a time to a new algorithm which processes columns of the matrix A all at once through the formula (1). In the right situations, this new algorithm can be significantly faster than the original RPCholesky method.2In its essence, this rejection sampling algorithm for RPCholesky was first proposed in this paper of mine, which uses rejection sampling to apply RPCholesky to infinite-dimensional positive definite kernel operators. The ability to handle infinite-dimensional sampling problems is another great strength of the rejection sampling approach.
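
Below is a minimal MATLAB sketch of this rejection sampling implementation. It is a simplification for exposition, not the accelerated method from the paper itself.

function [Ahat, S] = rpcholesky_rejection(A, k)
% RPCholesky by rejection sampling (sketch): pivots are sampled against the
% residual diagonal, evaluated on an as-needed basis via formula (2), and
% the approximation is formed all at once at the end via formula (1).
n = size(A, 1);
rho = diag(A);                % proposal weights: the original diagonal of A
S = [];                       % selected pivot indices
while length(S) < k
    i = randsample(n, 1, true, rho);          % propose from the original diagonal
    w = A(i,i) - A(i,S) * (A(S,S) \ A(S,i));  % residual diagonal entry, formula (2)
    if rand() < w / rho(i)                    % accept with probability w_i / rho_i
        S = [S, i];
    end
    % In exact arithmetic, already-selected indices have w = 0 and are never re-accepted.
end
Ahat = A(:,S) * (A(S,S) \ A(S,:));            % form the approximation, formula (1)
end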

We have now seen two extreme cases: the standard RPCholesky algorithm that processes the columns of A one at a time and a rejection sampling implementation that operates on the columns of A all at once. In practice, the fastest algorithm sits between these extremes, using rejection sampling to sample column indices in moderate-size blocks (say, between 100 and 1000 columns at a time). This “goldilocks” algorithm is the accelerated RPCholesky method that we introduce in our paper. Check it out for details!

Conclusion: Rejection Sampling, an Algorithm Design Tool

We’ve now seen two applications, randomized Kaczmarz and randomly pivoted Cholesky, where rejection sampling can be used to speed up an algorithm. In both cases, we wanted to sample from a distribution over n row or column indices, but it was expensive to perform a census to compute the probability weight of each item individually. Rejection sampling offers a different approach, where we generate samples from a cheap-to-sample proposal distribution and only evaluate the probability weights of the proposed items. In the right situations, this rejection sampling approach can lead to significant computational speedups.

Rejection sampling has been used by statistics-oriented folks for decades, but the power of this technique seems less well-known to researchers in applied math, scientific computation, and related areas. I think there are a lot more exciting ways that we can use rejection sampling to design better, faster randomized algorithms.

Five Interpretations of Kernel Quadrature

I’m excited to share that my paper Kernel quadrature with randomly pivoted Cholesky, joint with Elvira Moreno, has been accepted to NeurIPS 2023 as a spotlight.

Today, I want to share with you a little about the kernel quadrature problem. To avoid this post getting too long, I’m going to write this post assuming familiarity with the concepts of reproducing kernel Hilbert spaces and Gaussian processes.

Integration and Quadrature

Integration is one of the most widely used operations in mathematics and its applications. As such, it is a basic problem of wide interest to develop numerical methods for evaluating integrals.

In this post, we will consider a quite general integration problem. Let \Omega\subseteq \real^d be a domain and let \mu be a (finite Borel) measure on \Omega. We consider the task of evaluating

    \[I[f] = \int_\Omega f(x) g(x) \, \mathrm{d}\mu(x).\]

One can imagine that g, \mu, and \Omega are fixed, but we may want to evaluate this same integral I[f] for multiple different functions f.

To evaluate it, we will design a quadrature approximation to the integral I[f]:

    \[\hat{I}_{w,s}[f] = \sum_{i=1}^n w_i f(s_i) \approx I[f].\]

Concretely, we wish to find real numbers w = (w_1,\ldots,w_n) \in \real^n and points s = (s_1,\ldots,s_n) \in \Omega^n such that the approximation \hat{I}_{w,s}[f] \approx I[f] is accurate.

Smoothness and Reproducing Kernel Hilbert Spaces

As is frequently the case in computational mathematics, the accuracy we can expect for this integration problem depends on the smoothness of the integrand f. The more smooth f is, the more accurately we can expect to compute I[f] for a given budget of computational effort.

In this post, we will measure smoothness using the reproducing kernel Hilbert space (RKHS) formalism. Let \mathcal{H} be an RKHS with norm \norm{\cdot}. We can interpret the norm as assigning a roughness \norm{f} to each function f. If \norm{f} is large, then f is rough; if \norm{f} is small, then f is smooth.

Associated to the RKHS \mathcal{H} is the titular reproducing kernel k. The kernel is a bivariate function k:\Omega\times\Omega\to\real. It is related to the RKHS inner product \langle\cdot,\cdot\rangle by the reproducing property

    \[f(x)=\langle f, k(x,\cdot)\rangle \quad \text{for every }f\in\mathcal{H},x\in\Omega.\]

Here, k(x,\cdot) represents the univariate function obtained by setting the first input of k to be x.

The Ideal Weights

To design a quadrature rule, we have to set the nodes s = (s_1,\ldots,s_n) \in \Omega^n and weights w = (w_1,\ldots,w_n)\in\real^n. Let’s first assume that the nodes s are fixed, and talk about how to pick the weights w.

There’s one choice of weights w^\star that we’ll call the ideal weights. There are (at least) five equivalent ways of characterizing the ideal weights, and we’ll present all of them. As an exercise, you can try and convince yourself that these characterizations are equivalent, giving rise to the same weights.

Interpretation 1: Exactness

A standard way of designing quadrature rules is to make them exact (i.e., error-free) for some class of functions. For instance, many classical quadrature rules are exact for polynomials of degree up to n-1.

For kernel quadrature, it makes sense to design the quadrature rule to be exact for the kernel function at the selected nodes. That is, we require

    \[\hat{I}_{w^\star,s}[k(s_i,\cdot)]=I[k(s_i,\cdot)] \quad \text{for } i=1,2,\ldots,n.\]

Enforcing exactness gives us n linear equations for the n unknowns w^\star_1,\ldots,w^\star_n:

    \[\sum_{j=1}^n k(s_i,s_j)w^\star_j = \int_\Omega k(s_i,x) g(x)\,\mathrm{d}\mu(x) \quad \text{for }i=1,2,\ldots,n.\]

Under mild conditions, this system of linear equations is uniquely solvable, and the solution w^\star\in\real^n is the ideal weights.
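
As a concrete illustration, here is a MATLAB sketch that computes the ideal weights by enforcing exactness in a hypothetical setup: a Gaussian kernel on \Omega = [0,1], \mu the uniform measure, and g \equiv 1, with the integrals \int_0^1 k(s_i,x)\,\mathrm{d}\mu(x) approximated on a fine grid. The kernel, bandwidth, and nodes are arbitrary choices made for the example.

% Ideal kernel quadrature weights via the exactness conditions (Interpretation 1).
k = @(x, y) exp(-(x - y).^2 / 0.1);  % Gaussian kernel (bandwidth chosen arbitrarily)
s = linspace(0.05, 0.95, 10)';       % quadrature nodes (fixed in advance)

K = k(s, s');                        % kernel matrix with entries k(s_i, s_j)
xx = linspace(0, 1, 10000);          % fine grid to approximate the integrals
z = mean(k(s, xx), 2);               % z_i approximates int_0^1 k(s_i, x) dx
w_star = K \ z;                      % ideal weights solve K * w_star = z

% Use the rule to approximate int_0^1 f(x) dx for a test function f
f = @(x) sin(2*pi*x.^2);
quad_estimate = w_star' * f(s);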

Interpretation 2: Interpolate and Integrate

Here’s another very classical way of designing a quadrature rule. First, interpolate the function values (s_i,f(s_i)) at the nodes, obtaining an interpolant \hat{f}. Then, obtain an approximation to the integral by integrating the interpolant:

    \[\hat{I}_{w^\star,s}[f] \coloneqq \int_\Omega \hat{f}(x) g(x) \, \mathrm{d}\mu(x).\]


In our context, the appropriate interpolation method is kernel interpolation.1Kernel interpolation is also called Gaussian process regression or kriging though (confusingly) these terms can also refer to slightly different methods. It is the regularization-free limit of kernel ridge regression. The kernel interpolant is defined to be the minimum-norm function \hat{f} that interpolates the data:

    \[\hat{f} = \argmin \{ \norm{h} : h(s_i) = f(s_i) \text{ for } i=1,\ldots,n\}.\]

Remarkably, this infinite-dimensional problem has a tractably computable solution. In fact, \hat{f} is the unique function of the form

    \[\hat{f} = \sum_{i=1}^n \alpha_i k(\cdot,s_i)\]

that agrees with f on the points s_1,\ldots,s_n. With a little algebra, you can show that the integral of \hat{f} is

    \[I[\hat{f}] = \sum_{i=1}^n w^\star_i f(s_i),\]

where w^\star are the ideal weights.

Interpretation 3: Minimizing the Worst-Case Error

Define the worst-case error of weights w and nodes s to be

    \[\operatorname{Err}(w,s)=\sup_{\norm{f}\le 1}\left| I[f] - \hat{I}_{w,s}[f]\right|.\]

The quantity \operatorname{Err}(w,s) is the highest possible quadrature error for a function f\in\mathcal{H} of norm at most 1.

Having defined the worst-case error, the ideal weights are precisely the weights that minimize this quantity

    \[w^\star=\operatorname*{argmin}_{w\in\real^n}\operatorname{Err}(w,s).\]

Interpretation 4: Minimizing the Mean-Square Error

The next two interpretations of the ideal weights will adopt a probabilistic framing. A Gaussian process is a random function f such that f’s values at any collection of points are (jointly) Gaussian random variables. We write f\sim \operatorname{GP}(0,k) for a mean-zero Gaussian process with covariance function k:

    \[\Cov(f(x),f(y))=k(x,y)\quad \text{for every } x,y\in\Omega.\]


Define the mean-square quadrature error of weights w and nodes s to be

    \[\operatorname{MSE}(w,s)\coloneqq \expect_{f\sim\operatorname{GP}(0,k)} \left( I[f] - \hat{I}_{w,s}[f] \right)^2.\]

The mean-square error reports the expected squared quadrature error over all functions f drawn from a Gaussian process with covariance function k.

Pleasantly, the mean-square error is equal to the square of the worst-case error

    \[\operatorname{MSE}(w,s)=\operatorname{Err}(w,s)^2.\]

As such, the ideal weights also minimize the mean-square error

    \[w^\star=\operatorname*{argmin}_{w\in\real^n}\operatorname{MSE}(w,s).\]

Interpretation 5: Conditional Expectation

For our last interpretation, again consider a Gaussian process h\sim \operatorname{GP}(0,k). The integral of this random function I[h] is a random variable. To numerically integrate a function f, compute the expectation of I[h] conditional on h agreeing with f at the quadrature nodes:

    \[\hat{I}_{w^\star,s}[f]\coloneqq \expect_{h\sim\operatorname{GP}(0,k)}[I[h] \mid h(s_i)=f(s_i) \text{ for } i=1,\ldots,n].\]

One can show that this procedure yields the quadrature scheme with the ideal weights.

Conclusion

We’ve just seen five sensible ways of defining the ideal weights for quadrature in a general reproducing kernel Hilbert space. Remarkably, all five lead to exactly the same choice of weights. These five equivalent characterizations give me more confidence that the ideal weights really are the “right” or “natural” choice for kernel quadrature.

That said, there are other reasonable requirements that we might want to impose on the weights. For instance, if \mu is a probability measure and g\equiv 1, it is reasonable to add an additional constraint that the weights w lie in the probability simplex

    \[w\in\Delta\coloneqq \left\{ p\in\real^n_+ : \sum_{i=1}^n p_i = 1\right\}.\]

With this additional stipulation, a quadrature rule can be interpreted as integrating f against a discrete probability measure \hat{\mu}=\sum_{i=1}^n w_i\delta_{s_i}; thus, in effect, quadrature amounts to approximating one probability measure \mu by another \hat{\mu}. Additional constraints such as these can easily be imposed when using the optimization characterizations 3 and 4 of the ideal weights. See this paper for details.

What About the Nodes?

We’ve spent a lot of time talking about how to pick the quadrature weights, but how should we pick the nodes s\in\Omega^n? To pick the nodes, it seems sensible to try and minimize the worst-case error \operatorname{Err}(w^\star,s) with the ideal weights w^\star. For this purpose, we can use the following formula:

    \[\operatorname{Err}(w^\star,s) = \norm{\int_\Omega (k(\cdot,x) - \hat{k}_s(\cdot,x)) g(x) \, \mathrm{d}\mu(x)}.\]

Here, \hat{k}_s is the Nyström approximation to the kernel k induced by the nodes s, defined to be

    \[\hat{k}_s(x,y) = k(x,s) k(s,s)^{-1} k(s,y).\]

We have written k(s,s) for the kernel matrix with ij entry k(s_i,s_j) and k(x,s) and k(s,y) for the row and column vectors with ith entry k(x,s_i) and k(s_i,y).
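
In code, the Nyström approximation is easy to form from kernel evaluations. Here is a small MATLAB sketch, reusing the hypothetical Gaussian-kernel setup from the earlier quadrature example.

% Nystrom approximation of a kernel induced by a set of nodes s (toy example).
k = @(x, y) exp(-(x - y).^2 / 0.1);
s = linspace(0.05, 0.95, 10)';
khat = @(x, y) k(x, s') * (k(s, s') \ k(s, y)); % khat_s(x,y) = k(x,s) k(s,s)^{-1} k(s,y)

% Compare the kernel and its Nystrom approximation at a pair of sample points
x = 0.3; y = 0.7;
pointwise_error = abs(k(x, y) - khat(x, y));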

I find the appearance of the Nyström approximation in this context to be surprising and delightful. Previously on this blog, we’ve seen (column) Nyström approximation in the context of matrix low-rank approximation. Now, a continuum analog of the matrix Nyström approximation has appeared in the error formula for numerical integration.

The appearance of the Nyström approximation in the kernel quadrature error \operatorname{Err}(w^\star,s) also suggests a strategy for picking the nodes.

Node selection strategy. We should pick the nodes s to make the Nyström approximation \hat{k}_s \approx k as accurate as possible.

The closer \hat{k}_s is to k, the smaller the function k(\cdot,x) - \hat{k}_s(\cdot,x) is and, thus, the smaller the error

    \[\operatorname{Err}(w^\star,s) = \norm{\int_\Omega (k(\cdot,x) - \hat{k}_s(\cdot,x)) g(x) \, \mathrm{d}\mu(x)}.\]

Fortunately, we have randomized matrix algorithms for picking good nodes for matrix Nyström approximation such as randomly pivoted Cholesky, ridge leverage score sampling, and determinantal point process sampling; maybe these matrix tools can be ported to the continuous kernel setting?

Indeed, all three of these algorithms—randomly pivoted Cholesky, ridge leverage score sampling, and determinantal point process sampling—have been studied for kernel quadrature. The first of these algorithms, randomly pivoted Cholesky, is the subject of our paper. We show that this simple, adaptive sampling method produces excellent nodes for kernel quadrature. Intuitively, randomly pivoted Cholesky is effective because it is repulsive: After having picked nodes s_1,\ldots,s_i, it has a high probability of placing the next node s_{i+1} far from the previously selected nodes.

The following image shows 20 nodes selected by randomly pivoted Cholesky in a crescent-shaped region. The cyan–pink shading denotes the probability distribution for picking the next node. We see that the center of the crescent does not have any nodes, and thus is most likely to receive a node during the next round of sampling.

In our paper, we demonstrate—empirically and theoretically—that randomly pivoted Cholesky produces excellent nodes for quadrature. We also discuss efficient rejection sampling algorithms for sampling nodes with the randomly pivoted Cholesky distribution. Check out the paper for details!

Which Sketch Should I Use?

This is the second of a sequence of two posts on sketching, which I’m doing on the occasion of my new paper on the numerical stability of the iterative sketching method. For more on what sketching is and how it can be used to solve computational problems, I encourage you to check out the first post.

The goals of this post are more narrow. I seek to answer the question:

Which sketching matrix should I use?

To cut to the chase, my answer to this question is:

Sparse sign embeddings are a sensible default for sketching.

There are certainly cases when sparse sign embeddings are not the best type of sketch to use, but I hope to convince you that they’re a good sketching matrix to use for most purposes.

Experiments

Let’s start things off with some numerical experiments.1Code for all numerical experiments can be found on the blogpost branch of the Github for my recent paper. We’ll compare three types of sketching matrices: Gaussian embeddings, a subsampled randomized trigonometric transform (SRTT), and sparse sign embeddings. See the last post for descriptions of these sketching matrices. I’ll discuss a few additional types of sketching matrices that require more discussion at the end of this post.

Recall that a sketching matrix S \in \real^{d\times n} seeks to compress a high-dimensional matrix A \in \real^{n\times k} or vector b\in\real^n to a lower-dimensional sketched matrix SA or vector Sb. The quality of a sketching matrix for a matrix A is measured by its distortion \varepsilon, defined to be the smallest number \varepsilon > 0 such that

    \[(1-\varepsilon) \norm{x} \le \norm{Sx} \le (1+\varepsilon) \norm{x} \quad \text{for every } x \in \operatorname{col}(A).\]

Here, \operatorname{col}(A) denotes the column space of the matrix A.
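
Since the distortion is defined over the whole column space of A, it can be computed from the singular values of S applied to an orthonormal basis of \operatorname{col}(A). Here is a small MATLAB sketch of such a distortion measurement; the dimensions and the Gaussian embedding are placeholder choices for illustration, not the exact setup of the experiments below.

% Measure the distortion of a sketch S for col(A): the smallest eps with
% (1-eps)||x|| <= ||Sx|| <= (1+eps)||x|| for every x in col(A).
n = 1e4; k = 50; d = 400;
A = sprand(n, k, 0.01);           % sparse test matrix
S = randn(d, n) / sqrt(d);        % Gaussian embedding (for illustration)

[Q, ~] = qr(full(A), 0);          % orthonormal basis of col(A)
sigma = svd(S * Q);               % singular values of the sketched basis
distortion = max(max(sigma) - 1, 1 - min(sigma));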

Timing

We begin with a timing test. We will measure three different times for each embedding:

  1. Construction. The time required to generate the sketching matrix S.
  2. Vector apply. The time to apply the sketch to a single vector.
  3. Matrix apply. The time to apply the sketch to an n\times 200 matrix.

We will test with input dimension n = 10^6 (one million) and output dimension d = 400. For the SRTT, we use the discrete cosine transform as our trigonometric transform. For the sparse sign embedding, we use a sparsity parameter \zeta = 8.

Here are the results (timings averaged over 20 trials):

Our conclusions are as follows:

  • Sparse sign embeddings are definitively the fastest to apply, being 3–20× faster than the SRTT and 74–100× faster than Gaussian embeddings.
  • Sparse sign embeddings are modestly slower to construct than the SRTT, but much faster to construct than Gaussian embeddings.

Overall, the conclusion is that sparse sign embeddings are the fastest sketching matrices by a wide margin: For an “end-to-end” workflow involving generating the sketching matrix S \in \real^{400\times 10^6} and applying it to a matrix A\in\real^{10^6\times 200}, sparse sign embeddings are 14× faster than SRTTs and 73× faster than Gaussian embeddings.2More timings are reported in Table 1 of this paper, which I credit for inspiring my enthusiasm for the sparse sign embedding.

Distortion

Runtime is only one measure of the quality of a sketching matrix; we also must care about the distortion \varepsilon. Fortunately, for practical purposes, Gaussian embeddings, SRTTs, and sparse sign embeddings all tend to have similar distortions. Therefore, we are free to use sparse sign embeddings, as they are typically the fastest.

Consider the following test. We generate a sparse random test matrix A of size n\times k for n = 10^5 and k = 50 using the MATLAB sprand function; we set the sparsity level to 1%. We then compare the distortion of Gaussian embeddings, SRTTs, and sparse sign embeddings across a range of sketching dimensions d between 100 and 10,000. We report the distortion averaged over 100 trials. The theoretically predicted value \varepsilon = \sqrt{k/d} (equivalently, d=k/\varepsilon^2) is shown as a dashed line.

I find these results remarkable. All three embeddings exhibit essentially the same distortion parameter predicted by the Gaussian theory.

It would be premature to declare success having only tested on one type of test matrix A. Consider the following four test matrices:

  • Sparse: The test matrix from above.
  • Dense: A\in\real^{10^6\times 50} is taken to be a matrix with independent standard Gaussian random values.
  • Khatri–Rao: A\in\real^{50^3\times 50} is taken to be the Khatri–Rao product of three Haar random orthogonal matrices.
  • Identity: A = \twobyone{I}{0} \in\real^{10^6\times 50} is taken to be the 50\times 50 identity matrix stacked onto a (10^6-50)\times 50 matrix of zeros.

The performance of sparse sign embeddings (again with sparsity parameter \zeta = 8) is shown below:

We see that for the first three test matrices, the performance closely follows the expected value \varepsilon = \sqrt{k/d}. However, for the last test matrix “Identity”, we see the distortion begins to slightly exceed this predicted distortion for d/k\ge 20.

To improve sparse sign embeddings for higher values of d/k, we can increase the value of the sparsity parameter \zeta. We recommend

    \[\zeta = \max \left( 8 , \left\lceil 2\sqrt{\frac{d}{k}} \right\rceil \right).\]

With this higher sparsity level, the distortion closely tracks \varepsilon = \sqrt{k/d} for all four test matrices:

Conclusion

Implemented appropriately (see below), sparse sign embeddings can be faster than other sketching matrices by a wide margin. The parameter choice \zeta = 8 is enough to ensure the distortion closely tracks \varepsilon = \sqrt{k/d} for most test matrices. For the toughest test matrices, a slightly larger sparsity parameter \zeta = \max(8, \lceil 2\sqrt{d/k}\rceil) can be necessary to achieve the optimal distortion.

While these tests are far from comprehensive, they are consistent with the uniformly positive results for sparse sign embeddings reported in the literature. We believe that this evidence supports the argument that sparse sign embeddings are a sensible default sketching matrix for most purposes.

Sparse Sign Embeddings: Theory and Practice

Given the highly appealing performance characteristics of sparse sign embeddings, it is worth saying a few more words about these embeddings and how they perform in both theory and practice.

Recall that a sparse sign embedding is a random matrix of the form

    \[S = \frac{1}{\sqrt{\zeta}} \begin{bmatrix} s_1 & \cdots & s_n \end{bmatrix}.\]

Each column s_i is independently generated to contain exactly \zeta nonzero entries in uniformly random positions. The value of each nonzero entry of s_i is chosen to be either +1 or -1 with 50/50 odds.

Parameter Choices

The goal of sketching is to reduce vectors of length n to a smaller dimension d. For linear algebra applications, we typically want to preserve all vectors in the column space of a matrix A up to distortion \varepsilon > 0:

    \[(1-\varepsilon) \norm{x} \le \norm{Sx} \le (1+\varepsilon) \norm{x} \quad \text{for all }x \in \operatorname{col}(A).\]

To use sparse sign embeddings, we must choose the parameters appropriately:

Given a dimension k and a target distortion \varepsilon, how do we pick d and \zeta?

Based on the experiments above (and other testing reported in the literature), we recommend the following parameter choices in practice:

    \[d = \frac{k}{\varepsilon^2} \quad \text{and} \quad \zeta = \max\left(8,\frac{2}{\varepsilon}\right).\]

The parameter choice \zeta = 8 is advocated by Tropp, Yurtsever, Udell, and Cevher; they mention experimenting with parameter values as small as \zeta = 2. The value \zeta = 1 has demonstrated deficiencies and should almost always be avoided (see below). The scaling d \approx k/\varepsilon^2 is derived from the analysis of Gaussian embeddings. As Martinsson and Tropp argue, the analysis of Gaussian embeddings tends to be reasonably descriptive of other well-designed random embeddings.

The best-known theoretical analysis, due to Cohen, suggests a more cautious parameter setting for sparse sign embeddings:

    \[d = \mathcal{O} \left( \frac{k \log k}{\varepsilon^2} \right) \quad \text{and} \quad \zeta = \mathcal{O}\left( \frac{\log k}{\varepsilon} \right).\]

The main difference between Cohen’s analysis and the parameter recommendations above is the presence of the \log k factor and the lack of explicit constants in the O-notation.

Implementation

For good performance, it is imperative to store S using either a purpose-built data structure or a sparse matrix format (such as a MATLAB sparse matrix or scipy sparse array).

If a sparse matrix library is unavailable, then either pursue a dedicated implementation or use a different type of embedding; sparse sign embeddings are just as slow as Gaussian embeddings if they are stored in an ordinary non-sparse matrix format!

Even with a sparse matrix format, it can require care to generate and populate the random entries of the matrix S. Here, for instance, is a simple function for generating a sparse sign matrix in MATLAB:

function S = sparsesign_slow(d,n,zeta)
cols = kron((1:n)',ones(zeta,1)); % zeta nonzeros per column
vals = 2*randi(2,n*zeta,1) - 3; % uniform random +/-1 values
rows = zeros(n*zeta,1);
for i = 1:n
   rows((i-1)*zeta+1:i*zeta) = randsample(d,zeta);
end
S = sparse(rows, cols, vals / sqrt(zeta), d, n);
end

Here, we specify the rows, columns, and values of the nonzero entries before assembling them into a sparse matrix using the MATLAB sparse command. Since there are exactly \zeta nonzeros per column, the column indices are easy to generate. The values are uniformly \pm 1/\sqrt{\zeta} and can also be generated using a single line. The real challenge to generating sparse sign embeddings in MATLAB is the row indices, since each batch of \zeta row indices must be chosen uniformly at random between 1 and d without replacement. This is accomplished in the above code by a for loop, generating row indices \zeta at a time using the slow randsample function.

As its name suggests, sparsesign_slow is very slow: generating a 200\times 10^7 sparse sign embedding with sparsity \zeta = 8 requires 53 seconds!

Fortunately, we can do (much) better. By rewriting the code in C and directly generating the sparse matrix in the CSC format MATLAB uses, generating this same 200 by 10 million sparse sign embedding takes just 0.4 seconds, a speedup of 130× over the slow MATLAB code. A C implementation of the sparse sign embedding that can be used in MATLAB using the MEX interface can be found in this file in the Github repo for my recent paper.

Other Sketching Matrices

Let’s round out the discussion by mentioning other types of sketching matrices not considered in the empirical comparison above.

Coordinate Sampling

Another family of sketching matrices that we haven’t talked about are coordinate sampling sketches. A coordinate sampling sketch consists of indices 1\le i_1,\ldots,i_d\le n and weights w_1,\ldots,w_d \in \real. To apply S, we sample the indices i_1,\ldots,i_d and reweight them using the weights:

    \[b \in \real^n \longmapsto Sb = (w_1 b_{i_1},\ldots,w_db_{i_d}) \in \real^d.\]

Coordinate sampling is very appealing: To apply S to a matrix or vector requires no matrix multiplications or trigonometric transforms, just picking out some entries or rows and rescaling them.
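
As a minimal illustration, applying a coordinate sampling sketch in MATLAB takes one line once the indices and weights are given (here idx is assumed to be a length-d index vector and w a length-d column vector of weights).

% Apply a coordinate sampling sketch defined by indices idx and weights w.
Sb = w .* b(idx);      % sketch of a vector b: select entries and rescale
SA = w .* A(idx, :);   % sketch of a matrix A: select rows and rescale them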

In order for coordinate sampling to be effective, we need to pick the right indices. Below, we compare two coordinate sampling sketching approaches, uniform sampling and leverage score sampling (both with replacement), to the sparse sign embedding with the suggested parameter setting \zeta = \max(8,\lceil 2\sqrt{d/k}\rceil) for the hard “Identity” test matrix used above.

We see right away that the uniform sampling fails dramatically on this problem. That’s to be expected. All but 50 of 100,000 rows of A are zero, so picking rows uniformly at random will give nonsense with very high probability. Uniform sampling can work well for matrices A which are “incoherent”, with all rows of A being of “similar importance”.

Conclusion (Uniform sampling). Uniform sampling is a risky method; it works excellently for some problems, but fails spectacularly for others. Use only with caution!

The ridge leverage score sampling method is more interesting. Unlike all the other sketches we’ve discussed in this post, ridge leverage score sampling is data-dependent. First, it computes a leverage score \ell_i for each row of A and then samples rows with probabilities proportional to these scores. For high enough values of d, ridge leverage score sampling performs slightly (but only slightly) worse than the characteristic \varepsilon = \sqrt{k/d} scaling we expect for an oblivious subspace embedding.

Ultimately, leverage score sampling has two disadvantages when compared with oblivious sketching matrices:

  • Higher distortion, higher variance. The distortion of a leverage score sketch is higher on average, and more variable, than that of an oblivious sketch, which achieves very consistent performance.
  • Computing the leverage scores. In order to implement this sketch, the leverage scores \ell_i have to first be computed or estimated. This is a nontrivial algorithmic problem; the most direct way of computing the leverage scores requires a QR decomposition at \mathcal{O}(nk^2) cost, much higher than other types of sketches.

There are settings when coordinate sampling methods, such as leverage scores, are well-justified:

  • Structured matrices. For some matrices A, the leverage scores might be very cheap to compute or approximate. In such cases, coordinate sampling can be faster than oblivious sketching.
  • “Active learning”. For some problems, each entry of the vector b or row of the matrix A may be expensive to generate. In this case, coordinate sampling has the distinct advantage that computing Sb or SA only requires generating the entries of b or rows of A for the d randomly selected indices i_1,\ldots,i_d.

Ultimately, oblivious sketching and coordinate sampling both have their place as tools in the computational toolkit. For the reasons described above, I believe that oblivious sketching should usually be preferred to coordinate sampling in the absence of a special reason to prefer the latter.

Tensor Random Embeddings

There are a number of sketching matrices with tensor structure; see here for a survey. These types of sketching matrices are very well-suited to tensor computations. If tensor structure is present in your application, I would put these types of sketches at the top of my list for consideration.

CountSketch

The CountSketch sketching matrix is the \zeta = 1 case of the sparse sign embedding. CountSketch has serious deficiencies, and should only be used in practice with extreme care.

Consider the “Identity” test matrix from above but with parameter k = 200, and compare the distortion of CountSketch to the sparse sign embedding with parameters \zeta=2,4,8:

We see that the distortion of the CountSketch remains persistently high at 100% until the sketching dimension d is taken >4300, 20× higher than k.

CountSketch is bad because it requires d to be proportional to k^2/\varepsilon^2 in order to achieve distortion \varepsilon. For all of the other sketching matrices we’ve considered, we’ve only required d to be proportional to k/\varepsilon^2 (or perhaps (k\log k)/\varepsilon^2). This difference between d\propto k^2 for CountSketch and d\propto k for other sketching matrices is at the root of CountSketch’s woefully bad performance on some inputs.3Here, the symbol \propto is an informal symbol meaning “proportional to”.

The fact that CountSketch requires d\propto k^2 is simple to show. It’s basically a variant on the famous birthday problem. We include a discussion at the end of this post.4In fact, any oblivious sketching matrix with only 1 nonzero entry per column must have d\gtrsim k^2. This is Theorem 16 in the following paper.

There are ways of fixing the CountSketch. For instance, we can use a composite sketch S = S_2 \cdot S_1, where S_1 is a CountSketch of size k^2/\varepsilon^2 \times n and S_2 is a Gaussian sketching matrix of size k/\varepsilon^2 \times k^2/\varepsilon^2.5This construction is from this paper. For most applications, however, salvaging CountSketch doesn’t seem worth it; sparse sign embeddings with even \zeta = 2 nonzeros per column are already way more effective and reliable than a plain CountSketch.

Conclusion

By now, sketching is quite a big field, with dozens of different proposed constructions for sketching matrices. So which should you use? For most use cases, sparse sign embeddings are a good choice; they are fast to construct and apply and offer uniformly good distortion across a range of matrices.

Bonus: CountSketch and the Birthday Problem
The point of this bonus section is to prove the following (informal) theorem:

Let A be the “Identity” test matrix above. If S\in\real^{d\times n} is a CountSketch matrix with output dimension d\ll k^2, then the distortion of S for \operatorname{col}(A) is \varepsilon\ge 1 with high probability.

Let’s see why. By the structure of the matrix A, SA has the form

    \[SA = \begin{bmatrix} s_1 & \cdots & s_k \end{bmatrix}\]

where each vector s_i\in\real^d has a single \pm1 in a uniformly random location j_i.

Suppose that the indices j_1,\ldots,j_k are not all different from each other, say j_i = j_{i'}. Set x = e_i - e_{i'}, where e_i is the standard basis vector with 1 in position i and zeros elsewhere. Then, (SA)x = 0 but \norm{x} = \sqrt{2}. Thus, for the distortion relation

    \[(1-\varepsilon) \norm{x} =(1-\varepsilon)\sqrt{2} \le 0 = \norm{(SA)x}\]

to hold, \varepsilon \ge 1. Thus,

    \[\prob \{ \varepsilon \ge 1 \} \ge \prob \{ j_1,\ldots,j_k \text{ are not distinct} \}.\]

For a moment, let’s put aside our analysis of the CountSketch, and turn our attention to a famous puzzle, known as the birthday problem:

How many people have to be in a room before there’s at least a 50% chance that two people share the same birthday?

The counterintuitive or “paradoxical” answer: 23. This is much smaller than many people’s intuition, as there are 365 possible birthdays and 23 is much smaller than 365.

The reason for this surprising result is that, in a room of 23 people, there are {23 \choose 2} = 23\cdot 22/2=253 pairs of people. Each pair of people has a 1/365 chance of sharing a birthday, so the expected number of shared birthdays in a room of 23 people is 253/365 \approx 0.69. Since 0.69 birthdays are shared on average in a room of 23 people, it is perhaps less surprising that 23 is the critical number at which the chance of two people sharing a birthday exceeds 50%.

Hopefully, the similarity between the birthday problem and CountSketch is becoming clear. Each pair of indices j_i and j_{i'} in CountSketch has a 1/d chance of being the same. There are {k\choose 2} \approx k^2/2 pairs of indices, so the expected number of equal indices j_i = j_{i'} is \approx k^2/2d. Thus, we should anticipate that d \gtrapprox k^2 is required to ensure that j_1,\ldots,j_k are distinct with high probability.

Let’s calculate things out a bit more precisely. First, realize that

    \[\prob \{ j_1,\ldots,j_k \text{ are not distinct} \} = 1 - \prob \{ j_1,\ldots,j_k \text{ are distinct} \}.\]

To compute the probability that j_1,\ldots,j_k are distinct, imagine introducing each j_i one at a time. Assuming that j_1,\ldots,j_{i-1} are all distinct, the probability that j_1,\ldots,j_i are distinct is just the probability that j_i does not take any of the i-1 values j_1,\ldots,j_{i-1}. This probability is

    \[\prob\{ j_1,\ldots,j_i \text{ are distinct} \mid j_1,\ldots,j_{i-1} \text{ are distinct}\} = 1 - \frac{i-1}{d}.\]

Thus, by the chain rule for probability,

    \[\prob \{ j_1,\ldots,j_k \text{ are distinct} \} = \prod_{i=1}^k \left(1 - \frac{i-1}{d} \right).\]

To bound this quantity, use the numeric inequality 1-x\le \exp(-x) for every x \in \real, obtaining

    \[\mathbb{P} \{ j_1,\ldots,j_k \text{ are distinct} \} \le \prod_{i=0}^{k-1} \exp\left(-\frac{i}{d}\right) = \exp \left( -\frac{1}{d}\sum_{i=0}^{k-1} i \right) = \exp\left(-\frac{k(k-1)}{2d}\right).\]

Thus, we conclude that

    \[\prob \{ \varepsilon \ge 1 \} \ge 1-\prob \{ j_1,\ldots,j_k \text{ are distinct} \}\ge 1-\exp\left(-\frac{k(k-1)}{2d}\right).\]

Solving this inequality, we conclude that

    \[\prob\{\varepsilon \ge 1\} \ge \frac{1}{2} \quad \text{if} \quad d \le \frac{k(k-1)}{2\ln 2}.\]

This is a quantitative version of our informal theorem from earlier.
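
As a quick sanity check on this bound, here is a small Monte Carlo experiment (a sketch of my own; the values of k, d, and the number of trials are arbitrary choices) that estimates the probability that k uniformly random locations out of d are not all distinct:

% Monte Carlo check of the collision probability (a sketch; parameters are illustrative)
k = 200; d = floor(k*(k-1)/(2*log(2))); trials = 1e4;
collisions = 0;
for t = 1:trials
    j = randi(d, k, 1);                               % k uniformly random locations
    collisions = collisions + (numel(unique(j)) < k); % did a collision occur?
end
fprintf('Estimated P(not distinct) = %.2f (bound guarantees >= 0.5)\n', collisions/trials)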

Does Sketching Work?

I’m excited to share that my paper, Fast and forward stable randomized algorithms for linear least-squares problems has been released as a preprint on arXiv.

With the release of this paper, now seemed like a great time to discuss a topic I’ve been wanting to write about for a while: sketching. For the past two decades, sketching has become a widely used algorithmic tool in matrix computations. Despite this long history, questions still seem to be lingering about whether sketching really works.

In this post, I want to take a critical look at the question “does sketching work”? Answering this question requires answering two basic questions:

  1. What is sketching?
  2. What would it mean for sketching to work?

I think a large part of the disagreement over the efficacy of sketching boils down to different answers to these questions. By considering different possible answers to these questions, I hope to provide a balanced perspective on the utility of sketching as an algorithmic primitive for solving linear algebra problems.

Sketching

In matrix computations, sketching is really a synonym for (linear) dimensionality reduction. Suppose we are solving a problem involving one or more high-dimensional vectors b \in \real^n or perhaps a tall matrix A\in \real^{n\times k}. A sketching matrix is a d\times n matrix S \in \real^{d\times n} where d \ll n. When multiplied into a high-dimensional vector b or tall matrix A, the sketching matrix S produces compressed or “sketched” versions Sb and SA that are much smaller than the original vector b and matrix A.

Let \mathsf{E}=\{x_1,\ldots,x_p\} be a collection of vectors. For S to be a “good” sketching matrix for \mathsf{E}, we require that S preserves the lengths of every vector in \mathsf{E} up to a distortion parameter \varepsilon>0:

(1)   \[(1-\varepsilon) \norm{x}\le\norm{Sx}\le(1+\varepsilon)\norm{x} \]

for every x in \mathsf{E}.

For linear algebra problems, we often want to sketch a matrix A. In this case, the appropriate set \mathsf{E} that we want our sketch to be “good” for is the column space of the matrix A, defined to be

    \[\operatorname{col}(A) \coloneqq \{ Ax : x \in \real^k \}.\]

Remarkably, there exist many sketching matrices that achieve distortion \varepsilon for \mathsf{E}=\operatorname{col}(A) with an output dimension of roughly d \approx k / \varepsilon^2. In particular, the sketching dimension d is proportional to the number of columns k of A. This is pretty neat! We can design a single sketching matrix S which preserves the lengths of all infinitely-many vectors Ax in the column space of A.
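
As a concrete way of working with this definition, here is a minimal MATLAB sketch (my own illustration) that measures the distortion of a given sketching matrix S for \operatorname{col}(A): if Q is an orthonormal basis for \operatorname{col}(A), the smallest \varepsilon satisfying (1) is determined by the extreme singular values of SQ.

% Measure the distortion of S for col(A) (a sketch; assumes S and A are given)
[Q, ~] = qr(A, 0);                       % orthonormal basis for col(A)
s = svd(S*Q);                            % singular values of the sketched basis
epsilon = max(max(s) - 1, 1 - min(s));   % smallest epsilon satisfying (1)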

Sketching Matrices

There are many types of sketching matrices, each with different benefits and drawbacks. Many sketching matrices are based on randomized constructions in the sense that entries of S are chosen to be random numbers. Broadly, sketching matrices can be classified into two types:

  • Data-dependent sketches. The sketching matrix S is constructed to work for a specific set of input vectors \mathsf{E}.
  • Oblivious sketches. The sketching matrix S is designed to work for an arbitrary set of input vectors \mathsf{E} of a given size (i.e., \mathsf{E} has p elements) or dimension (\mathsf{E} is a k-dimensional linear subspace).

We will only discuss oblivious sketching for this post. We will look at three types of sketching matrices: Gaussian embeddings, subsampled randomized trigonometric transforms, and sparse sign embeddings.

The details of how these sketching matrices are built and their strengths and weaknesses can be a little bit technical. All three constructions are independent from the rest of this article and can be skipped on a first reading. The main point is that good sketching matrices exist and are fast to apply: Reducing b\in\real^n to Sb\in\real^{d} requires roughly \mathcal{O}(n\log n) operations, rather than the \mathcal{O}(dn) operations we would expect to multiply a d\times n matrix and a vector of length n.1Here, \mathcal{O}(\cdot) is big O notation.

Gaussian Embeddings

The simplest type of sketching matrix S\in\real^{d\times n} is obtained by (independently) setting every entry of S to be a Gaussian random number with mean zero and variance 1/d. Such a sketching matrix is called a Gaussian embedding.2Here, embedding is a synonym for sketching matrix.
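
For instance, here is a minimal sketch of constructing and applying a Gaussian embedding in MATLAB (the dimensions d and n are arbitrary illustrative choices):

% Gaussian embedding (a sketch; dimensions are illustrative)
d = 400; n = 1e4;
b = randn(n, 1);             % an example vector to be sketched
S = randn(d, n) / sqrt(d);   % iid entries with mean zero and variance 1/d
c = S * b;                   % sketched vector, O(d*n) cost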

Benefits. Gaussian embeddings are simple to code up, requiring only a standard matrix product to apply to a vector Sb or matrix SA. Gaussian embeddings admit a clean theoretical analysis, and their mathematical properties are well-understood.

Drawbacks. Computing Sb for a Gaussian embedding costs \mathcal{O}(dn) operations, significantly slower than the other sketching matrices we will consider below. Additionally, generating and storing a Gaussian embedding can be computationally expensive.

Subsampled Randomized Trigonometric Transforms

The subsampled randomized trigonometric transform (SRTT) sketching matrix takes a more complicated form. The sketching matrix is defined to be a scaled product of three matrices

    \[S = \sqrt{\frac{n}{d}} \cdot R \cdot F \cdot D.\]

These matrices have the following definitions:

  • D\in\real^{n\times n} is a diagonal matrix whose entries are each a random \pm 1 (chosen independently with equal probability).
  • F\in\real^{n\times n} is a fast trigonometric transform such as a fast discrete cosine transform.3One can also use the ordinary fast Fourier transform, but this results in a complex-valued sketch.
  • R\in\real^{d\times n} is a selection matrix. To generate R, let i_1,\ldots,i_d be a random subset of \{1,\ldots,n\}, selected without replacement. R is defined to be a matrix for which Rb = (b_{i_1},\ldots,b_{i_d}) for every vector b.

To store S on a computer, it is sufficient to store the n diagonal entries of D and the d selected coordinates i_1,\ldots,i_d defining R. Multiplication of S against a vector b should be carried out by applying each of the matrices R, F, and D in sequence, such as in the following MATLAB code:

% Generate randomness for S
signs = 2*randi(2,n,1)-3; % diagonal entries of D (random +-1)
idx = randsample(n,d); % indices i_1,...,i_d defining R

% Multiply S against b
c = signs .* b; % multiply by D
c = dct(c); % multiply by F
c = c(idx); % multiply by R
c = sqrt(n/d) * c; % scale

Benefits. S can be applied to a vector b in \mathcal{O}(n \log n) operations, a significant improvement over the \mathcal{O}(dn) cost of a Gaussian embedding. The SRTT has the lowest memory and random number generation requirements of any of the three sketches we discuss in this post.

Drawbacks. Applying S to a vector requires a good implementation of a fast trigonometric transform. Even with a high-quality trig transform, SRTTs can be significantly slower than sparse sign embeddings (defined below).4For an example, see Figure 2 in this paper. SRTTs are hard to parallelize.5Block SRTTs are more parallelizable, however. In theory, the sketching dimension should be chosen to be d \approx (k\log k)/\varepsilon^2, larger than for a Gaussian sketch.

Sparse Sign Embeddings

A sparse sign embedding takes the form

    \[S = \frac{1}{\sqrt{\zeta}} \begin{bmatrix} s_1 & s_2 & \cdots & s_n \end{bmatrix}.\]

Here, each column s_i is an independently generated random vector with exactly \zeta nonzero entries with random \pm 1 values in uniformly random positions. The result is a d\times n matrix with only \zeta \cdot n nonzero entries. The parameter \zeta is often set to a small constant like 8 in practice.6This recommendation comes from the following paper, and I’ve used this parameter setting successfully in my own work.

Benefits. By using a dedicated sparse matrix library, S can be applied to a vector b very quickly (in either \mathcal{O}(n) or \mathcal{O}(n\log k) operations, depending on parameter choices; see below). With a good sparse matrix library, sparse sign embeddings are often the fastest sketching matrix by a wide margin.

Drawbacks. To be fast, sparse sign embeddings require a good sparse matrix library. They require generating and storing roughly \zeta n random numbers, more than SRTTs (roughly n numbers) but far fewer than Gaussian embeddings (dn numbers). In theory, the sketching dimension should be chosen to be d \approx (k\log k)/\varepsilon^2 and the sparsity should be set to \zeta \approx (\log k)/\varepsilon; the theoretically sanctioned sketching dimension is larger than for a Gaussian sketch. In practice, we can often get away with using d \approx k/\varepsilon^2 and \zeta=8.
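
Here is a minimal sketch of building a sparse sign embedding as a MATLAB sparse matrix (my own illustration; the dimensions and \zeta = 8 are arbitrary choices, and randsample is from the Statistics Toolbox):

% Sparse sign embedding (a sketch; dimensions and zeta are illustrative)
d = 400; n = 1e4; zeta = 8;
rows = zeros(zeta*n, 1);
for j = 1:n
    rows((j-1)*zeta+1 : j*zeta) = randsample(d, zeta); % zeta distinct positions per column
end
cols = kron((1:n)', ones(zeta, 1));                    % column index of each nonzero
vals = (2*randi(2, zeta*n, 1) - 3) / sqrt(zeta);       % random +-1 entries, scaled by 1/sqrt(zeta)
S = sparse(rows, cols, vals, d, n);                    % d-by-n matrix with zeta*n nonzeros
c = S * randn(n, 1);                                   % apply to an example vector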

Summary

Using either SRTTs or sparse sign embeddings, sketching a vector of length n down to d dimensions requires only \mathcal{O}(n) to \mathcal{O}(n\log n) operations. To apply a sketch to an entire n\times k matrix A thus requires roughly \mathcal{O}(nk) operations. Therefore, sketching offers the promise of speeding up linear algebraic computations involving A, which typically take \mathcal{O}(nk^2) operations.

How Can You Use Sketching?

The simplest way to use sketching is to first apply the sketch to dimensionality-reduce all of your data and then apply a standard algorithm to solve the problem using the reduced data. This approach to using sketching is called sketch-and-solve.

As an example, let’s apply sketch-and-solve to the least-squares problem:

(2)   \[\operatorname*{minimize}_{x\in\real^k} \norm{Ax - b}. \]

We assume this problem is highly overdetermined with A having many more rows n than columns k.

To solve this problem with sketch-and-solve, generate a good sketching matrix S for the set \mathsf{E} = \operatorname{col}(\onebytwo{A}{b}). Applying S to our data A and b, we get a dimensionality-reduced least-squares problem

(3)   \[\operatorname*{minimize}_{\hat{x}\in\real^k} \norm{(SA)\hat{x} - Sb}. \]

The solution \hat{x} is the sketch-and-solve solution to the least-squares problem, which we can use as an approximate solution to the original least-squares problem.

Least-squares is just one example of the sketch-and-solve paradigm. We can also use sketching to accelerate other algorithms. For instance, we could apply sketch-and-solve to clustering. To cluster data points x_1,\ldots,x_p, first apply sketching to obtain Sx_1,\ldots,Sx_p and then apply an out-of-the-box clustering algorithm like k-means to the sketched data points.

Does Sketching Work?

Most often, when sketching critics say “sketching doesn’t work”, what they mean is “sketch-and-solve doesn’t work”.

To address this question in a more concrete setting, let’s go back to the least-squares problem (2). Let x_\star denote the optimal least-squares solution and let \hat{x} be the sketch-and-solve solution (3). Then, using the distortion condition (1), one can show that

    \[\norm{A\hat{x} - b} \le \frac{1+\varepsilon}{1-\varepsilon} \norm{Ax_\star - b}.\]

If we use a sketching matrix with a distortion of \varepsilon = 1/3, then this bound tells us that

(4)   \[\norm{A\hat{x} - b} \le 2\norm{Ax_\star - b}. \]

Is this a good result or a bad result? Ultimately, it depends. In some applications, the quality of a putative least-squares solution \hat{x} can be assessed from the residual norm \norm{A\hat{x} - b}. For such applications, the bound (4) ensures that \norm{A\hat{x} - b} is at most twice \norm{Ax_\star-b}. Often, this means \hat{x} is a pretty decent approximate solution to the least-squares problem.

For other problems, the appropriate measure of accuracy is the so-called forward error \norm{\hat{x} - x_\star}, measuring how close \hat{x} is to x_\star. For these cases, it is possible that \norm{\hat{x} - x_\star} might be large even though the residual norms are comparable, as in (4).

Let’s see an example, using the MATLAB code from my paper:

[A, b, x, r] = random_ls_problem(1e4, 1e2, 1e8, 1e-4); % Random LS problem
S = sparsesign(4e2, 1e4, 8); % Sparse sign embedding
sketch_and_solve = (S*A) \ (S*b); % Sketch-and-solve
direct = A \ b; % MATLAB mldivide

Here, we generate a random least-squares problem of size 10,000 by 100 (with condition number 10^8 and residual norm 10^{-4}). Then, we generate a sparse sign embedding of dimension d = 400 (corresponding to a distortion of roughly \varepsilon \approx 1/2). Then, we compute the sketch-and-solve solution and, as reference, a “direct” solution by MATLAB’s \.

We compare the quality of the sketch-and-solve solution to the direct solution, using both the residual and forward error:

fprintf('Residuals: sketch-and-solve %.2e, direct %.2e, optimal %.2e\n',...
           norm(b-A*sketch_and_solve), norm(b-A*direct), norm(r))
fprintf('Forward errors: sketch-and-solve %.2e, direct %.2e\n',...
           norm(x-sketch_and_solve), norm(x-direct))

Here’s the output:

Residuals: sketch-and-solve 1.13e-04, direct 1.00e-04, optimal 1.00e-04
Forward errors: sketch-and-solve 1.06e+03, direct 8.08e-07

The sketch-and-solve solution has a residual norm of 1.13\times 10^{-4}, close to the direct method’s residual norm of 1.00\times 10^{-4}. However, the forward error of sketch-and-solve is 1\times 10^3, nine orders of magnitude larger than the direct method’s forward error of 8\times 10^{-7}.

Does sketch-and-solve work? Ultimately, it’s a question of what kind of accuracy you need for your application. If a small-enough residual is all that’s needed, then sketch-and-solve is perfectly adequate. If small forward error is needed, sketch-and-solve can be quite bad.

One way sketch-and-solve can be improved is by increasing the sketching dimension d and lowering the distortion \varepsilon. Unfortunately, improving the distortion of the sketch is expensive. Because of the relation d \approx k /\varepsilon^2, to decrease the distortion by a factor of ten requires increasing the sketching dimension d by a factor of one hundred! Thus, sketch-and-solve is really only appropriate when a fairly large distortion \varepsilon is acceptable.

Iterative Sketching: Combining Sketching with Iteration

Sketch-and-solve is a fast way to get a low-accuracy solution to a least-squares problem. But it’s not the only way to use sketching for least-squares. One can also use sketching to obtain high-accuracy solutions by combining sketching with an iterative method.

There are many iterative methods for least-squares problems. Iterative methods generate a sequence of approximate solutions x_1,x_2,\ldots that we hope will converge at a rapid rate to the true least-squares solution x_\star.

To use sketching to solve least-squares problems iteratively, we can use the following observation:

If S is a sketching matrix for \mathsf{E} = \operatorname{col}(A), then (SA)^\top SA \approx A^\top A.

Therefore, if we compute a QR factorization

    \[SA = QR,\]

then

    \[A^\top A \approx (SA)^\top (SA) = R^\top Q^\top Q R = R^\top R.\]

Notice that we used the fact that Q^\top Q = I since Q has orthonormal columns. The conclusion is that R^\top R \approx A^\top A.

Let’s use the approximation R^\top R \approx A^\top A to solve the least-squares problem iteratively. Start off with the normal equations7As I’ve described in a previous post, it’s generally inadvisable to solve least-squares problems using the normal equations. Here, we’re just using the normal equations as a conceptual tool to derive an algorithm for solving the least-squares problem.

(5)   \[(A^\top A)x = A^\top b. \]

We can obtain an approximate solution to the least-squares problem by replacing A^\top A by R^\top R in (5) and solving. The resulting solution is

    \[x_0 = R^{-1} (R^{-\top}(A^\top b)).\]

This solution x_0 will typically not be a good solution to the least-squares problem (2), so we need to iterate. To do so, we’ll try and solve for the error x - x_0. To derive an equation for the error, subtract A^\top A x_0 from both sides of the normal equations (5), yielding

    \[(A^\top A)(x-x_0) = A^\top (b-Ax_0).\]

Now, to solve for the error, substitute R^\top R for A^\top A again and solve for x, obtaining a new approximate solution x_1:

    \[x\approx x_1 \coloneqq x_0 + R^{-1}(R^{-\top}(A^\top(b-Ax_0))).\]

We can now go another step: Derive an equation for the error x-x_1, approximate A^\top A \approx R^\top R, and obtain a new approximate solution x_2. Continuing this process, we obtain an iteration

(6)   \[x_{i+1} = x_i + R^{-1}(R^{-\top}(A^\top(b-Ax_i))).\]

This iteration is known as the iterative sketching method.8The name iterative sketching is for historical reasons. Original versions of the procedure involved taking a fresh sketch S_iA = Q_iR_i at every iteration. Later, it was realized that a single sketch SA suffices, albeit with a slower convergence rate. Typically, only having to sketch and QR factorize once is worth the slower convergence rate. If the distortion is small enough, this method converges at an exponential rate, yielding a high-accuracy least squares solution after a few iterations.
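
Here is a minimal MATLAB sketch of the iteration (6), reusing the problem data A, b and the sparsesign function from the example above; the fixed number of iterations is an arbitrary choice, and one could instead stop once the update becomes small.

% Iterative sketching (a sketch; reuses A, b, and sparsesign from the example above)
S = sparsesign(4e2, 1e4, 8);   % sparse sign embedding: d = 400, n = 10000, zeta = 8
[~, R] = qr(S*A, 0);           % R factor of the sketch, so R'*R approximates A'*A
x = R \ (R' \ (A'*b));         % initial solution x_0
for i = 1:20
    x = x + R \ (R' \ (A'*(b - A*x))); % iteration (6)
end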

Let’s apply iterative sketching to the example we considered above. We show the forward error of the sketch-and-solve and direct methods as horizontal dashed purple and red lines. Iterative sketching begins at roughly the forward error of sketch-and-solve, with the error decreasing at an exponential rate until it reaches that of the direct method over the course of fourteen iterations. For this problem, at least, iterative sketching gives high-accuracy solutions to the least-squares problem!

To summarize, we’ve now seen two very different ways of using sketching:

  • Sketch-and-solve. Sketch the data A and b and solve the sketched least-squares problem (3). The resulting solution \hat{x} is cheap to obtain, but may have low accuracy.
  • Iterative sketching. Sketch the matrix A and obtain an approximation R^\top R = (SA)^\top (SA) to A^\top A. Use the approximation R^\top R to produce a sequence of better-and-better least-squares solutions x_i by the iteration (6). If we run for enough iterations q, the accuracy of the iterative sketching solution x_q can be quite high.

By combining sketching with more sophisticated iterative methods such as conjugate gradient and LSQR, we can get an even faster-converging least-squares algorithm, known as sketch-and-precondition. Here’s the same plot from above with sketch-and-precondition added; we see that sketch-and-precondition converges even faster than iterative sketching does!

“Does sketching work?” Even for a simple problem like least-squares, the answer is complicated:

A direct use of sketching (i.e., sketch-and-solve) leads to a fast, low-accuracy solution to least-squares problems. But sketching can achieve much higher accuracy for least-squares problems by combining sketching with an iterative method (iterative sketching and sketch-and-precondition).

We’ve focused on least-squares problems in this section, but these conclusions could hold more generally. If “sketching doesn’t work” in your application, maybe it would if it was combined with an iterative method.

Just How Accurate Can Sketching Be?

We left our discussion of sketching-plus-iterative-methods in the previous section on a positive note, but there is one last lingering question that remains to be answered. We stated that iterative sketching (and sketch-and-precondition) converge at an exponential rate. But our computers store numbers to only so much precision; in practice, the accuracy of an iterative method has to saturate at some point.

An (iterative) least-squares solver is said to be forward stable if, when run for a sufficient number q of iterations, the final accuracy \norm{x_q - x_\star} is comparable to the accuracy of a standard direct method for the least-squares problem like MATLAB’s \ command or Python’s scipy.linalg.lstsq. Forward stability is not about speed or rate of convergence but about the maximum achievable accuracy.

The stability of sketch-and-precondition was studied in a recent paper by Meier, Nakatsukasa, Townsend, and Webb. They demonstrated that, with the initial iterate x_0 = 0, sketch-and-precondition is not forward stable. The maximum achievable accuracy was worse than standard solvers by orders of magnitude! Maybe sketching doesn’t work after all?

Fortunately, there is good news:

  • The iterative sketching method is provably forward stable. This result is shown in my newly released paper; check it out if you’re interested!
  • If we use the sketch-and-solve method as the initial iterate x_0 = \hat{x} for sketch-and-precondition, then sketch-and-precondition appears to be forward stable in practice. No theoretical analysis supporting this finding is known at present.9For those interested, neither iterative sketching nor sketch-and-precondition are backward stable, which is a stronger stability guarantee than forward stability. Fortunately, forward stability is a perfectly adequate stability guarantee for many—but not all—applications.

These conclusions are pretty nuanced. To see what’s going on, it can be helpful to look at a graph:10For another randomly generated least-squares problem of the same size with condition number 10^{10} and residual 10^{-6}.

The performance of different methods can be summarized as follows: Sketch-and-solve can have very poor forward error. Sketch-and-precondition with the zero initialization x_0 = 0 is better, but still much worse than the direct method. Iterative sketching and sketch-and-precondition with x_0 = \hat{x} fare much better, eventually achieving an accuracy comparable to the direct method.

Put more simply, appropriately implemented, sketching works after all!

Conclusion

Sketching is a computational tool, just like the fast Fourier transform or the randomized SVD. Sketching can be used effectively to solve some problems. But, like any computational tool, sketching is not a silver bullet. Sketching allows you to dimensionality-reduce matrices and vectors, but it distorts them by an appreciable amount. Whether or not this distortion is something you can live with depends on your problem (how much accuracy do you need?) and how you use the sketch (sketch-and-solve or with an iterative method).

Stochastic Trace Estimation

I am delighted to share that my paper XTrace: Making the Most of Every Sample in Stochastic Trace Estimation, joint with Joel A. Tropp and Robert J. Webber, has recently been released as a preprint on arXiv. In it, we consider the implicit trace estimation problem:

Implicit trace estimation problem: Given access to a square matrix A via the matrix–vector product operation \omega \mapsto A\omega, estimate its trace \tr A = \sum_{i=1}^n A_{ii}.

Algorithms for this task have many uses such as log-determinant computations in machine learning, partition function calculations in statistical physics, and generalized cross validation for smoothing splines. I described another application to counting triangles in a large network in a previous blog post.

Our paper presents new trace estimators XTrace and XNysTrace which are highly efficient, producing accurate trace approximations using a small budget of matrix–vector products. In addition, these algorithms are fast to run and are supported by theoretical results which explain their excellent performance. I really hope that you will check out the paper to learn more about these estimators!

For the rest of this post, I’m going to talk about the most basic stochastic trace estimation algorithm, the Girard–Hutchinson estimator. This seemingly simple algorithm exhibits a number of nuances and forms the backbone for more sophisticated trace estimators such as Hutch++, Nyström++, XTrace, and XNysTrace. Toward the end, this blog post will be fairly mathematical, but I hope that the beginning will be accessible to all.

Girard–Hutchinson Estimator: The Basics

The Girard–Hutchinson estimator for the trace of a square matrix A is

(1)   \[\hat{\tr} = \frac{1}{m} \sum_{i=1}^m \omega_i^* A \omega_i. \]

Here, \omega_1,\ldots,\omega_m are random vectors, usually chosen to be statistically independent, and {}^* denotes the conjugate transpose of a vector or matrix. The Girard–Hutchinson estimator only depends on the matrix A through the matrix–vector products A\omega_1,\ldots,A\omega_m.
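
In code, the estimator is only a few lines. Here is a minimal MATLAB sketch using real Gaussian test vectors (the example matrix and the number of samples m are arbitrary choices):

% Girard-Hutchinson estimator (1) with Gaussian test vectors (a sketch)
n = 1000; m = 50;
A = randn(n); A = (A + A')/2;        % an example symmetric matrix
est = 0;
for i = 1:m
    omega = randn(n, 1);             % isotropic test vector
    est = est + (omega'*(A*omega))/m;
end
fprintf('estimate %.3e, true trace %.3e\n', est, trace(A))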

Unbiasedness

Provided the random vectors are isotropic

(2)   \[\mathbb{E} [\omega_i\omega_i^*] = I, \]

the Girard–Hutchinson estimator is unbiased:

(3)   \[\mathbb{E} [\hat{\tr}] = \tr A.\]

Let us confirm this claim in some detail. First, we use linearity of expectation to evaluate

(4)   \[\mathbb{E} [\hat{\tr}] = \mathbb{E} \left[ \frac{1}{m} \sum_{i=1}^m \omega_i^*A\omega_i \right] = \frac{1}{m} \sum_{i=1}^m \mathbb{E} \left[ \omega_i^* A \omega_i\right]. \]

Therefore, to prove that \mathbb{E} [\hat{\tr}] = \tr A, it is sufficient to prove that \mathbb{E} \left[\omega_i^*A\omega_i\right] = \tr A for each i.

When working with traces, there are two tricks that solve 90% of derivations. The first trick is that, if we view a number as a 1\times 1 matrix, then a number equals its trace, x = \tr x. The second trick is the cyclic property: For a k\times p matrix B and a p\times k matrix C, we have \tr (BC) = \tr (CB). The cyclic property should be handled with care when one works with a product of three or more matrices. For instance, we have

    \[\tr[BCD] = \tr[(BC)D] = \tr[D(BC)] = \tr[DBC].\]

However,

    \[\tr [BCD] \ne \tr[CBD] \quad \text{in general}.\]

One should think of the matrix product BCD as beads on a closed loop of string. One can move the last bead D to the front of the other two, \tr [BCD] = \tr[DBC], but not interchange two beads, \tr[BCD] \ne \tr[CBD].

With these tricks in hand, let’s return to proving that \mathbb{E} \left[\omega_i^*A\omega_i\right] = \tr A for every i. Apply our two tricks:

    \[\mathbb{E} \left[\omega_i^*A\omega_i\right] = \mathbb{E} \tr \left[\omega_i^*A\omega_i\right] = \mathbb{E} \tr \left[A\omega_i\omega_i^*\right].\]

The expectation is a linear operation and the matrix A is non-random, so we can bring the expectation into the trace as

    \[\mathbb{E} \left[\omega_i^*A\omega_i\right] = \mathbb{E} \tr \left[A\omega_i\omega_i^*\right] = \tr(A \mathbb{E}[\omega_i\omega_i^*] ).\]

Invoke the isotropy condition (2) and conclude:

    \[\mathbb{E} \left[\omega_i^*A\omega_i\right] = \tr(A \mathbb{E}[\omega_i\omega_i^*] ) = \tr(A\cdot I) = \tr A.\]

Plugging this into (4) confirms the unbiasedness claim (3).

Variance

Continue to assume that the \omega_i’s are isotropic (2) and now assume that \omega_1,\ldots,\omega_m are independent. By independence, the variance can be written as

    \[\Var(\hat{\tr}) = \frac{1}{m^2} \sum_{i=1}^m \Var(\omega_i^*A\omega_i).\]

Assuming that \omega_1,\ldots,\omega_m are identically distributed \omega_1,\ldots,\omega_m \sim \omega, we then get

    \[\Var(\hat{\tr}) = \frac{1}{m} \Var(\omega^*A\omega).\]

The variance decreases like 1/m, which is characteristic of Monte Carlo-type algorithms. Since \hat{\tr} is unbiased (i.e., (3)), this means that the mean-square error decays like 1/m, so the average error (more precisely, the root-mean-square error) decays like

    \[\left| \hat{\tr} - \tr A \right| \lessapprox \frac{\mathrm{const}}{\sqrt{m}}.\]

This type of convergence is very slow. If I want to decrease the error by a factor of 10, I must do 100\times the work!

Variance-reduced trace estimators like Hutch++ and our new trace estimator XTrace improve the rate of convergence substantially. Even in the worst case, Hutch++ and XTrace reduce the variance at a rate of 1/m^2 and the (root-mean-square) error at a rate of 1/m:

    \[\Var(\hat{\tr}_{\text{H++ or X}}) \le \frac{\mathrm{const}}{m^2},\quad \left| \hat{\tr}_{\text{H++ or X}} - \tr A \right| \lessapprox \frac{\mathrm{const}}{m}.\]

For matrices with rapidly decreasing singular values, the variance and error can decrease much faster than this.

Variance Formulas

As the rate of convergence for the Girard–Hutchinson estimator is so slow, it is imperative to pick a distribution on test vectors \omega that makes the variance of the single–sample estimate \omega^*A\omega as low as possible. In this section, we will provide several explicit formulas for the variance of the Girard–Hutchinson estimator. Derivations of these formulas will appear at the end of this post. These variance formulas help illuminate the benefits and drawbacks of different test vector distributions.

To express the formulas, we will need some notation. For a complex number z = a + bi we use \Re(z) = a and \Im(z) = b to denote the real and imaginary parts. The variance of a random complex number z is

    \[\Var(z) := \mathbb{E} |z - \mathbb{E} z|^2 = \Var(\Re z) + \Var(\Im z).\]

The Frobenius norm of a matrix A is

    \[\left\|A\right\|_{\rm F}^2 = \sum_{i,j} |A_{ij}|^2.\]

If A is real symmetric or complex Hermitian with (real) eigenvalues \lambda_1,\ldots,\lambda_n, we have

(5)   \[\left\|A\right\|_{\rm F}^2 = \sum_{i=1}^n \lambda_i^2. \]

A^\top denotes the ordinary transpose of A and A^* denotes the conjugate transpose of A.

Real-Valued Test Vectors

We first focus on real-valued test vectors \omega. Since \omega is real, we can use the ordinary transpose {}^\top rather than the conjugate transpose {}^*. Since \omega^\top A\omega is a number, it is equal to its own transpose:

    \[\omega^\top A \omega = (\omega^\top A \omega)^\top = \omega^\top A^\top \omega.\]

Therefore,

    \[\omega^\top A\omega = \frac{\omega^\top A \omega + \omega^\top A^\top \omega}{2} = \omega^\top \left( \frac{A + A^\top}{2} \right)\omega.\]

The Girard–Hutchinson trace estimator applied to A is the same as the Girard–Hutchinson estimator applied to the symmetric part of A, (A+A^\top)/2.

For the following results, assume A is symmetric, A = A^\top.

  1. Real Gaussian: \omega_1,\ldots,\omega_m are independent standard normal random vectors.

        \[\Var(\omega^\top A\omega) = 2 \left\|A\right\|_{\rm F}^2.\]

  2. Uniform signs (Rademachers): \omega_1,\ldots,\omega_m are independent random vectors with uniform \pm 1 coordinates.

        \[\Var(\omega^\top A \omega) = 2\sum_{i\ne j} |A_{ij}|^2.\]

  3. Real sphere: Assume \omega_1,\ldots,\omega_m are uniformly distributed on the real sphere of radius \sqrt{n}: \omega \sim \text{Uniform} \{x\in \mathbb{R}^n : x^\top x = n\}.

        \[\Var(\omega^\top A\omega) = \frac{2n}{n+2} \left( \left\|A\right\|_{\rm F}^2 - \frac{1}{n} |\tr A|^2 \right).\]

These formulas continue to hold for nonsymmetric A by replacing A by its symmetric part (A+A^\top)/2 on the right-hand sides of these variance formulas.
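
As a quick empirical check of the first formula (a sketch of my own; the matrix and the number of trials are arbitrary choices), one can compare the sample variance of \omega^\top A\omega against 2\left\|A\right\|_{\rm F}^2:

% Check Var(omega'*A*omega) = 2*||A||_F^2 for real Gaussian omega (a sketch)
n = 50; trials = 1e5;
A = randn(n); A = (A + A')/2;      % a random symmetric test matrix
vals = zeros(trials, 1);
for t = 1:trials
    omega = randn(n, 1);
    vals(t) = omega'*(A*omega);
end
fprintf('empirical %.3e, formula %.3e\n', var(vals), 2*norm(A,'fro')^2)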

Complex-Valued Test Vectors

We now move our focus to complex-valued test vectors \omega. As a rule of thumb, one should typically expect that the variance for complex-valued test vectors applied to a real symmetric matrix A is about half that of the natural real counterpart; for instance, complex Gaussian test vectors yield about half the variance of real Gaussians.

A square complex matrix has a Cartesian decomposition

    \[A = A^{\rm H} + i A^{\rm SH}\]

where

    \[A^{\rm H} = \frac{A+A^*}{2} ,\quad A^{\rm SH} = \frac{A - A^*}{2i}\]

denote the Hermitian and skew-Hermitian parts of A. Similar to how the imaginary part of a complex number is real, the skew-Hermitian part of a complex matrix is Hermitian (and i A^{\rm SH} is skew-Hermitian). Since A^{\rm H} and A^{\rm SH} are both Hermitian, we have

    \[\Re(\omega^* A\omega) = \omega^* A^{\rm H} \omega, \quad \Im (\omega^* A \omega) = \omega^* A^{\rm SH} \omega.\]

Consequently, the variance of \omega^*A \omega can be broken into Hermitian and skew-Hermitian parts:

    \[\Var(\omega^* A\omega) = \Var(\omega^* A^{\rm H}\omega) + \Var(\omega^* A^{\rm SH}\omega).\]

For this reason, we will state the variance formulas only for Hermitian A, with the formula for general A following from the Cartesian decomposition.

For the following results, assume A is Hermitian, A = A^*.

  1. Complex Gaussian: \omega_1,\ldots,\omega_m are independent standard complex random vectors, i.e., each \omega_i has iid entries distributed as (g_1+ig_2)/\sqrt{2} for g_1,g_2 standard normal random variables.

        \[\Var(\omega^* A\omega) = \left\|A\right\|_{\rm F}^2.\]

  2. Uniform phases (Steinhauses): \omega_1,\ldots,\omega_m are independent random vectors whose entries are uniform on the complex unit circle \{ z \in \complex : |z| = 1 \}.

        \[\Var(\omega^* A \omega) = \sum_{i\ne j} |A_{ij}|^2.\]

  3. Complex sphere: Assume \omega_1,\ldots,\omega_m are uniformly distributed on the complex sphere of radius \sqrt{n}: \omega \sim \text{Uniform} \{x\in \complex^n : x^* x = n\}.

        \[\Var(\omega^* A\omega) = \frac{n}{n+1} \left( \left\|A\right\|_{\rm F}^2 - \frac{1}{n} |\tr A|^2 \right).\]
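
For reference, here is a minimal MATLAB sketch of how each of the test-vector distributions above can be generated (n is the dimension; the scalings match the conventions used in the formulas):

% Generating the test-vector distributions above (a sketch; n is the dimension)
n = 1000;
omega_gauss  = randn(n, 1);                              % real Gaussian
omega_signs  = 2*randi(2, n, 1) - 3;                     % uniform random signs
g = randn(n, 1);
omega_rsph   = sqrt(n) * g / norm(g);                    % uniform on the real sphere of radius sqrt(n)
omega_cgauss = (randn(n, 1) + 1i*randn(n, 1)) / sqrt(2); % standard complex Gaussian
omega_phase  = exp(2i*pi*rand(n, 1));                    % uniform phases (Steinhaus)
omega_csph   = sqrt(n) * omega_cgauss / norm(omega_cgauss); % uniform on the complex sphere of radius sqrt(n)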

Optimality Properties

Let us finally address the question of what the best choice of test vectors is for the Girard–Hutchinson estimator. We will state two results with different restrictions on \omega_1,\ldots,\omega_m.

Our first result, due to Hutchinson, is valid for real symmetric matrices with real test vectors.

Optimality (independent test vectors with independent coordinates). If the test vectors \omega_1,\ldots,\omega_m \in \mathbb{R}^n are isotropic (2), independent from each other, and have independent entries, then for any fixed real symmetric matrix A, the minimum variance for \hat{\tr} is obtained when \omega_1,\ldots,\omega_m are populated with random signs (\omega_i)_j \sim \textnormal{Uniform} \{\pm 1\}.

The next optimality results will have real and complex versions. To present the results for \mathbb{R}-valued and \complex-valued test vectors on a unified footing, let \field denote either \mathbb{R} or \complex. We let a \field-Hermitian matrix be either a real symmetric matrix (if \field = \mathbb{R}) or a complex Hermitian matrix (if \field = \complex). Let a \field-unitary matrix be either a real orthogonal matrix (if \field = \mathbb{R}) or a complex unitary matrix (if \field = \complex).

The condition that the vectors \omega_1,\ldots,\omega_m have independent entries is often too restrictive in practice. It rules out, for instance, the case of uniform vectors on the sphere. If we relax this condition, we get a different optimal distribution:

Optimality (independent test vectors). Consider any set \mathscr{A} of \field-Hermitian matrices which is invariant under \field-unitary similarity transformations:

    \[\text{If $A \in \mathscr{A}$ and $U$ is $\field$-unitary, then $U^*AU \in \mathscr{A}$.}\]

Assume that the test vectors \omega_1,\ldots,\omega_m are independent and isotropic (2). The worst-case variance \sup_{A \in \mathscr{A}} \Var(\hat{\tr}) is minimized by choosing \omega_1,\ldots,\omega_m uniformly on the \field-sphere: \omega_1,\ldots,\omega_m \sim \text{Uniform} \{ x \in \field^n : x^*x =n \}.

More simply, if you want your stochastic trace estimator to be effective for a class of inputs \mathscr{A} (closed under \field-unitary similarity transformations) rather than a single input matrix A, then the best distribution is test vectors drawn uniformly from the sphere. Examples of classes of matrices \mathscr{A} include:

  • Fixed eigenvalues. For fixed real eigenvalues \lambda_1,\ldots,\lambda_n \in \mathbb{R}, the set of all \field-Hermitian matrices with these eigenvalues.
  • Density matrices. The class of all trace-one psd matrices.
  • Frobenius norm ball. The class of all \field-Hermitian matrices of Frobenius norm at most 1.

Derivation of Formulas

In this section, we provide derivations of the variance formulas. I have chosen to focus on derivations which are shorter but use more advanced techniques rather than derivations which are longer but use fewer tricks.

Real Gaussians

First assume A is real. Since A is real symmetric, A has an eigenvalue decomposition A = Q\Lambda Q^\top, where Q is orthogonal and \Lambda is a diagonal matrix reporting A’s eigenvalues. Since the real Gaussian distribution is invariant under orthogonal transformations, \omega^\top A\omega = (Q^\top \omega)^\top \Lambda (Q^\top\omega) has the same distribution as \omega^\top \Lambda \omega. Therefore,

    \[\Var(\omega^\top A \omega) = \Var(\omega^\top \Lambda \omega) = \Var \left( \sum_{i=1}^n \lambda_i \omega_i^2 \right) = \sum_{i=1}^n \lambda_i^2 \Var(\omega_i^2) = 2\sum_{i=1}^n \lambda_i^2 = 2\left\|A\right\|_{\rm F}^2.\]

Here, we used that the variance of a squared standard normal random variable is two.

For a non-real matrix A, we can break the matrix A into its entrywise real and imaginary parts A = \mathfrak{R}(A) + i \, \mathfrak{I}(A). Thus,

    \[\Var(\omega^\top A \omega) = \Var(\omega^\top \mathfrak{R}(A) \omega) + \Var(\omega^\top \mathfrak{I}(A) \omega) = 2\left\|\mathfrak{R}(A)\right\|_{\rm F}^2 + 2\left\|\mathfrak{I}(A)\right\|_{\rm F}^2 = 2\left\|A\right\|_{\rm F}^2.\]

Uniform Signs

First, compute

    \[\omega^\top A \omega - \mathbb{E}[\omega^\top A \omega] = \sum_{i,j=1}^n A_{ij} \omega_i\omega_j - \sum_{i=1}^n A_{ii} = \sum_{i\ne j} A_{ij} \omega_i\omega_j + \sum_{i=1}^n A_{ii}(\omega_i^2-1).\]

For a vector \omega of uniform random signs, we have \omega_i^2 = 1 for every i, so the second sum vanishes. Note that we have assumed A symmetric, so the sum over i\ne j can be replaced by two times the sum over i < j:

    \[\omega^\top A \omega - \mathbb{E}[\omega^\top A \omega] = 2\sum_{i< j} A_{ij} \omega_i\omega_j.\]

Note that \{ \omega_i \omega_j : i < j\} are pairwise independent. As a simple exercise, one can verify that the identity

    \[\Var(a_1 X_1+\cdots+a_kX_k) = |a_1|^2 \Var(X_1) + \cdots + |a_k|^2 \Var(X_k)\]

holds for any pairwise independent family of random variables X_1,\ldots,X_k and numbers a_1,\ldots,a_k. Ergo,

    \begin{align*}\Var(\omega^\top A\omega) &= \Var(\omega^\top A \omega - \mathbb{E}[\omega^\top A \omega]) \\&= \Var\left(\sum_{i< j} 2A_{ij} \omega_i\omega_j\right) \\&= \sum_{i<j} 4 |A_{ij}|^2 \Var(\omega_i\omega_j) \\&= \sum_{i<j} 4 |A_{ij}|^2 \\&= 2 \sum_{i\ne j} |A_{ij}|^2.\end{align*}

In the second-to-last line, we use the fact that \omega_i\omega_j is a uniform random sign, which has variance 1. The final line is a consequence of the symmetry of A.

Uniform on the Real Sphere

The simplest proof I know is by the “camel principle”. Here’s the story (a lightly edited quotation from MathOverflow):

A father left 17 camels to his three sons and, according to the will, the eldest son was to be given a half of the camels, the middle son one-third, and the youngest son one-ninth. The sons did not know what to do since 17 is not evenly divisible into either two, three, or nine parts, but a wise man helped the sons: he added his own camel, the eldest son took 18/2=9 camels, the second son took 18/3=6 camels, the third son took 18/9=2 camels, and the wise man took his own camel and went away.

We are interested in a vector \omega which is uniform on the sphere of radius \sqrt{n}. Performing averages on the sphere is hard, so we add a camel to the problem by “upgrading” \omega to a spherically symmetric vector g which has a random length. We want to pick a distribution for which the computation \Var(g^\top A g) is easy. Fortunately, we already know such a distribution, the Gaussian distribution, for which we already calculated \Var(g^\top A g) = 2\left\|A\right\|_{\rm F}^2.

The Gaussian vector g and the uniform vector \omega on the sphere are related by

    \[g = \sqrt{\frac{a}{n}} \omega,\]

where a is the squared length of the Gaussian vector g. In particular, a has the distribution of the sum of n squared Gaussian random variables, which is known as a \chi^2 random variable with n degrees of freedom.

Now, we take the camel back. Compute the variance of g^\top A g using the chain rule for variance:

    \[\Var(g^\top A g) = \mathbb{E}[\Var(g^\top A g \mid a)] + \Var(\mathbb{E}[g^\top A g \mid a]).\]

Here, \Var(\cdot \mid a) and \mathbb{E}[ \cdot \mid a] denote the conditional variance and conditional expectation with respect to the random variable a. The quick and dirty ways of working with these are to treat the random variable a “like a constant” with respect to the conditional variance and expectation.

Plugging in the formula g = \sqrt{a/n} \cdot \omega and treating a “like a constant”, we obtain

    \begin{align*}\Var(g^\top A g) &= \mathbb{E}[\Var(a/n \cdot \omega^\top A \omega \mid a)] + \Var(\mathbb{E}[a/n \cdot \omega^\top A \omega \mid a]) \\&=\mathbb{E}[(a/n)^2\Var(\omega^\top A \omega)] + \Var(a/n \cdot \mathbb{E}[\omega^\top A \omega]) \\&= \frac{1}{n^2} \mathbb{E}[a^2] \cdot \Var(\omega^\top A \omega) + \frac{1}{n^2} \Var(a) |\mathbb{E} [\omega^\top A \omega]|^2.\end{align*}

As we mentioned, a is a \chi^2 random variable with n degrees of freedom and \mathbb{E}[a^2] and \Var(a) are known quantities that can be looked up:

    \[\mathbb{E}[a^2] = n(n+2), \quad \Var(a) = 2n.\]

We know \Var(g^\top A g) = 2\left\|A\right\|_{\rm F}^2 and \mathbb{E} [\omega^\top A \omega] = \tr A. Plugging these all in, we get

    \[2\left\|A\right\|_{\rm F}^2 = \frac{n+2}{n} \Var(\omega^\top A\omega) + \frac{2}{n} |\tr A|^2.\]

Rearranging, we obtain

    \[\Var(\omega^\top A\omega) = \frac{2n}{n+2} \left( \left\|A\right\|_{\rm F}^2 - \frac{1}{n}|\tr A|^2\right).\]

Complex Gaussians

The trick is the same as for real Gaussians. By invariance of complex Gaussian random vectors under unitary transformations, we can reduce to the case where A is a diagonal matrix populated with eigenvalues \lambda_1,\ldots,\lambda_n. Then

    \[\Var(\omega^*A \omega) = \Var \left( \sum_{i=1}^n \lambda_i |\omega_i|^2 \right) = \sum_{i=1}^n \Var(|\omega_i|^2) \lambda_i^2 = \sum_{i=1}^n \lambda_i^2 = \left\|A\right\|_{\rm F}^2.\]

Here, we use the fact that 2|\omega_i|^2 is a \chi^2 random variable with two degrees of freedom, which has variance four; consequently, \Var(|\omega_i|^2) = 1.

Random Phases

The trick is the same as for uniform signs. A short calculation (remembering that A is Hermitian and thus \overline{A_{ij}} = A_{ji}) reveals that

    \[\Var\left( \omega^* A \omega \right) = \Var \left( \sum_{i<j} 2 \Re(A_{ij} \overline{\omega_i} \omega_j) \right).\]

The random variables \{\overline{\omega_i} \omega_j : i < j\} are pairwise independent so we have

    \[\Var\left( \omega^* A \omega \right) = \Var \left( \sum_{i<j} 2 \Re(A_{ij} \overline{\omega_i} \omega_j) \right) = 4\sum_{i<j} \Var \left( \Re(A_{ij} \overline{\omega_i} \omega_j) \right).\]

Since \overline{\omega}_i \omega_j is uniformly distributed on the complex unit circle, we can assume without loss of generality that A_{ij} = |A_{ij}|. Thus, letting \phi be uniform on the complex unit circle,

    \[\Var\left( \omega^* A \omega \right) = 4\sum_{i<j} \Var \left( |A_{ij}|\Re(\phi) \right) = 4\Var\left( \Re(\phi) \right)\sum_{i<j}|A_{ij}|^2.\]

The real and imaginary parts of \phi have the same distribution so

    \[1 = \Var(\phi) = \Var(\Re \phi) + \Var(\Im \phi) = 2 \Var(\Re \phi)\]

so \Var(\Re \phi) = 1/2. Thus

    \[\Var\left( \omega^* A \omega \right) = 2 \sum_{i<j}|A_{ij}|^2 = \sum_{i\ne j} |A_{ij}|^2.\]

Uniform on the Complex Sphere: Derivation 1 by Reduction to Real Case

There are at least three simple ways of deriving this result: the camel trick, reduction to the real case, and Haar integration. Each of these techniques illustrates a trick that is useful in its own right beyond the context of trace estimation. Since we have already seen an example of the camel trick for the real sphere, I will present the other two derivations.

Let us begin with the reduction to the real case. Let \mathfrak{R}(\cdot) and \mathfrak{I}(\cdot) denote the real and imaginary parts of a vector or matrix, taken entrywise. The key insight is that if \omega is a uniform random vector on the complex sphere of radius \sqrt{n}, then

    \[\mathscr{R}(\omega) := \twobyone{\mathfrak{R}(\omega)}{\mathfrak{I}(\omega)}\in\real^{2n} \quad \text{is a uniform random vector on the real sphere of radius $\sqrt{n}$}.\]

We’ve converted the complex vector \omega into a real vector \mathscr{R}(\omega).

Now, we need to convert the complex matrix A into a real matrix \mathscr{R}(A). To do this, recall that one way of representing complex numbers is by 2\times 2 matrices:

    \[a + bi \iff \twobytwo{a}{-b}{b}{a}.\]

Using this correspondence, addition and multiplication of complex numbers can be carried out by addition and multiplication of the corresponding matrices.

To convert complex matrices to real matrices, we use a matrix-version of the same representation:

    \[\mathscr{R}(A) = \twobytwo{\mathfrak{R}(A)}{-\mathfrak{I}(A)}{\mathfrak{I}(A)}{\mathfrak{R}(A)}.\]

One can check that addition and multiplication of complex matrices can be carried out by addition and multiplication of the corresponding “realified” matrices, i.e.,

    \[\mathscr{R}(A + B) = \mathscr{R}(A) + \mathscr{R}(B), \quad \mathscr{R}(A\cdot B) = \mathscr{R}(A) \cdot \mathscr{R}(B)\]

holds for all complex matrices A and B.

We’ve now converted complex matrix A and vector \omega into real matrix \mathscr{R}(A) and vector \mathscr{R}(\omega). Let’s compare \omega^*A\omega to \mathscr{R}(\omega)^\top\mathscr{R}(A)\mathscr{R}(\omega). A short calculation reveals

    \[\omega^*A\omega = \mathscr{R}(\omega)^\top \mathscr{R}(A)\mathscr{R}(\omega) .\]

Since \mathscr{R}(\omega) is a uniform random vector on the sphere of radius \sqrt{n}, \sqrt{2}\cdot \mathscr{R}(\omega) is a uniform random vector on the sphere of radius \sqrt{2n}. Thus, by the variance formula for the real sphere, we get

    \[\Var(\omega^*A\omega) = \Var[(\sqrt{2}\mathscr{R}(\omega))^\top (\mathscr{R}(A)/2)(\sqrt{2}\mathscr{R}(\omega) )] = \frac{4n}{2n+2} \left[ \|\mathscr{R}(A)/2\|_{\rm F}^2 - \frac{1}{8n}(\tr\mathscr{R}(A))^2 \right].\]

A short calculation verifies that \tr \mathscr{R}(A) = 2\tr A and \norm{\mathscr{R}(A)}_{\rm F}^2 = 2\|A\|_{\rm F}^2. Plugging this in, we obtain

    \[\Var(\omega^*A\omega)= \frac{n}{n+1} \left[ \|A\|_{\rm F}^2 - \frac{1}{n}(\tr A)^2  \right].\]

Uniform on the Complex Sphere: Derivation 2 by Haar Integration

The proof by reduction to the real case requires some cumbersome calculations and requires that we have already computed the variance in the real case by some other means. The method of Haar integration is more slick, but it requires some pretty high-power machinery. Haar integration may be a little bit overkill for this problem, but this technique is worth learning as it can handle some truly nasty expected value computations that appear, for example, in quantum information.

We seek to compute

    \[\mathbb{E} [(\omega^*A \omega)^2].\]

The first trick will be to write this expression as a single matrix trace using the tensor (Kronecker) product \otimes. For those unfamiliar with the tensor product, the main properties we will be using are

(6)   \[(A\otimes B) (C\otimes D) = (AB) \otimes (CD), \quad \tr(A\otimes B) = \tr A \cdot \tr B. \]

We saw in the proof of unbiasedness that

    \[\omega^* A \omega = \tr (\omega^*A\omega) = \tr (A \omega\omega^*).\]

Therefore, by (6),

    \[(\omega^*A\omega)^2 = (\tr [A \omega\omega^*])^2 = \tr [A\omega\omega^* \otimes A\omega\omega^*] = \tr [(A\otimes A) (\omega\omega^* \otimes \omega\omega^*)].\]

Thus, to evaluate \mathbb{E}[(\omega^*A\omega)^2], it will be sufficient to evaluate \mathbb{E}[\omega\omega^* \otimes \omega\omega^*]. Fortunately, there is a useful formula for this expectation provided by a field of mathematics known as representation theory (see Lemma 1 in this paper):

    \[\mathbb{E}[ \omega\omega^* \otimes \omega\omega^*] = \frac{2n}{n+1} \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)}.\]

Here, \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)} is the orthogonal projection onto the space of symmetric two-tensors \operatorname{Sym}^2(\complex^n) = \operatorname{span} \{ v \otimes v : v \in \complex^n \}. Therefore, we have that

    \[\mathbb{E}[(\omega^*A\omega)^2] = \tr [(A\otimes A) \mathbb{E}(\omega\omega^* \otimes \omega\omega^*)] = \frac{2n}{n+1} \tr [(A\otimes A) \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)}].\]

To evaluate the trace on the right-hand side of this equation, there is another formula (see Lemma 6 in this paper):

    \[\tr \left[(A\otimes B) \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)}\right] = \frac{1}{2} \left( \tr(AB) + \tr A \cdot \tr B \right).\]

Therefore, we conclude

    \begin{align*}\Var(\omega^* A \omega) &= \mathbb{E}[(\omega^*A\omega)^2] - (\mathbb{E}[\omega^*A\omega])^2 \\&= \frac{2n}{n+1}\tr [(A\otimes A) \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)}] - (\tr A)^2 \\&= \frac{n}{n+1}\left[ \tr A^2 + (\tr A)^2 \right] - (\tr A)^2 \\&= \frac{n}{n+1}\left[ \left\|A\right\|_{\rm F}^2 - \frac{1}{n} (\tr A)^2 \right].\end{align*}

Proof of Optimality Properties

In this section, we provide proofs of the two optimality properties.

Optimality: Independent Vectors with Independent Coordinates

Assume A is real and symmetric and suppose that \omega is isotropic (2) with independent coordinates. The isotropy condition

    \[\mathbb{E}[\omega\omega^\top] = I\]

implies that \mathbb{E}[\omega_i\omega_j] = \delta_{ij}, where \delta is the Kronecker symbol. Using this fact, we compute the second moment:

    \begin{align*}\mathbb{E}[ (\omega^*A \omega)^2] &= \mathbb{E}\left[ \left( \sum_{i=1}^n A_{ii} \omega_i^2 +2 \sum_{i<j} A_{ij}\omega_i\omega_j \right)^2\right] \\&= \sum_{i=1}^n A_{ii}^2 \mathbb{E}[\omega_i^4] + \sum_{i<j} (2A_{ii}A_{jj}+4A_{ij}^2) \mathbb{E}[\omega_i^2]\mathbb{E}[\omega_j^2] \\&= \sum_{i=1}^n A_{ii}^2 \mathbb{E}[\omega_i^4] + \sum_{i<j} (2A_{ii}A_{jj}+4A_{ij}^2) .\end{align*}

Thus

    \[\Var(\omega^*A\omega) = \mathbb{E}[ (\omega^*A \omega)^2] - (\mathbb{E}[\omega^* A \omega])^2 = \sum_{i=1}^n A_{ii}^2 (\mathbb{E}[|\omega_i|^4]-1) + 4\sum_{i<j} A_{ij}^2.\]

The variance is minimized by choosing \omega with \mathbb{E} \omega_i^4 as small as possible. Since \mathbb{E} \omega_i^2 = 1, the smallest possible value for \mathbb{E} \omega_i^4 is \mathbb{E} \omega_i^4 = 1, which is obtained by populating \omega with random signs.

Optimality: Independent Vectors

This result appears to have first been proven by Richard Kueng in unpublished work. We use an argument suggested to me by Robert J. Webber.

Assume \mathscr{A} is a class of \field-Hermitian matrices closed under \field-unitary similarity transformations and that \omega is an isotropic random vector (2). Decompose the test vector as

    \[\omega = a \cdot s \quad \text{for} \quad a \in [0,+\infty), \: s \in\{x\in \field^n : x^*x = n \}.\]

First, we shall show that the variance is reduced by replacing s with a vector t drawn uniformly from the sphere

(7)   \[\sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega}) \le \sup_{A\in\mathscr{A}} \Var(\omega^*A\omega), \]

where

(8)   \[\tilde{\omega} = a\cdot t \quad \text{and}\quad t\sim \text{Uniform} \{ x \in \field^n :x^*x = n \} \quad \text{is independent of $a$}. \]

Note that such a t can be generated as t = Qs for a uniformly random \field-unitary matrix Q drawn independently of a and s. Therefore, we have

\begin{align*}\sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega})&= \sup_{A\in\mathscr{A}} \left[\mathbb{E}[(\tilde{\omega}^*A\tilde{\omega})^2] - (\tr A)^2\right]\\&= \sup_{A\in\mathscr{A}} \left[\mathbb{E}[(a^2 \cdot s^*(Q^*AQ)s)^2] - (\tr (Q^*AQ))^2\right].\end{align*}

Now apply Jensen’s inequality only over the randomness in Q to obtain

\begin{align*}\sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega})&= \sup_{A\in\mathscr{A}} \left[\mathbb{E}[(a^2 \cdot s^*(Q^*AQ)s)^2] - (\tr (Q^*AQ))^2\right] \\&\le \mathbb{E}_Q \sup_{A\in\mathscr{A}} \left[\mathbb{E}_{a,s}[(a^2 \cdot s^*(Q^*AQ)s)^2] - (\tr (Q^*AQ))^2\right].\end{align*}

Finally, note that since \mathscr{A} is closed under \field-unitary similarity transformations, the supremum over Q^*AQ for A \in \mathscr{A} is the same as the supremum over A \in \mathscr{A}, so we obtain

\begin{align*}\sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega})&\le \mathbb{E}_Q \sup_{A\in\mathscr{A}} \left[\mathbb{E}_{a,s}[(a^2 \cdot s^*(Q^*AQ)s)^2] - (\tr (Q^*AQ))^2\right] \\&= \mathbb{E}_Q \sup_{A\in\mathscr{A}} \left[\mathbb{E}_{a,s}[(a^2 \cdot s^*As)^2] - (\tr A)^2\right] \\&= \sup_{A\in\mathscr{A}} \Var(\omega^*A\omega).\end{align*}

We have successfully proven (7). This argument is a specialized version of a far more general result which appears as Proposition 4.1 in this paper.

Next, we shall prove

(9)   \[\sup_{A\in\mathscr{A}} \Var(t^*At) \le \sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega}), \]

where t is still defined as in (8). Indeed, using the chain rule for variance, we obtain

    \begin{align*}\Var(\tilde{\omega}^*A\tilde{\omega})&= \Var(a^2\cdot t^*At) \\&= \mathbb{E}[\Var(a^2\cdot t^* A t \mid a)] + \Var(\mathbb{E}[a^2\cdot t^* A t \mid a]) \\&= \mathbb{E}[a^4]\Var(t^* A t )+ (\tr A)^2\Var(a^2) \\&\ge \mathbb{E}[a^4]\Var(t^* A t ).\end{align*}

Here, we have used that t is uniform on the sphere and thus \mathbb{E}[t^*At] = \tr A. By definition, a is the length of \omega divided by \sqrt{n}. Therefore,

    \[\mathbb{E}[a^2] = \frac{1}{n}\mathbb{E}[\omega^*\omega] = \frac{1}{n} \mathbb{E}[\tr (\omega\omega^*)] = \frac{1}{n} \tr (\mathbb{E}[\omega\omega^*]) = \frac{\tr I}{n} = 1.\]

Therefore, by Jensen’s inequality,

    \[\mathbb{E}[a^4] = \mathbb{E}[(a^2)^2] \ge (\mathbb{E}[a^2])^2 = 1.\]

Thus

    \[\Var(\tilde{\omega}^*A\tilde{\omega}) \ge \mathbb{E}[a^4]\Var(t^* A t ) \ge \Var(t^*At) \quad \text{for every }A,\]

which proves (9).
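
For a numerical illustration of the end result (a sketch of mine, taking \field = \real for simplicity), we can compare a standard Gaussian test vector, which is isotropic but not uniform on the sphere, against a test vector drawn uniformly from the sphere of radius \sqrt{n}; for a fixed symmetric A, the sphere-uniform vector has no larger variance:

    import numpy as np

    rng = np.random.default_rng(2)
    n, trials = 8, 200_000

    G = rng.standard_normal((n, n))
    A = (G + G.T) / 2

    gauss = rng.standard_normal((trials, n))   # isotropic, but not uniform on the sphere
    t = np.sqrt(n) * gauss / np.linalg.norm(gauss, axis=1, keepdims=True)
    # normalizing a Gaussian vector yields a uniformly random direction

    q_gauss = np.einsum("ti,ij,tj->t", gauss, A, gauss)
    q_sphere = np.einsum("ti,ij,tj->t", t, A, t)

    print(np.var(q_sphere), np.var(q_gauss))   # the first number should be smaller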

The Vandermonde Decomposition

In this post, I want to discuss a beautiful and somewhat subtle matrix factorization known as the Vandermonde decomposition that appears frequently in signal processing and control theory. We’ll begin from the very basics, introducing the controls-and-signals context, how the Vandermonde decomposition comes about, and why it’s useful. By the end, I’ll briefly share how we can push the Vandermonde decomposition beyond matrices to the realm of tensors, which can allow us to separate mixed signals from multiple measurements. This tensorial generalization plays an important role in my paper (L_r,L_r,1)-decompositions, sparse component analysis, and the blind separation of sums of exponentials, joint work with Nithin Govindarajan and Lieven De Lathauwer, which recently appeared in the SIAM Journal on Matrix Analysis and Applications.

Finding the Frequencies

Suppose I give you a short recording of a musical chord consisting of three notes. How could you determine which three notes they were? Mathematically, we can represent such a three-note chord as a combination of scaled and shifted cosine functions

(1)   \[f(t) = a_0 \cos(\omega_0 t - \phi_0) + a_1 \cos(\omega_1 t - \phi_1) + a_2 \cos(\omega_2 t - \phi_2). \]

We are interested in obtaining the (angular) frequencies \omega_0, \omega_1, and \omega_2.

In the extreme limit, when we are given the values of the signal for all t, both positive and negative, the frequencies are immediately given by taking a Fourier transform of the function f(\cdot). In practice, we only have access to the function f at certain times t_0,\ldots,t_{n-1} which we assume are equally spaced

    \[t_j = j\Delta t \quad \textnormal{for} \quad j = 0,1,2,\ldots,n-1.\]

Given the samples

    \[f_j = f(t_j) \quad \textnormal{for} \quad j = 0,1,2,\ldots,n-1\]

we could try to identify \omega_0, \omega_1, and \omega_2 using a discrete Fourier transform.1The discrete Fourier transform can be computed very quickly using the fast Fourier transform, as I discussed in a previous post. Unfortunately, this generally requires a large number of samples to identify \omega_0, \omega_1, and \omega_2 accurately. (The accuracy scales roughly like 1/n, where n is the number of samples.) We are interested in finding a better way to identify the frequencies.

Now that we’ve moved from the function f(\cdot), defined for any real input t, to a set of samples f_0,f_1,\ldots,f_{n-1}, it will be helpful to rewrite our formula (1) for f in a different way. By Euler’s identity, the cosines can be rewritten as

    \[\cos \alpha = \frac{\mathrm{e}^{\mathrm{i} \alpha}+\mathrm{e}^{-\mathrm{i} \alpha}}{2}.\]

As a consequence, we can rewrite one of the frequency components in (1) as

    \[a_0 \cos(\omega_0 t - \phi_0) = d_0 \mathrm{e}^{\mathrm{i} \omega_0t} + d_1 \mathrm{e}^{-\mathrm{i} \omega_0t}.\]

Here, d_0 and d_1 are complex coefficients d_0 = a_0 \mathrm{e}^{-\mathrm{i} \phi_0}/2 and d_1 = a_0 \mathrm{e}^{\mathrm{i} \phi_0}/2 which contain the same information as the original parameters a_0 and \phi_0. Now notice that we are only interested in values t_j = j\, \Delta t which are multiples of the spacing \Delta t. Thus, our frequency component can be further rewritten as

    \[a_0 \cos(\omega_0 t_j - \phi_0) = d_0 z_0^j + d_1 z_1^j\]

where z_0 := \mathrm{e}^{\mathrm{i} \omega_0\, \Delta t} and z_1 := \mathrm{e}^{-\mathrm{i}\omega_0\, \Delta t}. Performing these reductions, our samples f_j take the form

(2)   \[f_j = d_0 z_0^j + d_1 z_1^j + \cdots + d_5 z_5^j. \]

We’ve now reformulated our frequency problem as one of identifying the parameters d_0,\ldots,d_5 and z_0,\ldots,z_5 in the relation (2) from a small number of measurements f_0,f_1,\ldots,f_{n-1}.

Frequency Finding as a Matrix Factorization

We will return to the algorithmic problem of identifying the parameters in the relation (2) from measurements in a little bit. First, we will see that (2) can actually be written as a matrix factorization. Understanding computations by matrix factorization has been an extremely successful paradigm in applied mathematics, and we will see in this post how viewing (2) as a matrix factorization can be very useful.

While it may seem odd at first,2As pointed out to me on math stack exchange, one reason forming the Hankel matrix is sensible is because it effectively augments the sequence of numbers f_0,f_1,\ldots,f_{n-1} into a sequence of vectors given by the columns of H. This can reveal patterns in the sequence which are less obvious when it is presented just as a list of numbers. For instance, any seven columns of H are linearly dependent, a surprising fact since the columns of H have length (n-1)/2+1 which can be much larger than seven. In addition, as we will effectively exploit later, vectors in the nullspace of H (or related Hankel matrices derived from the sequence) give recurrence relations obeyed by the sequence. This speaks to a general phenomenon where properties of a sequence (say, arising from snapshots of a dynamical system) can sometimes become more clear by this procedure of delay embedding. it will be illuminating to repackage the measurements f_0,f_1,\ldots,f_{n-1} as a matrix:

(3)   \[H = \begin{bmatrix} f_0 & f_1 & f_2 & \cdots & f_{(n-1)/2} \\f_1 & f_2 & f_3 & \cdots & f_{(n-1)/2+1} \\f_2 & f_3 & f_4 & \cdots & f_{(n-1)/2+2} \\\vdots & \vdots & \vdots & \ddots & \vdots \\f_{(n-1)/2} & f_{(n-1)/2+1} & f_{(n-1)/2+2} & \cdots & f_{n-1} \end{bmatrix}. \]

Here, we have assumed n is odd. The matrix H is known as the Hankel matrix associated with the sequence f_0,\ldots,f_{n-1}. Observe that the entry in position ij of H depends only on the sum of the indices i and j, H_{ij} = f_{i+j}. (We use a zero-indexing system to label the rows and columns of H where, for instance, the first row of H is row 0.)
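
In code, forming H from the samples is a one-liner with scipy (a small sketch of mine; f is assumed to be an array of the n samples with n odd):

    import numpy as np
    from scipy.linalg import hankel

    def hankelize(f):
        # square Hankel matrix with H[i, j] = f[i + j]; len(f) assumed odd
        m = (len(f) - 1) // 2
        return hankel(f[: m + 1], f[m:])   # first column f[0..m], last row f[m..n-1]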

Let’s see how we can interpret the frequency decomposition (2) as a factorization of the Hankel matrix H. We first write out H_{ij} using (2):

(4)   \[H_{ij} = f_{i+j} = \sum_{k=0}^5 d_k z_k^{i+j} = \sum_{k=0}^5 d_k z_k^i \cdot z_k^j. \]

The power z_k^{i+j} was just begging to be factorized as z_k^i\cdot z_k^j, which we did. Equation (4) almost looks like the formula for the product of two matrices with entries z_k^i, so it makes sense to introduce the 6\times ((n-1)/2+1) matrix V with entries V_{ki} = z_k^i. This is a so-called Vandermonde matrix associated with z_0,\ldots,z_5 and has the form

    \[V = \begin{bmatrix}z_0^0 & z_0^1 & z_0^2 & \cdots & z_0^{(n-1)/2} \\z_1^0 & z_1^1 & z_1^2 & \cdots & z_1^{(n-1)/2} \\\vdots & \vdots & \vdots & \ddots & \vdots \\z_5^0 & z_5^1 & z_5^2 & \cdots & z_5^{(n-1)/2}\end{bmatrix}.\]

If we also introduce the 6\times 6 diagonal matrix D = \operatorname{diag}(d_0,d_1,\ldots,d_5), the formula (4) for H can be written as the matrix factorization3In the Vandermonde decomposition H=V^\top D V, the factor V appears transposed even when V is populated with complex numbers! This differs from the usual case in linear algebra where we use the conjugate transpose rather than the ordinary transpose when working with complex matrices. As a related issue, observe that if at least one of the measurements f_0,\ldots,f_{n-1} is a (non-real) complex number, the Hankel matrix H is symmetric but not Hermitian.

(5)   \[H = V^\top D V. \]

This is the Vandermonde decomposition of the Hankel matrix H, a factorization of H as a product of the transpose of a Vandermonde matrix, a diagonal matrix, and that same Vandermonde matrix.

The Vandermonde decomposition immediately tells us all the information d_0,\ldots,d_5 and z_0,z_1,\ldots,z_5 describing our sampled recording f_0,\ldots,f_{n-1} via (2). Thus, the problem of determining d_0,\ldots,d_5 and z_0,z_1,\ldots,z_5 is equivalent to finding the Vandermonde decomposition (5) of the Hankel matrix H.

Computing the Vandermonde Decomposition: Prony’s Method

Computing the Vandermonde decomposition accurately can be a surprisingly hard task, particularly if the measurements f_0,f_1,\ldots,f_{n-1} are corrupted by even a small amount of measurement error. In view of this, I want to present a very classical way of computing this decomposition (dating back to 1795!) known as Prony’s method. This method is conceptually simple and will be a vehicle to continue exploring frequency finding and its connection with Hankel matrices. It remains in use, though its accuracy may be significantly worse compared to other methods.

As a first step to deriving Prony’s method, let’s reformulate the frequency finding problem in a different way. Sums of cosines like the ones in our expression (1) for the function f(\cdot) often appear as the solution to a (linear) ordinary differential equation (ODE). This means that one way we could find the frequencies comprising f(\cdot) would be to find a differential equation which f(\cdot) satisfies. Together with the initial condition f(0), determining all the frequencies would be very straightforward.

Since we only have access to samples f_0, f_1,\ldots,f_{n-1} of f(\cdot) at regular time intervals, we will instead look for the “discrete-time” analog of a linear ODE, a linear recurrence relation. This is an expression of the form

(6)   \[f_m = c_{k-1} f_{m-1} + \cdots + c_1f_{m-k+1}+ c_0 f_{m-k} \quad \textnormal{for every} \quad m = k,\,{k+1},\,{k+2},\cdots. \]

In our case, we’ll have k = 6 because there are six terms in the formula (2) for f_j. Together with initial conditions f_0,f_1,\ldots,f_{k-1}, such a recurrence will allow us to determine the parameters z_0,\ldots,z_5 and d_0,\ldots,d_5 in our formula (2) for our sampled recordings f_0,\ldots,f_{n-1} and hence also allow us to compute the Vandermonde decomposition (5).

Observe that the recurrence (6) is a linear equation in the variables c_0,\ldots,c_5. A very good rule of thumb in applied mathematics is to always write down linear equations in matrix–vector notation to see how they look. Doing this, we obtain

(7)   \[\begin{bmatrix} f_6 \\ f_7 \\ \vdots \\ f_{n-1} \end{bmatrix} = \underbrace{\begin{bmatrix} f_0 & f_1 & f_2 & f_3 & f_4 & f_5 \\ f_1 & f_2 & f_3 & f_4 & f_5 & f_6 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ f_{n-7} & f_{n-6} & f_{n-5} & f_{n-4} & f_{n-3} & f_{n-2} \end{bmatrix}}_{=F}\begin{bmatrix} c_0 \\ c_1 \\ \vdots \\ c_5 \end{bmatrix}. \]

Observe that the matrix on the right-hand side of this equation is also a Hankel matrix (like H in (3)) formed from the samples f_0,\ldots,f_{n-1}. Call this Hankel matrix F. Unlike H in (3), F is rectangular. If n is much larger than 6, F will be tall, possessing many more rows than columns. We assume n > 12 going forward.4n=12 would also be fine for our purposes, but we assume n > 12 to illustrate this highly typical case.

Let’s write (7) a little more compactly as

(8)   \[f_{6 \, :\, n-1} = F c, \]

where we’ve introduced f_{6\,:\, n-1} for the vector on the left-hand side of (7) and collected the recurrence coefficients c_0,\ldots,c_5 into a vector c. For a typical system of linear equations like (8), we would predict the system to have no solution c: Because F has more rows than columns (if n > 12), the system of equations (8) has more equations than unknowns. Fortunately, we are not in the typical case. Despite the fact that we have more equations than unknowns, the linear equations (8) have a unique solution c.5This solution can be computed by solving the 6\times 6 system of linear equations \begin{bmatrix} f_6 \\ f_7 \\ \vdots \\ f_{11} \end{bmatrix} = \begin{bmatrix} f_0 & f_1 & \cdots & f_5 \\ f_1 & f_2 & \cdots & f_6 \\ \vdots & \vdots & \ddots & \vdots \\ f_{5} & f_6 & \cdots & f_{11} \end{bmatrix}\begin{bmatrix} c_0 \\ c_1 \\ \vdots \\ c_5\end{bmatrix}. In particular, the matrix on the right-hand side of this equation is guaranteed to be nonsingular under our assumptions. Using the Vandermonde decomposition, can you see why? The existence of a unique solution is a consequence of the fact that the samples f_0,\ldots,f_{n-1} satisfy the formula (2). As a fun exercise, you might want to verify the existence of a unique c satisfying (8)!

As a quick aside, if the measurements f_0,\ldots,f_{n-1} are corrupted by small measurement errors, then the equations (8) will usually not possess a solution. In this case, it would be appropriate to find the least squares solution to equation (8) as a way of mitigating these errors.

Hurrah! We’ve found the coefficients c_0,\ldots,c_5 providing a recurrence relation (6) for our measurements f_0,\ldots,f_{n-1}. All that is left is to find the parameters z_0,\ldots,z_5 and d_0,\ldots,d_5 in our signal formula (2) and the Vandermonde decomposition (5). Fortunately, this is just a standard computation for linear recurrence relations, nicely paralleling the solution of (homogeneous) linear ODEs by means of the so-called “characteristic equation”. I’ll go through this fairly quickly since this material is well-explained elsewhere on the internet (like Wikipedia). Let’s guess that our recurrence (6) has a solution of the form f_j = z^j; we seek to find all complex numbers z for which this is a bona fide solution. Plugging this solution into the formula (6) for f_6 gives

(9)   \[z^6 = c_5 z^5 + c_4 z^4 + \cdots + c_1 z + c_0. \]

This is the so-called characteristic equation for the recurrence (6). As a single-variable polynomial equation of degree six, it has six complex solutions z_0,z_1,\ldots,z_5. These numbers z_0,z_1,\ldots,z_5 are precisely those numbers which appear in the sequence formula (2) and the Vandermonde decomposition (5).

Finally, we need to compute the coefficients d_0,\ldots,d_5. But this is easy. Observe that the formula (2) provides the following system of linear equations for d_0,\ldots,d_5:

(10)   \[\begin{bmatrix}f_0 \\ f_1 \\ \vdots \\ f_{n-1}\end{bmatrix} = \begin{bmatrix}1 & 1 & \cdots & 1 \\ z_0 & z_1 & \cdots & z_5 \\ \vdots & \vdots & \ddots & \vdots \\ z_0^{n-1} & z_1^{n-1} & \cdots & z_5^{n-1}\end{bmatrix} \begin{bmatrix}d_0 \\ d_1 \\ \vdots \\ d_{5}\end{bmatrix}. \]

Again, this system of equations will have a unique solution if the measurements f_0,\ldots,f_{n-1} are uncorrupted by errors (and can be solved in the least squares sense if corrupted). This gives d_0,\ldots,d_5, completing our goal of computing the parameters in the formula (2) or, equivalently, finding the Vandermonde decomposition (5).
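
Here is a compact numerical sketch of the whole pipeline (my own code, not from any of the cited references), assuming k = 6 exponential terms as in (2); both linear solves are written as least-squares solves so the same code also handles mildly noisy samples, as discussed above:

    import numpy as np

    def prony(f, k=6):
        # recover (z, d) with f[j] = sum_k d[k] * z[k]**j from the samples f
        f = np.asarray(f, dtype=complex)
        n = len(f)
        # step 1: solve the Hankel system (7)-(8) for the recurrence coefficients c
        F = np.array([f[i : i + k] for i in range(n - k)])     # (n-k) x k Hankel matrix
        c, *_ = np.linalg.lstsq(F, f[k:], rcond=None)
        # step 2: roots of the characteristic polynomial z^k = c[k-1] z^{k-1} + ... + c[0]
        z = np.roots(np.concatenate(([1.0], -c[::-1])))
        # step 3: solve the (transposed) Vandermonde system (10) for the amplitudes d
        V = np.vander(z, N=n, increasing=True).T               # n x k matrix with rows z**j
        d, *_ = np.linalg.lstsq(V, f, rcond=None)
        return z, d

    # example: three cosines sampled at n equispaced points
    n, dt = 101, 0.1
    t = dt * np.arange(n)
    f = 1.0 * np.cos(2.0 * t - 0.3) + 0.5 * np.cos(3.5 * t) + 0.25 * np.cos(5.0 * t - 1.0)
    z, d = prony(f)
    # recovered angular frequencies (each appears twice, once for e^{+i w t} and once for e^{-i w t})
    print(np.sort(np.abs(np.angle(z))) / dt)

On this example data, the printed angular frequencies should come out near 2, 3.5, and 5.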

We have accomplished our goal of computing the Vandermonde decomposition. The approach by which we did so is known as Prony’s method, as mentioned in the introduction to this section. As suggested, this method may not always give high-accuracy results. There are two obvious culprits that jump out about why this is the case. Prony’s method requires solving for the roots of the polynomial equation (9) expressed “in the monomial basis” and solving a system of linear equations (10) with a (transposed) Vandermonde matrix. Both of these problems can be notoriously ill-conditioned and thus challenging to solve accurately, and they may require the measurements f_0,\ldots,f_{n-1} to be taken to very high accuracy. Notwithstanding this, Prony’s method does give useful results in some cases and forms the basis for potentially more accurate methods, such as those involving generalized eigenvalue problems.

Separating Signals: Extending the Vandermonde Decomposition to Tensors

In our discussion of the frequency identification problem, the Vandermonde decomposition (5) has effectively been an equivalent way of showing the samples f_j are a combination of exponentials z^j. So far, the benefits of the matrix factorization perspective have yet to really reveal themselves.

So what are the benefits of the Vandermonde decomposition? A couple of nice observations related to the Vandermonde decomposition and the “Hankelization” of the signals H have already been lurking in the background. For instance, the rank of the Hankel matrix H is the number of frequency components z_k needed to describe the samples, and the representation of the samples as a mixture of exponentials is uniquely determined only if the matrix H does not have full rank; I have a little more to say about this at the very end. There are also benefits to certain computational problems; one can use Vandermonde decompositions to compute super high accuracy singular value decompositions of Hankel matrices.

The power of the Vandermonde decomposition really starts to shine when we go beyond the basic frequency finding problem we discussed by introducing more signals. Suppose now there are three short recordings f^{(1)}(\cdot), f^{(2)}(\cdot), and f^{(3)}(\cdot). (Here, the superscript denotes an index rather than differentiation.) Each signal is a weighted mixture of three sources s^{(1)}(\cdot), s^{(2)}(\cdot), and s^{(3)}(\cdot), each of which plays a musical chord of three notes (thus representable as a sum of cosines as in (1)). One can think of the sources as being produced by three different musical instruments at different places in a room and the recordings f^{(1)}(\cdot), f^{(2)}(\cdot), and f^{(3)}(\cdot) as being taken from different microphones in the room.6This scenario of instruments and microphones ignores the finite propagation speed of sound, which would also introduce time delays of the sources in the recorded signals. We effectively treat the speed of sound as being instantaneous. Our goal is now not just to identify the musical notes in the recordings but also to identify how to assign those notes to reconstruct the source signals s^{(1)}(\cdot), s^{(2)}(\cdot), and s^{(3)}(\cdot).

Taking inspiration from earlier, we record samples f_0^{(\ell)},\ldots,f_{n-1}^{(\ell)} for each recording \ell = 1,2,3 and form each collection of samples into a Hankel matrix

    \[H^{(\ell)} = \begin{bmatrix} f_0^{(\ell)} & f_1^{(\ell)} & f_2^{(\ell)} & \cdots & f_{(n-1)/2}^{(\ell)} \\f_1^{(\ell)} & f_2^{(\ell)} & f_3^{(\ell)} & \cdots & f_{(n-1)/2+1}^{(\ell)} \\f_2^{(\ell)} & f_3^{(\ell)} & f_4^{(\ell)} & \cdots & f_{(n-1)/2+2}^{(\ell)} \\\vdots & \vdots & \vdots & \ddots & \vdots \\f_{(n-1)/2}^{(\ell)} & f_{(n-1)/2+1}^{(\ell)} & f_{(n-1)/2+2}^{(\ell)} & \cdots & f_{n-1}^{(\ell)} \end{bmatrix}.\]

Here comes the crazy part: Stack the Hankelized recordings H^{(1)}, H^{(2)}, and H^{(3)} as slices of a tensor \mathcal{H}. A tensor, in this context, just means a multidimensional array of numbers. Just as a vector is a one-dimensional array and a matrix is a two-dimensional array, a tensor could have any number of dimensions. In our case, we need just three. If we use a MATLAB-esque indexing notation, \mathcal{H} is a (n-1)/2\times (n-1)/2 \times 3 array given by

    \[\mathcal{H}(:,:,\ell) = H^{(\ell)} \quad \textnormal{for} \quad \ell=1,2,3.\]
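
In numpy code, the stacking step is simple. Here is a small sketch of mine (the sources are single cosines rather than full chords just to keep the snippet short, and hankelize is the helper from earlier in this post, redefined here so the snippet runs on its own):

    import numpy as np
    from scipy.linalg import hankel

    def hankelize(f):
        m = (len(f) - 1) // 2
        return hankel(f[: m + 1], f[m:])

    # three synthetic recordings, each a different mixture of the same sources
    n = 101
    t = 0.1 * np.arange(n)
    sources = np.stack([np.cos(2 * t), np.cos(3.5 * t - 1.0), np.cos(5 * t + 0.4)])
    M = np.array([[1.0, 0.5, 0.2], [0.3, 1.0, 0.6], [0.7, 0.1, 1.0]])  # mixing weights
    recordings = M @ sources

    # stack the three Hankelizations into a third-order tensor
    H = np.stack([hankelize(rec) for rec in recordings], axis=-1)
    print(H.shape)   # (51, 51, 3); H[:, :, ell] is the Hankel matrix of recording ell

The separation itself would then be carried out by the (L_r,L_r,1) block term decomposition, e.g. via Tensorlab in MATLAB.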

The remarkable thing is that the source signals can be determined (under appropriate conditions) by computing a special kind of Vandermonde decomposition of the tensor \mathcal{H}! (Specifically, the required decomposition is a Vandermonde-structured (L_r,L_r,1)-block term decomposition of the tensor \mathcal{H}.) Even more cool, this decomposition can be computed using general-purpose software like Tensorlab.

If this sounds interesting, I would encourage you to check out my recently published paper (L_r,L_r,1)-decompositions, sparse component analysis, and the blind separation of sums of exponentials, joint work with Nithin Govindarajan and Lieven De Lathauwer and recently published in the SIAM Journal on Matrix Analysis and Applications. In the paper, we explain what this (L_r,L_r,1)-decomposition is and how applying it to \mathcal{H} can be used to separate mixtures of exponential signals from the resulting Vandermonde structure, an idea originating in the work of De Lathauwer. A very important question for these signal separation problems is that of uniqueness. Given the three sampled recordings (comprising the tensor \mathcal{H}), is there just one way of unscrambling the mixtures into different sources, or are there multiple? If there are multiple, then we might have possibly computed the wrong one. If there is just a single unscrambling, though, then we’ve done our job and unmixed the scrambled signals. The uniqueness of these tensor decompositions is fairly complicated math, and we survey existing results and prove new ones in this paper.7One of our main technical contributions is a new notion of uniqueness of (L_r,L_r,1)-decompositions which we believe is nicely adapted to the signal separation context. Specifically, we prove mathematized versions of the statement “if the source signals are sufficiently different from each other and the measurements are of sufficiently high quality, then the signals can uniquely be separated”.

Conclusions, Loose Ends, and Extensions

The central idea that we’ve been discussing is how it can be useful to convert between a sequence of observations f_0,f_1,\ldots,f_{n-1} and a special matricization of this sequence into a Hankel matrix (either square, as in (3), or rectangular, as in (7)). By manipulating the Hankel matrix, say, by computing its Vandermonde decomposition (5), we learn something about the original signal, namely a representation of the form (2).

This is a powerful idea which appears implicitly or explicitly throughout various subfields of mathematics, engineering, and computation. As with many other useful ideas, this paradigm admits many natural generalizations and extensions. We saw one already in this post, where we extended the Vandermonde decomposition to the realm of tensors to solve signal separation problems. To end this post, I’ll place a few breadcrumbs at the beginning of a few of the trails of these generalizations for anyone curious to learn more, wrapping up a few loose ends on the way.

Is the Vandermonde Decomposition Unique?
A natural question is whether the Vandermonde decomposition (5) is unique. That is, is it possible that there exist two Vandermonde decompositions

    \[H = V^\top DV = \tilde{V}^\top \tilde{D} \tilde{V}\]

of the same (square) Hankel matrix H? This is equivalent to whether the frequency components z_0,z_1,\ldots can be uniquely determined from the measurements f_0,f_1,\ldots,f_{n-1}.

Fortunately, the Vandermonde decomposition is unique if (and only if) the matrix H is a rank-deficient matrix. Let’s unpack this a little bit. (For those who could use a refresher on rank, I have a blog post on precisely this topic.) Note that the Vandermonde decomposition is a rank factorization8Rank factorizations are sometimes referred to as “rank-revealing factorizations”. I discuss my dispreference for this term in my blog post on low-rank matrices. since V has \rank H rows, V has full (row) rank, and D is invertible. This means that if we take enough samples f_0,\ldots,f_{n-1} of a function f(\cdot) which is a (finite) combination of exponentials, the matrix H will be rank-deficient and the Vandermonde decomposition unique.9The uniqueness of the Vandermonde decomposition can be proven by showing that, in our construction by Prony’s method, the c‘s, z‘s, and d‘s are all uniquely determined. If too few samples are taken, then H does not contain enough information to determine the frequency components of the signal f(\cdot) and thus the Vandermonde decomposition is non-unique.
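
Numerically, this suggests an easy sanity check (a sketch of mine) before trusting a computed Vandermonde decomposition: compare the numerical rank of H to its size.

    import numpy as np
    from scipy.linalg import hankel

    t = 0.1 * np.arange(41)
    f = np.cos(2 * t) + 0.5 * np.cos(3.5 * t)    # four exponential components
    m = (len(f) - 1) // 2
    H = hankel(f[: m + 1], f[m:])
    print(np.linalg.matrix_rank(H), H.shape[0])  # expect 4 and 21: rank-deficient, so unique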

Does Every Hankel Matrix Have a Vandermonde Decomposition?
This post has exclusively focused on a situation where we are provided with a sequence we know to be representable as a mixture of exponentials (i.e., taking the form (2)), from which the existence of the Vandermonde decomposition (5) follows immediately. What if we didn’t know this were the case, and we were just given a (square) Hankel matrix H? Is H guaranteed to possess a Vandermonde decomposition of the form (5)?

Unfortunately, the answer is no; there exist Hankel matrices which do not possess a Vandermonde decomposition. The issue is related to the fact that the appropriate characteristic equation (analogous to (9)) might possess repeated roots, making the solutions to the recurrence (6) not just take the form z^j but also jz^j and perhaps j^2z^j, j^3z^j, etc.

Are There Cases When the Vandermonde Decomposition is Guaranteed To Exist?
There is one natural case when a (square) Hankel matrix is guaranteed to possess a Vandermonde decomposition, namely when the matrix is nonsingular/invertible/full-rank. Despite this being a widely circulated fact, I am unaware of a simple proof for why this is the case. Unfortunately, there are not just one but infinitely many Vandermonde decompositions for a nonsingular Hankel matrix, suggesting these decompositions are not useful for the frequency finding problem that motivated this post.
What If My Hankel Matrix Does Not Possess a Vandermonde Decomposition?
As discussed above, a Hankel matrix may fail to have a Vandermonde decomposition if the characteristic equation (a la (9)) has repeated roots. This is very much analogous to the case of a non-diagonalizable matrix for which the characteristic polynomial has repeated roots. In this case, while diagonalization is not possible, one can “almost-diagonalize” the matrix by reducing it to its Jordan normal form. In total analogy, every Hankel matrix can be “almost Vandermonde decomposed” into a confluent Vandermonde decomposition (a discovery that appears to have been made independently several times). I will leave these links to discuss the exact nature of this decomposition, though I warn any potential reader that these resources introduce the decomposition first for Hankel matrices with infinitely many rows and columns before considering the finite case as we have. One is warned that while the Vandermonde decomposition is always a rank decomposition, the confluent Vandermonde decomposition is not guaranteed to be one.10Rather, the confluent Vandermonde decomposition is a rank decomposition for an infinite extension of a finite Hankel matrix. Consider the Hankel matrix H = \begin{bmatrix} 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & \cdots & 0 & 1 \end{bmatrix}.This matrix has rank-two but no rank-two confluent Vandermonde decomposition. The issue is that when extended to an infinite Hankel matrix \begin{bmatrix} 1 & \cdots & 0 & 0 & &\cdots & 1\\ \vdots & \ddots & \vdots &\vdots\\ 0 & \cdots & 0 & 0 & 1\\ 0 & \cdots & 0 & 1 \\ & & 1 & & \ddots \\ \vdots\\ 1\end{bmatrix}, this (infinite!) matrix has a rank exceeding the size of the original Hankel matrix H.
The Toeplitz Vandermonde Decomposition
Just as it proved useful to arrange samples f_0,\ldots, f_{n-1} into a Hankel matrix, it can also be useful to form them into a Toeplitz matrix

\[T = \begin{bmatrix} f_{(n-1)/2} & f_{(n-1)/2+1} & f_{(n-1)/2+2} & \cdots & f_{n-1} \\f_{(n-1)/2-1} & f_{(n-1)/2} & f_{(n-1)/2+1} & \cdots & f_{n-2} \\f_{(n-1)/2-2} & f_{(n-1)/2-1} & f_{(n-1)/2} & \cdots & f_{n-3} \\\vdots & \vdots & \vdots & \ddots & \vdots \\ f_0 & f_1 & f_2 & \cdots & f_{(n-1)/2} \end{bmatrix}.\]

The Toeplitz matrix T has the appealing property that the matrix–vector product Tx computes a (discrete) convolution of the sampled signal f with the sampled signal x, which has all sorts of uses in signal processing and related fields.11I discuss Toeplitz matrices and a fast algorithm to compute the product Tx using the fast Fourier transform more in a blog post I wrote about the subject.

One can interconvert between Hankel and Toeplitz matrices by reversing the order of the rows. As such, to the extent that Hankel matrices possess Vandermonde decompositions (with all the asterisks and fine print just discussed), Toeplitz matrices do as well, but with the rows of the first factor reversed:

    \[T = \operatorname{ReversedRows}(V^\top) \cdot DV.\]
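
This row-reversal relation is easy to check numerically (a quick sketch of mine using scipy's constructors):

    import numpy as np
    from scipy.linalg import hankel, toeplitz

    f = np.arange(11.0)                        # stand-in samples f_0, ..., f_{n-1}, n = 11
    m = (len(f) - 1) // 2
    H = hankel(f[: m + 1], f[m:])              # the Hankel matrix (3)
    T = toeplitz(f[m::-1], f[m:])              # the Toeplitz matrix above
    print(np.array_equal(np.flipud(H), T))     # reversing the rows of H gives T: True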

There is a special and important case where more is true. If a Toeplitz matrix is (Hermitian) positive semidefinite, then T always possesses a Vandermonde decomposition of the form

    \[T = V^* D V,\]

where V is a Vandermonde matrix associated with parameters z_0,z_1,\ldots,z_{k-1} which are complex numbers of absolute value one and D is a diagonal matrix with real positive entries.12The keen-eyed reader will note that V appears conjugate transposed in this formula rather than transposed as in the Hankel Vandermonde decomposition (5). This Vandermonde decomposition is unique if and only if T is rank-deficient. Positive semidefinite Toeplitz matrices are important as they occur as autocorrelation matrices which effectively describe the similarity between a signal and different shifts of itself in time. Autocorrelation matrices appear under different names in everything from signal processing to random processes to near-term quantum algorithms (a topic near and dear to my heart). A delightfully simple and linear algebraic derivation of this result is given by Yang and Xie (see Theorem 1).13Unfortunately, Yang and Xie incorrectly claim that every Toeplitz matrix possesses a rank factorization Vandermonde decomposition of the form T = V^* D V where V is a Vandermonde matrix populated with entries on the unit circle and D is a diagonal matrix of possibly *complex* entries. This claim is disproven by the example \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}. This decomposition can be generalized to infinite positive semidefinite Toeplitz matrices (appropriately defined).14Specifically, one can show that an infinite positive semidefinite Toeplitz matrix (appropriately defined) also has a “Vandermonde decomposition” (appropriately defined). This result is often known as Herglotz’s theorem and is generalized by the Bochner–Weil theorem.

Minimal Rank Completions

I’m delighted to share that my first applied mathematics paper, Minimal Rank Completions for Overlapping Blocks, coauthored with Nithin Govindarajan and Shivkumar Chandrasekaran, has been accepted for publication in Linear Algebra and its Applications. (We also have a preprint on arXiv.) In this post, I wanted to share what our paper is about and share an alternate derivation which ended up being cut from the final paper.

Our paper is concerned with the following question: given a collection of matrices which overlap in a particular way, can we choose the region of simultaneous overlap so as to minimize the rank of each of the individual matrices? This is a multi-objective optimization problem, so a priori it is not clear it has a simultaneous solution. So it was a great surprise to us when we discovered that it does. In fact, we were able to find a tractable algorithm to find all solutions to this multi-objective problem.

Our enormous thanks go to the anonymous reviewer, who went above and beyond by discovering and sketching to us a dramatically shorter and more revealing proof1In particular, our original proof only produced some solutions to this problem, whereas the reviewer’s improved proof strategy allowed us to characterize all solutions. of our main theorem. We are truly indebted to this extraordinary reviewer for the significantly streamlined and more understandable presentation in the current iteration of our paper.

For the remainder of this post, I want to discuss minimal rank completions in a little more generality and present a solution to a special case of a minimal rank completion problem which can be tractably solved. I will present a solution to this problem we discovered which I have not seen elsewhere in the literature. I hope that this perspective will be complementary to existing solutions to this problem published in the literature including the summary of this result in our paper.

Minimal Rank Completion Problems

Minimal rank completions have achieved a lot of buzz in statistics and data science. Given a matrix with unknown entries, it is often very natural to impute the missing entries as those minimizing the rank of the matrix. This is justified by a kind of Occam’s razor argument: big data matrices are very often (approximately) low-rank, so when data is missing, we may assume it takes whatever values are necessary to minimize the rank.

Finding the rank-minimizing choice of missing entries is, in general, NP-hard. However, under certain assumptions, semidefinite programming relaxations can sometimes exactly recover the missing entries.

Alternately, if the missing entries belong to a special pattern, it may be possible to find the rank-minimizing choice of the unknown entries using linear algebraic tools like matrix factorizations. These approaches have significantly more limited domains of applicability than optimization-based approaches, but have advantages in that they

  • can be more tractable as they involve matrix factorizations rather than solving optimization problems;
  • work over arbitrary fields of numbers, such as finite fields appearing in computer science;
  • can find all ways of choosing the entries to minimize the rank, in particular certifying or dis-certifying the uniqueness of the rank-minimizing choice; and
  • are guaranteed to find the rank-minimizer, without any statistical assumptions.

These algebraic methods have their roots in systems theory, integral operators, and rank-structured matrices. The last of these was the application which motivated our interest in the subject.

Our paper concerns an overlapping variant of this problem, where we simultaneously minimize the ranks of several matrices by choosing the entries in the overlap carefully. This problem emerged to us naturally in the construction of certain matrix representations, and we hope it might prove useful in tackling other problems. As it turns out, the overlapping problem can be solved very much in the spirit of a much simpler problem, the block 2\times 2 minimal rank completion problem, which we will spend most of the remainder of this post discussing.

A Solution to the Block 2×2 Case

The block 2\times 2 minimal rank completion problem is as follows: given a partially filled block matrix

(1)   \begin{equation*} \begin{bmatrix} A & B \\ ? & C \end{bmatrix} \end{equation*}

how can the “?” be filled in to minimize the rank of this matrix?

A generalized version of this problem was originally solved by Kaashoek and Woerdeman. An alternate derivation using matrix factorizations is given by Eidelman, Gohberg, and Haimovici, though they only find some of the solutions to this problem. We seek to characterize all ways of choosing the “?” so as to minimize the rank of the completed matrix.

Here, I present the solution to this problem which my coauthors and I originally discovered, which is different from the one we summarize in the final version of our paper.2This construction does appear in Section 4 of an initial preprint of ours on rank-structured matrices. This solution is in the spirit of the one by Eidelman, Gohberg, and Haimovici but does produce all solutions.

Let’s start by recalling some facts about linear algebra. Given an m\times n real3The analysis in this discussion holds over any field, but we shall use the real numbers for concreteness. matrix K, one can define its column space, which I will denote \operatorname{Col} K, to be the vector space consisting of all linear combinations of the columns of K. From this column space, or indeed any vector subspace of \mathbb{R}^m, one can extract a basis for this subspace, meaning every vector in the subspace can be uniquely decomposed as a linear combination of elements from the basis. In this discussion, we shall always arrange the basis vectors as columns of a basis matrix P. If we instead consider the row space \operatorname{Row} K of K, then we arrange the elements of a basis as the rows of a matrix Q^\top (that is, as the columns of Q).

Before we solve the problem, let’s reason about the lowest possible rank we could possibly expect for the completed matrix. Since the rank of a matrix can be no smaller than the rank of any of its submatrices, clearly we must have that, for any assignment X of the “?”,

(2)   \begin{equation*} \operatorname{rank} \begin{bmatrix} A & B \\ X & C \end{bmatrix} \ge \operatorname{rank} \begin{bmatrix} A & B\end{bmatrix}. \end{equation*}

However, in general, the rank of the completed matrix must be higher than \operatorname{rank} \begin{bmatrix} A & B\end{bmatrix}, as is exemplified when A and B are both zero but C is not. In addition to rank of \begin{bmatrix} A & B\end{bmatrix}, we must also account for rows of \begin{bmatrix} X & C\end{bmatrix} which cannot be written as linear combinations of the rows above, no matter how X is chosen. With a little noodling, one can convince oneself that there are always at least \operatorname{rank} \begin{bmatrix} B \\ C \end{bmatrix} - \operatorname{rank} B such rows, leading to the bound

(3)   \begin{equation*} \operatorname{rank} \begin{bmatrix} A & B \\ X & C \end{bmatrix} \ge \operatorname{rank} \begin{bmatrix} A & B\end{bmatrix} + \operatorname{rank} \begin{bmatrix} B \\ C \end{bmatrix} - \operatorname{rank} B =: r_{\rm opt}. \end{equation*}
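
Before constructing a rank-minimizing X in general, here is a quick numerical sketch (mine, for illustration only) of the bound r_{\rm opt}, together with the easy special case where B is square and invertible; in that case \operatorname{rank} \begin{bmatrix} B \\ C \end{bmatrix} = \operatorname{rank} B and the choice X = CB^{-1}A attains the bound, since the completed matrix then factors as \begin{bmatrix} I \\ CB^{-1} \end{bmatrix}\begin{bmatrix} A & B \end{bmatrix}. The general construction below handles arbitrary B.

    import numpy as np
    from numpy.linalg import matrix_rank, inv

    rng = np.random.default_rng(3)

    # blocks with B square and invertible (the easy special case)
    k = 4
    A = rng.standard_normal((k, 2)) @ rng.standard_normal((2, 6))
    B = rng.standard_normal((k, k))            # invertible with probability one
    C = rng.standard_normal((5, 3)) @ rng.standard_normal((3, k))

    r_opt = (matrix_rank(np.hstack([A, B]))
             + matrix_rank(np.vstack([B, C]))
             - matrix_rank(B))

    X = C @ inv(B) @ A                         # a rank-minimizing completion for invertible B
    completed = np.block([[A, B], [X, C]])
    print(matrix_rank(completed), r_opt)       # the two ranks agree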

We shall show that, by judicious choice of X, Eq. (3) can always be achieved with equality, making r_{\rm opt} the minimal rank for this completion problem.

Let us now find such a rank-minimizing X. The construction begins by considering the column spaces \operatorname{Col} A and \operatorname{Col} B of the matrices A and B. The intersection of these column spaces \operatorname{Col} A \cap \operatorname{Col} B is again a vector subspace, and we choose a basis P_{AB} for it. The subspace \operatorname{Col} A \cap \operatorname{Col} B might have a smaller dimension than \operatorname{Col} A. Therefore, to find a basis for \operatorname{Col} A, we can extend the basis P_{AB} by adding new columns P_{\overline{A}} to it so that the enlarged matrix \begin{bmatrix} P_{AB} & P_{\overline{A}} \end{bmatrix} forms a basis for \operatorname{Col} A. Similarly, we can find new columns P_{\overline{B}} to add to P_{AB} so that the matrix \begin{bmatrix} P_{AB} & P_{\overline{B}} \end{bmatrix} forms a basis for \operatorname{Col} B.

Now comes something subtle. Since P = \begin{bmatrix} P_{AB} & P_{\overline{A}} \end{bmatrix} forms a basis for \operatorname{Col} A, every column in A can be written as a linear combination of columns in P. In matrix language, this means there exists a matrix Q^\top such that A = PQ^\top—in fact, this exact Q forms a basis for \operatorname{Row} A. Since P is divided into two collections of columns, it is only natural to similarly (and conformally) divide Q into two pieces as Q = \begin{bmatrix} Q_{AB,A} & Q_{\overline{A},A} \end{bmatrix}. Thus, doing the same with B as we did with A, we obtain two matrix factorizations

(4)   \begin{equation*} A = \begin{bmatrix} P_{AB} & P_{\overline{A}} \end{bmatrix} \begin{bmatrix} Q_{AB,A}^\top \\ Q_{\overline{A},A}^\top \end{bmatrix}, \quad B = \begin{bmatrix} P_{AB} & P_{\overline{B}} \end{bmatrix} \begin{bmatrix} Q_{AB,B}^\top \\ Q_{\overline{B},B}^\top \end{bmatrix}. \end{equation*}

Now we do the same song and dance with the row spaces of B and C. Let’s go through the construction somewhat quickly. We start with a basis Q_{BC} for the intersection \operatorname{Row} B \cap \operatorname{Row} C. We then extend this basis to bases \begin{bmatrix} Q_{BC} & Q_{\overline{B}} \end{bmatrix} and \begin{bmatrix} Q_{BC} & Q_{\overline{C}} \end{bmatrix} for \operatorname{Row} B and \operatorname{Row} C. From here, we derive the existence of matrix factorizations analogous to Eq. (4):

(5)   \begin{equation*} B = \begin{bmatrix} P_{B,BC} & P_{B,\overline{B}} \end{bmatrix} \begin{bmatrix} Q_{BC}^\top \\ Q_{\overline{B}}^\top \end{bmatrix},\quad C = \begin{bmatrix} P_{C,BC} & P_{C,\overline{C}} \end{bmatrix} \begin{bmatrix} Q_{BC}^\top \\ Q_{\overline{C}}^\top \end{bmatrix}. \end{equation*}

For the next part, we shall take advantage of a powerful fact: if I have bases P and Q for the column and row spaces of a matrix K, there exists a nonsingular matrix R for which K=PRQ^\top. Applying this fact to the bases \begin{bmatrix} P_{B,BC} & P_{B,\overline{B}} \end{bmatrix} and \begin{bmatrix} Q_{AB,B} & Q_{\overline{B},B} \end{bmatrix} for B‘s column and row spaces, we get the existence of a matrix R = \begin{bmatrix} R_{BC,AB} & R_{BC,\overline{B}} \\ R_{\overline{B},AB} & R_{\overline{B},\overline{B}} \end{bmatrix} such that

(6)   \begin{equation*} B = \begin{bmatrix} P_{B,BC} & P_{B,\overline{B}} \end{bmatrix}\begin{bmatrix} R_{BC,AB} & R_{BC,\overline{B}} \\ R_{\overline{B},AB} & R_{\overline{B},\overline{B}} \end{bmatrix} \begin{bmatrix} Q_{AB,B}^\top \\ Q_{\overline{B},B}^\top \end{bmatrix}. \end{equation*}
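
As an aside, the fact just used is easy to verify numerically (a sketch of mine, using orthonormal bases computed from an SVD for concreteness):

    import numpy as np

    rng = np.random.default_rng(4)
    K = rng.standard_normal((7, 3)) @ rng.standard_normal((3, 9))   # a rank-3 matrix

    U, s, Vt = np.linalg.svd(K)
    r = int(np.sum(s > 1e-10))
    P = U[:, :r]             # columns form a basis for Col K
    Q = Vt[:r, :].T          # rows of Q^T form a basis for Row K

    R = np.linalg.pinv(P) @ K @ np.linalg.pinv(Q.T)   # the middle factor
    print(np.allclose(K, P @ R @ Q.T))                # True
    print(abs(np.linalg.det(R)) > 1e-10)              # R is r x r and nonsingular: True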

We now have all ingredients to describe the solution. Rather than trying to “derive” the solution in a rigorous way, let us try and discover the solution in a non-rigorous way and justify our work later. We’re hoping to find assignments X of the “?” such that \begin{bmatrix} A & B \\ X & C \end{bmatrix} achieves the minimum possible rank r_{\rm opt} defined in Eq. (3). To do so, let’s try and find a rank factorization of the completed matrix and then infer what values X will take. First off, let’s build a rank factorization for \begin{bmatrix} A & B \end{bmatrix} using the building blocks we’ve already built

(7)   \begin{equation*} \begin{bmatrix} A & B \end{bmatrix} = \begin{bmatrix} P_{AB} & P_{\overline{B}} & P_{\overline{A}} \end{bmatrix} \begin{bmatrix} Q_{AB,A}^\top & Q_{AB,B}^\top \\ 0 & Q_{\overline{B},B}^\top \\ Q_{\overline{A},A}^\top & 0 \end{bmatrix}. \end{equation*}

Now we want to extend this to a rank factorization for the entire completed matrix. Let’s build up to this in stages, denoting by \star entries whose value we have yet to determine. For us to have a hope of representing the matrix C, we’ll need to somehow add Q^\top_{\overline{C}} into our rank factorization. Doing exactly this, we get the partially specified rank factorization

(8)   \begin{equation*} \begin{bmatrix} A & B \\ X & C \end{bmatrix} = \begin{bmatrix} P_{AB} & P_{\overline{B}} & P_{\overline{A}} & 0 \\ \star & \star & \star & \star \end{bmatrix} \begin{bmatrix} Q_{AB,A}^\top & Q_{AB,B}^\top \\ 0 & Q_{\overline{B},B}^\top \\ Q_{\overline{A},A}^\top & 0\\ \star & Q_{\overline{C}}^\top \end{bmatrix}. \end{equation*}

Now, to fill in the second block row of the left factor, we recall that C = P_{C,BC} Q_{BC}^\top + P_{C,\overline{C}} Q_{\overline{C}}^\top. From Eqs. (5) and (6), we deduce that Q_{BC}^\top = R_{BC,AB} Q_{AB,B}^\top + R_{BC,\overline{B}} Q_{\overline{B},B}^\top. Thus, we can fill in more entries:

(9)   \begin{equation*} \begin{bmatrix} A & B \\ X & C \end{bmatrix} = \begin{bmatrix} P_{AB} & P_{\overline{B}} & P_{\overline{A}} & 0 \\ P_{C,BC}R_{BC,AB} & P_{C,BC} R_{BC,\overline{B}} & \star & P_{C,\overline{C}} \end{bmatrix} \begin{bmatrix} Q_{AB,A}^\top & Q_{AB,B}^\top \\ 0 & Q_{\overline{B},B}^\top \\ Q_{\overline{A},A}^\top & 0\\ \star & Q_{\overline{C}}^\top \end{bmatrix}. \end{equation*}

With these entries filled in, the remaining \star‘s can be chosen arbitrarily. Assigning names F_{\overline{A}}^\top and F_{\overline{C}} to these free variables, we conclude that

(10)   \begin{equation*} \begin{bmatrix} A & B \\ X & C \end{bmatrix} = \begin{bmatrix} P_{AB} & P_{\overline{B}} & P_{\overline{A}} & 0 \\ P_{C,BC}R_{BC,AB} & P_{C,BC} R_{BC,\overline{B}} & F_{\overline{C}} & P_{C,\overline{C}} \end{bmatrix} \begin{bmatrix} Q_{AB,A}^\top & Q_{AB,B}^\top \\ 0 & Q_{\overline{B},B}^\top \\ Q_{\overline{A},A}^\top & 0\\ F_{\overline{A}}^\top & Q_{\overline{C}}^\top \end{bmatrix} \end{equation*}

and

(11)   \begin{equation*} X = P_{C,BC}R_{BC,AB}Q_{AB,A}^\top + F_{\overline{C}} Q_{\overline{A},A}^\top + P_{C,\overline{C}} F_{\overline{A}}^\top. \end{equation*}

From all the preceding analysis, we know that all X‘s of the form Eq. (11) solve the minimal rank completion problem, making the completed matrix achieve the minimal rank r_{\rm opt} defined in Eq. (3). With a little more elbow grease, one can also confirm that all such minimal completions X are of the form Eq. (11), proving that Eq. (11) indeed characterizes all solutions to the minimal rank completion problem. This completes the characterization and construction of the complete set of minimizers to the block 2\times 2 minimal rank completion problem.

The Overlapping Block Minimal Rank Completion Problem

If you found this post interesting, be sure to check out our paper (or here on arXiv) for an explanation of a different way of thinking about the solution of the block 2\times 2 minimal rank completion problem and a solution to a more general “overlapping” variant. The treatment should be reasonably self-contained, and we hope the solution to this problem could prove a useful tool in tackling open problems in systems theory and the study of rank-structured matrices.