How Good Can Stochastic Trace Estimates Be?

I am excited to share that our paper XTrace: Making the most of every sample in stochastic trace estimation has been published in the SIAM Journal on Matrix Analysis and Applications. (See also our paper on arXiv.)

Spurred by this exciting news, I wanted to take the opportunity to share one of my favorite results in randomized numerical linear algebra: a “speed limit” result of Meyer, Musco, Musco, and Woodruff that establishes a fundamental limitation on how accurate any trace estimation algorithm can be.

Let’s back up. Given an unknown square matrix A, we seek to compute the trace of A, defined to be the sum of its diagonal entries:

    \[\tr(A) \coloneqq \sum_{i=1}^n A_{ii}.\]

The catch? We assume that we can only access the matrix A through matrix–vector products (affectionately known as “matvecs”): Given any vector x, we have access to Ax. Our goal is to form an estimate \hat{\tr} that is as accurate as possible while using as few matvecs as we can get away with.

To simplify things, let’s assume the matrix A is symmetric and positive (semi)definite. The classical algorithm for trace estimation is due to Girard and Hutchinson, producing a probabilistic estimate \hat{\tr} with a small average (relative) error:

    \[\expect\left[\frac{|\hat{\tr}-\tr(A)|}{\tr(A)}\right] \le \varepsilon \quad \text{using } m= \frac{\rm const}{\varepsilon^2} \text{ matvecs}.\]

If one wants high accuracy, this algorithm is expensive. To achieve just a 1% error (\varepsilon=0.01) requires roughly m=10,\!000 matvecs!
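For concreteness, here is a minimal MATLAB sketch of the Girard–Hutchinson estimator (one of several equivalent variants; this one uses random \pm 1 test vectors, and only the matvecs A\Omega are actually needed):

function tr_est = girard_hutchinson(A, m)
% Estimate tr(A) using m matvecs with random +/-1 test vectors
n = size(A, 1);
Omega = 2*randi(2, n, m) - 3;               % random +/-1 test vectors
tr_est = sum(sum(Omega .* (A*Omega))) / m;  % average of omega_j'*A*omega_j
end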

This state of affairs was greatly improved by Meyer, Musco, Musco, and Woodruff. Building upon previous work, they proposed the Hutch++ algorithm and proved it outputs an estimate \hat{\tr} satisfying the following bound:

(1)   \[\expect\left[\frac{|\hat{\tr}-\tr(A)|}{\tr(A)}\right] \le \varepsilon \quad \text{using } m= \frac{\rm const}{\varepsilon} \text{ matvecs}.\]

Now, we only require roughly m=100 matvecs to achieve 1% error! Our algorithm, XTrace, satisfies the same error guarantee (1) as Hutch++. On certain problems, XTrace can be quite a bit more accurate than Hutch++.
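To give a flavor of how the improved rate is achieved, here is a minimal sketch of the Hutch++ idea (a rough illustration, not the authors’ reference implementation): spend a third of the matvec budget building a low-rank approximation, compute its trace exactly, and apply Girard–Hutchinson only to the remainder.

function tr_est = hutch_plus_plus(A, m)
% Rough sketch of Hutch++: low-rank deflation plus Girard-Hutchinson
n = size(A, 1);
q = floor(m/3);                      % a third of the matvec budget per phase
S = 2*randi(2, n, q) - 3;            % sketching vectors (q matvecs for A*S)
G = 2*randi(2, n, q) - 3;            % Girard-Hutchinson test vectors
[Q, ~] = qr(A*S, 0);                 % orthonormal basis for range(A*S)
G = G - Q*(Q'*G);                    % deflate the test vectors
tr_est = trace(Q'*(A*Q)) ...         % exact trace of the low-rank part (q matvecs)
       + sum(sum(G .* (A*G))) / q;   % Girard-Hutchinson on the remainder (q matvecs)
end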

The MMMW Trace Estimation “Speed Limit”

Given the dramatic improvement of Hutch++ and XTrace over Girard–Hutchinson, it is natural to hope: Is there an algorithm that does even better than Hutch++ and XTrace? For instance, is there an algorithm satisfying an even slightly better error bound of the form

    \[\expect\left[\frac{|\hat{\tr}-\tr(A)|}{\tr(A)}\right] \le \varepsilon \quad \text{using } m= \frac{\rm const}{\varepsilon^{0.999}} \text{ matvecs}?\]

Unfortunately not. Hutch++ and XTrace are essentially as good as it gets.

Let’s add some fine print. Consider an algorithm for the trace estimation problem. Whenever the algorithm wants, it can present a vector x_i and receive back Ax_i. The algorithm is allowed to be adaptive: It can use the matvecs Ax_1,\ldots,Ax_s it has already collected to decide which vector x_{s+1} to present next. We measure the cost of the algorithm in terms of the number of matvecs alone, and the algorithm knows nothing about the psd matrix A other than what it learns from matvecs.

One final stipulation:

Simple entries assumption. We assume that the entries of the vectors x_i presented by the algorithm are real numbers between -1 and 1 with up to b digits after the decimal place.

To get a feel for this simple entries assumption, suppose we set b=2. Then (-0.92,0.17) would be an allowed input vector, but (0.232,-0.125) would not be (too many digits after the decimal place). Similarly, (18.3,2.4) would not be valid because its entries exceed 1. The simple entries assumption is reasonable as we typically represent numbers on digital computers by storing a fixed number of digits of accuracy.1We typically represent numbers on digital computers by floating point numbers, which essentially represent numbers using scientific notation like 1.3278123 \times 10^{27}. For this analysis of trace estimation, we use fixed point numbers like 0.23218 (no powers of ten allowed)!

With all these stipulations, we are ready to state the “speed limit” for trace estimation proved by Meyer, Musco, Musco, and Woodruff:

Informal theorem (Meyer, Musco, Musco, Woodruff). Under the assumptions above, there is no trace estimation algorithm producing an estimate \hat{\tr} satisfying

    \[\expect\left[\frac{|\hat{\tr}-\tr(A)|}{\tr(A)}\right] \le \varepsilon \quad \text{using } m= \frac{\rm const}{\varepsilon^{0.999}} \text{ matvecs}.\]

We will see a slightly sharper version of the theorem below, but this statement captures the essence of the result.

Communication Complexity

To prove the MMMW theorem, we have to take a journey to the beautiful subject of communication complexity. The story is this. Alice and Bob are interested in solving a computational problem together. Alice has her input x and Bob has his input y, and they are interested in computing a function f(x,y) of both their inputs.

Unfortunately for the two of them, Alice and Bob are separated by a great distance, and can only communicate by sending single bits (0 or 1) of information over a slow network connection. Every bit of communication is costly. The field of communication complexity is dedicated to determining how efficiently Alice and Bob are able to solve problems of this form.

The Gap-Hamming problem is one example of a problem studied in communication complexity. As inputs, Alice and Bob receive vectors x,y \in \{\pm 1\}^n with +1 and -1 entries from a third party Eve. Eve promises Alice and Bob that their vectors x and y satisfy one of two conditions:

(2)   \[\text{Case 0: } x^\top y \ge\sqrt{n} \quad \text{or} \quad \text{Case 1: } x^\top y \le -\sqrt{n}. \]

Alice and Bob must work together, sending as few bits of communication as possible, to determine which case they are in.

There’s one simple solution to this problem: First, Bob sends his whole input vector y to Alice. Each entry of y takes one of the two values \pm 1 and can therefore be communicated in a single bit. Having received y, Alice computes x^\top y, determines whether they are in case 0 or case 1, and sends Bob a single bit to communicate the answer. This procedure requires n+1 bits of communication.

Can Alice and Bob still solve this problem with many fewer than n bits of communication, say \sqrt{n} bits? Unfortunately not. The following theorem of Chakrabarti and Regev shows that roughly n bits of communication are needed to solve this problem:

Theorem (Chakrabarti–Regev). Any algorithm that solves the Gap-Hamming problem, succeeding with at least 2/3 probability for every pair of inputs x and y (satisfying one of the conditions (2)), must use \Omega(n) bits of communication.

Here, \Omega(n) is big-Omega notation, closely related to big-O notation \order(n) and big-Theta notation \Theta(n). For those less familiar, it can be helpful to interpret \Omega(n), \order(n), and \Theta(n) as all standing for “proportional to n”. In plain language, the theorem of Chakrabarti and Regev states that there is no algorithm for the Gap-Hamming problem that is much more effective than the basic algorithm in which Bob sends his whole input to Alice (in the sense of requiring fewer than \order(n) bits of communication).

Reducing Gap-Hamming to Trace Estimation

This whole state of affairs is very sad for Alice and Bob, but what does it have to do with trace estimation? Remarkably, we can use hardness of the Gap-Hamming problem to show there’s no algorithm that fundamentally improves on Hutch++ and XTrace. The argument goes something like this:

  1. If there were a trace estimation algorithm fundamentally better than Hutch++ and XTrace, we could use it to solve Gap-Hamming in fewer than \order(n) bits of communication.
  2. But no algorithm can solve Gap-Hamming in fewer than \order(n) bits of communication.
  3. Therefore, no trace estimation algorithm is fundamentally better than Hutch++ and XTrace.

Step 2 is the work of Chakrabarti and Regev, and step 3 follows logically from 1 and 2. Therefore, we are left to complete step 1 of the argument.

Protocol

Assume we have access to a really good trace estimation algorithm. We will use it to solve the Gap-Hamming problem. For simplicity, assume n is a perfect square. The basic idea is this:

  • Have Alice and Bob reshape their inputs x,y \in \{\pm 1\}^n into matrices X,Y\in\{\pm 1\}^{\sqrt{n}\times \sqrt{n}}, and consider (but do not form!) the positive semidefinite matrix

        \[A = (X+Y)^\top (X+Y).\]

  • Observe that

        \[\tr(A) = \tr(X^\top X) + 2\tr(X^\top Y) + \tr(Y^\top Y) = 2n + 2(x^\top y).\]

    Thus, the two cases in (2) can be equivalently written in terms of \tr(A):

    (2′)   \[\text{Case 0: } \tr(A)\ge 2n + 2\sqrt{n} \quad \text{or} \quad \text{Case 1: } \tr(A) \le 2n-2\sqrt{n}. \]

  • By working together, Alice and Bob can implement a trace estimation algorithm. Alice will be in charge of running the algorithm, but Alice and Bob must work together to compute matvecs. (Details below!)
  • Using the output of the trace estimation algorithm, Alice determines whether they are in case 0 or 1 (i.e., whether \tr(A) \ge 2n+2\sqrt{n} or \tr(A) \le 2n-2\sqrt{n}) and sends the result to Bob.

To complete this procedure, we just need to show how Alice and Bob can implement the matvec procedure using minimal communication. Suppose Alice and Bob want to compute Az for some vector z with entries between -1 and 1 with up to b decimal digits. First, convert z to a vector w\coloneqq 10^b z whose entries are integers between -10^b and 10^b. Since Az = 10^{-b}Aw, interconverting between Az and Aw is trivial. Alice and Bob’s procedure for computing Aw is as follows:

  • Alice sends Bob w.
  • Having received w, Bob forms Yw and sends it to Alice.
  • Having received Yw, Alice computes v\coloneqq Xw+Yw and sends it to Bob.
  • Having received v, Bob computes Y^\top v and sends it to Alice.
  • Alice forms Aw = X^\top v + Y^\top v.

Because X and Y are \sqrt{n}\times \sqrt{n} and have \pm 1 entries, all vectors computed in this procedure are vectors of length \sqrt{n} with integer entries between -4n\cdot 10^b and 4n\cdot 10^b. We conclude the communication cost for one matvec is T\coloneqq\Theta((b+\log n)\sqrt{n}) bits.
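As a sanity check, here is a toy MATLAB simulation of one matvec of the protocol (run on a single machine, with no actual communication; X and Y are the reshaped inputs and w is the integer vector described above):

% Toy simulation of one matvec of the protocol (all on one machine)
v  = X*w + Y*w;      % Alice's vector v, formed after Bob sends her Y*w
Aw = X'*v + Y'*v;    % equals (X+Y)'*(X+Y)*w = A*w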

Analysis

Consider an algorithm we’ll call BestTraceAlgorithm. Given any accuracy parameter \varepsilon > 0, BestTraceAlgorithm requires at most m = m(\varepsilon) matvecs and, for any positive semidefinite input matrix A of any size, produces an estimate \hat{\tr} satisfying

(3)   \[\expect\left[\frac{|\hat{\tr}-\tr(A)|}{\tr(A)}\right] \le \varepsilon.\]

We assume that BestTraceAlgorithm is the best possible algorithm in the sense that no algorithm can achieve (3) on all (positive semidefinite) inputs with m' < m matvecs.

To solve the Gap-Hamming problem, Alice and Bob just need enough accuracy in their trace estimation to distinguish between cases 0 and 1. In particular, if

    \[\left| \frac{\hat{\tr} - \tr(A)}{\tr(A)} \right| \le \frac{1}{\sqrt{n}},\]

then Alice and Bob can distinguish between cases 0 and 1 in (2′).

Suppose that Alice and Bob apply trace estimation to solve the Gap-Hamming problem, using m matvecs in total. The total communication is m\cdot T = \order(m(b+\log n)\sqrt{n}) bits. Chakrabarti and Regev showed that at least cn bits of communication (for some c>0) are needed to solve the Gap-Hamming problem with 2/3 probability. Thus, if m\cdot T < cn, then Alice and Bob fail to solve the Gap-Hamming problem with probability at least 1/3. Therefore,

    \[\text{If } m < \frac{cn}{T} = \Theta\left( \frac{\sqrt{n}}{b+\log n} \right), \quad \text{then } \left| \frac{\hat{\tr} - \tr(A)}{\tr(A)} \right| > \frac{1}{\sqrt{n}} \text{ with probability at least } \frac{1}{3}.\]

The contrapositive of this statement is:

    \[\text{If }\left| \frac{\hat{\tr} - \tr(A)}{\tr(A)} \right| \le \frac{1}{\sqrt{n}}\text{ with probability at least } \frac{2}{3}, \quad \text{then } m \ge \Theta\left( \frac{\sqrt{n}}{b+\log n} \right).\]


Say Alice and Bob run BestTraceAlgorithm with parameter \varepsilon = \tfrac{1}{3\sqrt{n}}. Then, by (3) and Markov’s inequality,

    \[\left| \frac{\hat{\tr} - \tr(A)}{\tr(A)} \right| \le \frac{1}{\sqrt{n}} \quad \text{with probability at least }\frac{2}{3}.\]
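To spell out the Markov step: applying Markov’s inequality to the nonnegative random variable |\hat{\tr}-\tr(A)|/\tr(A) and invoking (3) with \varepsilon = 1/(3\sqrt{n}) gives

    \[\prob\left\{ \frac{|\hat{\tr}-\tr(A)|}{\tr(A)} > \frac{1}{\sqrt{n}} \right\} \le \sqrt{n}\cdot\expect\left[\frac{|\hat{\tr}-\tr(A)|}{\tr(A)}\right] \le \sqrt{n}\cdot \frac{1}{3\sqrt{n}} = \frac{1}{3},\]

which is exactly the displayed bound.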

Therefore, BestTraceAlgorithm requires at least

    \[m \ge \Theta\left( \frac{\sqrt{n}}{b+\log n} \right) \text{ matvecs}.\]

Using the fact that we’ve set \varepsilon = 1/(3\sqrt{n}), we conclude that any trace estimation algorithm, even BestTraceAlgorithm, requires

    \[m \ge \Theta \left( \frac{1}{\varepsilon (b+\log(1/\varepsilon))} \right) \text{ matvecs}.\]

In particular, no trace estimation algorithm can achieve mean relative error \varepsilon using even \order(1/\varepsilon^{0.999}) matvecs. This proves the MMMW theorem.

Five Interpretations of Kernel Quadrature

I’m excited to share that my paper Kernel quadrature with randomly pivoted Cholesky, joint with Elvira Moreno, has been accepted to NeurIPS 2023 as a spotlight.

Today, I want to share with you a little about the kernel quadrature problem. To avoid this post getting too long, I’m going to write this post assuming familiarity with the concepts of reproducing kernel Hilbert spaces and Gaussian processes.

Integration and Quadrature

Integration is one of the most widely used operations in mathematics and its applications. As such, it is a basic problem of wide interest to develop numerical methods for evaluating integrals.

In this post, we will consider a quite general integration problem. Let \Omega\subseteq \real^d be a domain and let \mu be a (finite Borel) measure on \Omega. We consider the task of evaluating

    \[I[f] = \int_\Omega f(x) g(x) \, \mathrm{d}\mu(x).\]

One can imagine that g, \mu, and \Omega are fixed, but we may want to evaluate this same integral I[f] for multiple different functions f.

To evaluate, we will design a quadrature approximation to the integral I[f]:

    \[\hat{I}_{w,s}[f] = \sum_{i=1}^n w_i f(s_i) \approx I[f].\]

Concretely, we wish to find real numbers w = (w_1,\ldots,w_n) \in \real^n and points s = (s_1,\ldots,s_n) \in \Omega^n such that the approximation \hat{I}_{w,s}[f] \approx I[f] is accurate.

Smoothness and Reproducing Kernel Hilbert Spaces

As is frequently the case in computational mathematics, the accuracy we can expect for this integration problem depends on the smoothness of the integrand f. The more smooth f is, the more accurately we can expect to compute I[f] for a given budget of computational effort.

In this post, we will measure smoothness using the reproducing kernel Hilbert space (RKHS) formalism. Let \mathcal{H} be an RKHS with norm \norm{\cdot}. We can interpret the norm as assigning a roughness \norm{f} to each function f. If \norm{f} is large, then f is rough; if \norm{f} is small, then f is smooth.

Associated to the RKHS \mathcal{H} is the titular reproducing kernel k. The kernel is a bivariate function k:\Omega\times\Omega\to\real. It is related to the RKHS inner product \langle\cdot,\cdot\rangle by the reproducing property

    \[f(x)=\langle f, k(x,\cdot)\rangle \quad \text{for every }f\in\mathcal{H},x\in\Omega.\]

Here, k(x,\cdot) represents the univariate function obtained by setting the first input of k to be x.

The Ideal Weights

To design a quadrature rule, we have to set the nodes s = (s_1,\ldots,s_n) \in \Omega^n and weights w = (w_1,\ldots,w_n)\in\real^n. Let’s first assume that the nodes s are fixed, and talk about how to pick the weights w.

There’s one choice of weights w^\star that we’ll call the ideal weights. There are (at least) five equivalent ways of characterizing the ideal weights. We’ll present all of them. As an exercise, you can try and convince yourself that these characterizations are equivalent, giving rise to the same weights.

Interpretation 1: Exactness

A standard way of designing quadrature rules is to make them exact (i.e., error-free) for some class of functions. For instance, many classical quadrature rules are exact for polynomials of degree up to n-1.

For kernel quadrature, it makes sense to design the quadrature rule to be exact for the kernel function at the selected nodes. That is, we require

    \[\hat{I}_{w^\star,s}[k(s_i,\cdot)]=I[k(s_i,\cdot)] \quad \text{for } i=1,2,\ldots,n.\]

Enforcing exactness gives us n linear equations for the n unknowns w^\star_1,\ldots,w^\star_n:

    \[\sum_{j=1}^n k(s_i,s_j)w^\star_j = \int_\Omega k(s_i,x) g(x)\,\mathrm{d}\mu(x) \quad \text{for }i=1,2,\ldots,n.\]

Under mild conditions, this system of linear equations is uniquely solvable, and the solution w^\star\in\real^n is the ideal weights.
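In code, computing the ideal weights is a single linear solve. Here is a minimal MATLAB sketch, where kernel_matrix and kernel_integrals are hypothetical helpers that evaluate the kernel values k(s_i,s_j) and the integrals \int_\Omega k(s_i,x)g(x)\,\mathrm{d}\mu(x):

K = kernel_matrix(s, s);   % n x n kernel matrix with entries k(s_i,s_j)
z = kernel_integrals(s);   % n x 1 vector with entries int k(s_i,x) g(x) dmu(x)
w_star = K \ z;            % ideal weights: solve K*w = z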

Interpretation 2: Interpolate and Integrate

Here’s another very classical way of designing a quadrature rule. First, interpolate the function values (s_i,f(s_i)) at the nodes, obtaining an interpolant \hat{f}. Then, obtain an approximation to the integral by integrating the interpolant:

    \[\hat{I}_{w^\star,s}[f] \coloneqq \int_\Omega \hat{f}(x) g(x) \, \mathrm{d}\mu(x).\]


In our context, the appropriate interpolation method is kernel interpolation.1Kernel interpolation is also called Gaussian process regression or kriging though (confusingly) these terms can also refer to slightly different methods. It is the regularization-free limit of kernel ridge regression. The kernel interpolant is defined to be the minimum-norm function \hat{f} that interpolates the data:

    \[\hat{f} = \argmin \{ \norm{h} : h(s_i) = f(s_i) \text{ for } i=1,\ldots,n\}.\]

Remarkably, this infinite-dimensional problem has a tractably computable solution. In fact, \hat{f} is the unique function of the form

    \[\hat{f} = \sum_{i=1}^n \alpha_i k(\cdot,s_i)\]

that agrees with f on the points s_1,\ldots,s_n. With a little algebra, you can show that the integral of \hat{f} is

    \[I[\hat{f}] = \sum_{i=1}^n w^\star_i f(s_i),\]

where w^\star are the ideal weights.
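Here is one way to carry out that algebra. Write the interpolation conditions as K\alpha = f(s), where K\in\real^{n\times n} has entries K_{ij} = k(s_i,s_j), f(s) \coloneqq (f(s_1),\ldots,f(s_n)), and let z\in\real^n collect the integrals z_i = \int_\Omega k(s_i,x)g(x)\,\mathrm{d}\mu(x). Using the symmetry of k,

    \[I[\hat{f}] = \sum_{i=1}^n \alpha_i \int_\Omega k(x,s_i)g(x)\,\mathrm{d}\mu(x) = z^\top \alpha = z^\top K^{-1}f(s) = (K^{-1}z)^\top f(s) = \sum_{i=1}^n w^\star_i f(s_i),\]

since the ideal weights from interpretation 1 satisfy Kw^\star = z.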

Interpretation 3: Minimizing the Worst-Case Error

Define the worst-case error of weights w and nodes s to be

    \[\operatorname{Err}(w,s)=\sup_{\norm{f}\le 1}\left| I[f] - \hat{I}_{w,s}[f]\right|.\]

The quantity \operatorname{Err}(w,s) is the highest possible quadrature error for a function f\in\mathcal{H} of norm at most 1.

Having defined the worst-case error, the ideal weights are precisely the weights that minimize this quantity

    \[w^\star=\operatorname*{argmin}_{w\in\real^n}\operatorname{Err}(w,s).\]

Interpretation 4: Minimizing the Mean-Square Error

The next two interpretations of the ideal weights will adopt a probabilistic framing. A Gaussian process is a random function f such that f’s values at any collection of points are (jointly) Gaussian random variables. We write f\sim \operatorname{GP}(0,k) for a mean-zero Gaussian process with covariance function k:

    \[\Cov(f(x),f(y))=k(x,y)\quad \text{for every } x,y\in\Omega.\]


Define the mean-square quadrature error of weights w and nodes s to be

    \[\operatorname{MSE}(w,s)\coloneqq \expect_{f\sim\operatorname{GP}(0,k)} \left( I[f] - \hat{I}_{w,s}[f] \right)^2.\]

The mean-square error reports the expected squared quadrature error over all functions f drawn from a Gaussian process with covariance function k.

Pleasantly, the mean-square error is equal to the square of the worst-case error

    \[\operatorname{MSE}(w,s)=\operatorname{Err}(w,s)^2.\]

As such, the ideal weights also minimize the mean-square error

    \[w^\star=\operatorname*{argmin}_{w\in\real^n}\operatorname{MSE}(w,s).\]

Interpretation 5: Conditional Expectation

For our last interpretation, again consider a Gaussian process h\sim \operatorname{GP}(0,k). The integral of this random function I[h] is a random variable. To numerically integrate a function f, compute the expectation of I[h] conditional on h agreeing with f at the quadrature nodes:

    \[\hat{I}_{w^\star,s}[f]\coloneqq \expect_{h\sim\operatorname{GP}(0,k)}[I[h] \mid h(s_i)=f(s_i) \text{ for } i=1,\ldots,n].\]

One can show that this procedure yields the quadrature scheme with the ideal weights.

Conclusion

We’ve just seen five sensible ways of defining the ideal weights for quadrature in a general reproducing kernel Hilbert space. Remarkably, all five lead to exactly the same choice of weights. To me, these five equivalent characterizations give me more confidence that the ideal weights really are the “right” or “natural” choice for kernel quadrature.

That said, there are other reasonable requirements that we might want to impose on the weights. For instance, if \mu is a probability measure and g\equiv 1, it is reasonable to add an additional constraint that the weights w lie in the probability simplex

    \[w\in\Delta\coloneqq \left\{ p\in\real^n_+ : \sum_{i=1}^n p_i = 1\right\}.\]

With this additional stipulation, a quadrature rule can be interpreted as integrating f against a discrete probability measure \hat{\mu}=\sum_{i=1}^n w_i\delta_{s_i}; thus, in effect, quadrature amounts to approximating one probability measure \mu by another \hat{\mu}. Additional constraints such as these can easily be imposed when using the optimization characterizations 3 and 4 of the ideal weights. See this paper for details.

What About the Nodes?

We’ve spent a lot of time talking about how to pick the quadrature weights, but how should we pick the nodes s\in\Omega^n? To pick the nodes, it seems sensible to try and minimize the worst-case error \operatorname{Err}(w^\star,s) with the ideal weights w^\star. For this purpose, we can use the following formula:

    \[\operatorname{Err}(w^\star,s) = \norm{\int_\Omega (k(\cdot,x) - \hat{k}_s(\cdot,x)) g(x) \, \mathrm{d}\mu(x)}.\]

Here, \hat{k}_s is the Nyström approximation to the kernel k induced by the nodes s, defined to be

    \[\hat{k}_s(x,y) = k(x,s) k(s,s)^{-1} k(s,y).\]

We have written k(s,s) for the kernel matrix with ij entry k(s_i,s_j) and k(x,s) and k(s,y) for the row and column vectors with ith entry k(x,s_i) and k(s_i,y).
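As a minimal illustration, the Nyström approximation can be evaluated pointwise in a couple of lines of MATLAB, assuming a hypothetical helper kernel_matrix(x, y) that returns the matrix of kernel values between two lists of points:

Kss = kernel_matrix(s, s);                                         % kernel matrix k(s,s)
khat = @(x, y) kernel_matrix(x, s) * (Kss \ kernel_matrix(s, y));  % evaluates khat_s(x,y)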

I find the appearance of the Nyström approximation in this context to be surprising and delightful. Previously on this blog, we’ve seen (column) Nyström approximation in the context of matrix low-rank approximation. Now, a continuum analog of the matrix Nyström approximation has appeared in the error formula for numerical integration.

The appearance of the Nyström approximation in the kernel quadrature error \operatorname{Err}(w^\star,s) also suggests a strategy for picking the nodes.

Node selection strategy. We should pick the nodes s to make the Nyström approximation \hat{k}_s \approx k as accurate as possible.

The closer \hat{k}_s is to k, the smaller the function k(\cdot,x) - \hat{k}_s(\cdot,x) is and, thus, the smaller the error

    \[\operatorname{Err}(w^\star,s) = \norm{\int_\Omega (k(\cdot,x) - \hat{k}_s(\cdot,x)) g(x) \, \mathrm{d}\mu(x)}.\]

Fortunately, we have randomized matrix algorithms for picking good nodes for matrix Nyström approximation such as randomly pivoted Cholesky, ridge leverage score sampling, and determinantal point process sampling; maybe these matrix tools can be ported to the continuous kernel setting?

Indeed, all three of these algorithms—randomly pivoted Cholesky, ridge leverage score sampling, and determinantal point process sampling—have been studied for kernel quadrature. The first of these algorithms, randomly pivoted Cholesky, is the subject of our paper. We show that this simple, adaptive sampling method produces excellent nodes for kernel quadrature. Intuitively, randomly pivoted Cholesky is effective because it is repulsive: After having picked nodes s_1,\ldots,s_i, it has a high probability of placing the next node s_{i+1} far from the previously selected nodes.
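For readers who want to see the mechanics, here is a minimal sketch of randomly pivoted Cholesky on a finite candidate set (the matrix version of the algorithm; our paper’s continuous version replaces the explicit kernel matrix with rejection sampling). The input K is the kernel matrix of the candidate points and n is the number of nodes to select; the function name is just for illustration.

function idx = rpcholesky(K, n)
% Select n pivots (nodes) by randomly pivoted Cholesky
N = size(K, 1);
d = diag(K);                   % diagonal of the residual kernel matrix
F = zeros(N, n);               % partial Cholesky factor
idx = zeros(n, 1);
for i = 1:n
    idx(i) = randsample(N, 1, true, d);                  % sample pivot ~ residual diagonal
    col = K(:, idx(i)) - F(:, 1:i-1) * F(idx(i), 1:i-1)';
    F(:, i) = col / sqrt(col(idx(i)));
    d = max(d - F(:, i).^2, 0);                          % update residual diagonal
end
end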

The following image shows 20 nodes selected by randomly pivoted Cholesky in a crescent-shaped region. The cyan–pink shading denotes the probability distribution for picking the next node. We see that the center of the crescent does not have any nodes, and thus is most likely to receive a node during the next round of sampling.

In our paper, we demonstrate—empirically and theoretically—that randomly pivoted Cholesky produces excellent nodes for quadrature. We also discuss efficient rejection sampling algorithms for sampling nodes with the randomly pivoted Cholesky distribution. Check out the paper for details!

Which Sketch Should I Use?

This is the second of a sequence of two posts on sketching, which I’m doing on the occasion of my new paper on the numerical stability of the iterative sketching method. For more on what sketching is and how it can be used to solve computational problems, I encourage you to check out the first post.

The goals of this post are more narrow. I seek to answer the question:

Which sketching matrix should I use?

To cut to the chase, my answer to this question is:

Sparse sign embeddings are a sensible default for sketching.

There are certainly cases when sparse sign embeddings are not the best type of sketch to use, but I hope to convince you that they’re a good sketching matrix to use for most purposes.

Experiments

Let’s start things off with some numerical experiments.1Code for all numerical experiments can be found on the blogpost branch of the Github for my recent paper. We’ll compare three types of sketching matrices: Gaussian embeddings, a subsampled randomized trigonometric transform (SRTT), and sparse sign embeddings. See the last post for descriptions of these sketching matrices. I’ll discuss a few additional types of sketching matrices that require more discussion at the end of this post.

Recall that a sketching matrix S \in \real^{d\times n} seeks to compress a high-dimensional matrix A \in \real^{n\times k} or vector b\in\real^n to a lower-dimensional sketched matrix SA or vector Sb. The quality of a sketching matrix for a matrix A is measured by its distortion \varepsilon, defined to be the smallest number \varepsilon > 0 such that

    \[(1-\varepsilon) \norm{x} \le \norm{Sx} \le (1+\varepsilon) \norm{x} \quad \text{for every } x \in \operatorname{col}(A).\]

Here, \operatorname{col}(A) denotes the column space of the matrix A.
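Since \operatorname{col}(A) is a subspace, the distortion can be read off from the extreme singular values of S applied to an orthonormal basis of that subspace. Here is a minimal MATLAB sketch (for sizes where a dense QR of A is affordable):

[Q, ~] = qr(A, 0);                          % orthonormal basis for col(A)
sig = svd(S*Q);                             % singular values of the sketched basis
distortion = max(sig(1) - 1, 1 - sig(end)); % smallest eps with (1-eps) <= sig <= (1+eps)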

Timing

We begin with a timing test. We will measure three different times for each embedding:

  1. Construction. The time required to generate the sketching matrix S.
  2. Vector apply. The time to apply the sketch to a single vector.
  3. Matrix apply. The time to apply the sketch to an n\times 200 matrix.

We will test with input dimension n = 10^6 (one million) and output dimension d = 400. For the SRTT, we use the discrete cosine transform as our trigonometric transform. For the sparse sign embedding, we use a sparsity parameter \zeta = 8.

Here are the results (timings averaged over 20 trials):

Our conclusions are as follows:

  • Sparse sign embeddings are definitively the fastest to apply, being 3–20× faster than the SRTT and 74–100× faster than Gaussian embeddings.
  • Sparse sign embeddings are modestly slower to construct than the SRTT, but much faster to construct than Gaussian embeddings.

Overall, the conclusion is that sparse sign embeddings are the fastest sketching matrices by a wide margin: For an “end-to-end” workflow involving generating the sketching matrix S \in \real^{400\times 10^6} and applying it to a matrix A\in\real^{10^6\times 200}, sparse sign embeddings are 14× faster than SRTTs and 73× faster than Gaussian embeddings.2More timings are reported in Table 1 of this paper, which I credit for inspiring my enthusiasm for the sparse sign embedding.

Distortion

Runtime is only one measure of the quality of a sketching matrix; we also must care about the distortion \varepsilon. Fortunately, for practical purposes, Gaussian embeddings, SRTTs, and sparse sign embeddings all tend to have similar distortions. Therefore, we are free to use sparse sign embeddings, as they are typically the fastest.

Consider the following test. We generate a sparse random test matrix A of size n\times k for n = 10^5 and k = 50 using the MATLAB sprand function; we set the sparsity level to 1%. We then compare the distortion of Gaussian embeddings, SRTTs, and sparse sign embeddings across a range of sketching dimensions d between 100 and 10,000. We report the distortion averaged over 100 trials. The theoretically predicted value \varepsilon = \sqrt{k/d} (equivalently, d=k/\varepsilon^2) is shown as a dashed line.

I find these results remarkable. All three embeddings exhibit essentially the same distortion parameter predicted by the Gaussian theory.

It would be premature to declare success having only tested on one type of test matrix A. Consider the following four test matrices:

  • Sparse: The test matrix from above.
  • Dense: A\in\real^{10^6\times 50} is taken to be a matrix with independent standard Gaussian random values.
  • Khatri–Rao: A\in\real^{50^3\times 50} is taken to be the Khatri–Rao product of three Haar random orthogonal matrices.
  • Identity: A = \twobyone{I}{0} \in\real^{10^6\times 50} is taken to be the 50\times 50 identity matrix stacked onto a (10^6-50)\times 50 matrix of zeros.

The performance of sparse sign embeddings (again with sparsity parameter \zeta = 8) is shown below:

We see that for the first three test matrices, the performance closely follows the expected value \varepsilon = \sqrt{k/d}. However, for the last test matrix “Identity”, we see the distortion begins to slightly exceed this predicted distortion for d/k\ge 20.

To improve sparse sign embeddings for higher values of d/k, we can increase the value of the sparsity parameter \zeta. We recommend

    \[\zeta = \max \left( 8 , \left\lceil 2\sqrt{\frac{d}{k}} \right\rceil \right).\]

With this higher sparsity level, the distortion closely tracks \varepsilon = \sqrt{k/d} for all four test matrices:

Conclusion

Implemented appropriately (see below), sparse sign embeddings can be faster than other sketching matrices by a wide margin. The parameter choice \zeta = 8 is enough to ensure the distortion closely tracks \varepsilon = \sqrt{k/d} for most test matrices. For the toughest test matrices, a slightly larger sparsity parameter \zeta = \max(8, \lceil 2\sqrt{d/k}\rceil) can be necessary to achieve the optimal distortion.

While these tests are far from comprehensive, they are consistent with the uniformly positive results for sparse sign embeddings reported in the literature. We believe that this evidence supports the argument that sparse sign embeddings are a sensible default sketching matrix for most purposes.

Sparse Sign Embeddings: Theory and Practice

Given the highly appealing performance characteristics of sparse sign embeddings, it is worth saying a few more words about these embeddings and how they perform in both theory and practice.

Recall that a sparse sign embedding is a random matrix of the form

    \[S = \frac{1}{\sqrt{\zeta}} \begin{bmatrix} s_1 & \cdots & s_n \end{bmatrix}.\]

Each column s_i is independently and randomly generated to contain exactly \zeta nonzero entries in uniformly random positions. The value of each nonzero entry of s_i is chosen to be either +1 or -1 with 50/50 odds.

Parameter Choices

The goal of sketching is to reduce vectors of length n to a smaller dimension d. For linear algebra applications, we typically want to preserve all vectors in the column space of a matrix A up to distortion \varepsilon > 0:

    \[(1-\varepsilon) \norm{x} \le \norm{Sx} \le (1+\varepsilon) \norm{x} \quad \text{for all }x \in \operatorname{col}(A).\]

To use sparse sign embeddings, we must choose the parameters appropriately:

Given a dimension k and a target distortion \varepsilon, how do we pick d and \zeta?

Based on the experiments above (and other testing reported in the literature), we recommend the following parameter choices in practice:

    \[d = \frac{k}{\varepsilon^2} \quad \text{and} \quad \zeta = \max\left(8,\frac{2}{\varepsilon}\right).\]

The parameter choice \zeta = 8 is advocated by Tropp, Yurtsever, Udell, and Cevher; they mention experimenting with parameter values as small as \zeta = 2. The value \zeta = 1 has demonstrated deficiencies and should almost always be avoided (see below). The scaling d \approx k/\varepsilon^2 is derived from the analysis of Gaussian embeddings. As Martinsson and Tropp argue, the analysis of Gaussian embeddings tends to be reasonably descriptive of other well-designed random embeddings.

The best-known theoretical analysis, due to Cohen, suggests more cautious parameter setting for sparse sign embeddings:

    \[d = \mathcal{O} \left( \frac{k \log k}{\varepsilon^2} \right) \quad \text{and} \quad \zeta = \mathcal{O}\left( \frac{\log k}{\varepsilon} \right).\]

The main difference between Cohen’s analysis and the parameter recommendations above is the presence of the \log k factor and the lack of explicit constants in the O-notation.

Implementation

For good performance, it is imperative to store S using either a purpose-built data structure or a sparse matrix format (such as a MATLAB sparse matrix or scipy sparse array).

If a sparse matrix library is unavailable, then either pursue a dedicated implementation or use a different type of embedding; sparse sign embeddings are just as slow as Gaussian embeddings if they are stored in an ordinary non-sparse matrix format!

Even with a sparse matrix format, it can require care to generate and populate the random entries of the matrix S. Here, for instance, is a simple function for generating a sparse sign matrix in MATLAB:

function S = sparsesign_slow(d,n,zeta)
cols = kron((1:n)',ones(zeta,1)); % zeta nonzeros per column
vals = 2*randi(2,n*zeta,1) - 3; % uniform random +/-1 values
rows = zeros(n*zeta,1);
for i = 1:n
   rows((i-1)*zeta+1:i*zeta) = randsample(d,zeta);
end
S = sparse(rows, cols, vals / sqrt(zeta), d, n);
end

Here, we specify the rows, columns, and values of the nonzero entries before assembling them into a sparse matrix using the MATLAB sparse command. Since there are exactly \zeta nonzeros per column, the column indices are easy to generate. The values are uniformly \pm 1/\sqrt{\zeta} and can also be generated using a single line. The real challenge to generating sparse sign embeddings in MATLAB is the row indices, since each batch of \zeta row indices must be chosen uniformly at random between 1 and d without replacement. This is accomplished in the above code by a for loop, generating row indices \zeta at a time using the slow randsample function.

As its name suggests, sparsesign_slow is very slow. To generate a 200\times 10^7 sparse sign embedding with sparsity \zeta = 8 requires 53 seconds!

Fortunately, we can do (much) better. By rewriting the code in C and directly generating the sparse matrix in the CSC format MATLAB uses, generating this same 200 by 10 million sparse sign embedding takes just 0.4 seconds, a speedup of 130× over the slow MATLAB code. A C implementation of the sparse sign embedding that can be used in MATLAB using the MEX interface can be found in this file in the Github repo for my recent paper.

Other Sketching Matrices

Let’s leave off the discussion by mentioning other types of sketching matrices not considered in the empirical comparison above.

Coordinate Sampling

Another family of sketching matrices that we haven’t talked about are coordinate sampling sketches. A coordinate sampling sketch consists of indices 1\le i_1,\ldots,i_d\le n and weights w_1,\ldots,w_d \in \real. To apply S to a vector b, we select the entries of b at the indices i_1,\ldots,i_d and reweight them using the weights:

    \[b \in \real^n \longmapsto Sb = (w_1 b_{i_1},\ldots,w_db_{i_d}) \in \real^d.\]

Coordinate sampling is very appealing: Applying S to a matrix or vector requires no matrix multiplications or trigonometric transforms, just picking out some entries or rows and rescaling them.
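As a concrete illustration (specializing, for simplicity, to uniform sampling with replacement, which is just one possible choice of indices and weights), applying a coordinate sampling sketch looks like this in MATLAB:

idx = randi(n, d, 1);         % d row indices drawn uniformly with replacement
w = sqrt(n/d) * ones(d, 1);   % weights that make norms unbiased for uniform sampling
Sb = w .* b(idx);             % apply S to a vector b
SA = w .* A(idx, :);          % apply S to a matrix A (rescales each selected row)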

In order for coordinate sampling to be effective, we need to pick the right indices. Below, we compare two coordinate sampling sketching approaches, uniform sampling and leverage score sampling (both with replacement), to the sparse sign embedding with the suggested parameter setting \zeta = \max(8,\lceil 2\sqrt{d/k}\rceil) for the hard “Identity” test matrix used above.

We see right away that the uniform sampling fails dramatically on this problem. That’s to be expected. All but 50 of 100,000 rows of A are zero, so picking rows uniformly at random will give nonsense with very high probability. Uniform sampling can work well for matrices A which are “incoherent”, with all rows of A being of “similar importance”.

Conclusion (Uniform sampling). Uniform sampling is a risky method; it works excellently for some problems, but fails spectacularly for others. Use only with caution!

The ridge leverage score sampling method is more interesting. Unlike all the other sketches we’ve discussed in this post, ridge leverage score sampling is data-dependent. First, it computes a leverage score \ell_i for each row of A and then samples rows with probabilities proportional to these scores. For high enough values of d, ridge leverage score sampling performs slightly (but only slightly) worse than the characteristic \varepsilon = \sqrt{k/d} scaling we expect for an oblivious subspace embedding.

Ultimately, leverage score sampling has two disadvantages when compared with oblivious sketching matrices:

  • Higher distortion, higher variance. The distortion of a leverage score sketch is higher on average, and more variable, than that of an oblivious sketch, which achieves very consistent performance.
  • Computing the leverage scores. In order to implement this sketch, the leverage scores \ell_i have to first be computed or estimated. This is a nontrivial algorithmic problem; the most direct way of computing the leverage scores requires a QR decomposition at \mathcal{O}(nk^2) cost, much higher than other types of sketches.

There are settings when coordinate sampling methods, such as leverage scores, are well-justified:

  • Structured matrices. For some matrices A, the leverage scores might be very cheap to compute or approximate. In such cases, coordinate sampling can be faster than oblivious sketching.
  • “Active learning”. For some problems, each entry of the vector b or row of the matrix A may be expensive to generate. In this case, coordinate sampling has the distinct advantage that computing Sb or SA only requires generating the entries of b or rows of A for the d randomly selected indices i_1,\ldots,i_d.

Ultimately, oblivious sketching and coordinate sampling both have their place as tools in the computational toolkit. For the reasons described above, I believe that oblivious sketching should usually be preferred to coordinate sampling in the absence of a special reason to prefer the latter.

Tensor Random Embeddings

There are a number of sketching matrices with tensor structure; see here for a survey. These types of sketching matrices are very well-suited to tensor computations. If tensor structure is present in your application, I would put these types of sketches at the top of my list for consideration.

CountSketch

The CountSketch sketching matrix is the \zeta = 1 case of the sparse sign embedding. CountSketch has serious deficiencies, and should only be used in practice with extreme care.

Consider the “Identity” test matrix from above but with parameter k = 200, and compare the distortion of CountSketch to the sparse sign embedding with parameters \zeta=2,4,8:

We see that the distortion of the CountSketch remains persistently high at 100% until the sketching dimension d exceeds 4300, more than 20× larger than k.

CountSketch is bad because it requires d to be proportional to k^2/\varepsilon^2 in order to achieve distortion \varepsilon. For all of the other sketching matrices we’ve considered, we’ve only required d to be proportional to k/\varepsilon^2 (or perhaps (k\log k)/\varepsilon^2). This difference between d\propto k^2 for CountSketch and d\propto k for other sketching matrices is at the root of CountSketch’s woefully bad performance on some inputs.3Here, the symbol \propto is an informal symbol meaning “proportional to”.

The fact that CountSketch requires d\propto k^2 is simple to show. It’s basically a variant on the famous birthday problem. We include a discussion at the end of this post.4In fact, any oblivious sketching matrix with only 1 nonzero entry per column must have d\gtrsim k^2. This is Theorem 16 in the following paper.

There are ways of fixing the CountSketch. For instance, we can use a composite sketch S = S_2 \cdot S_1, where S_1 is a CountSketch of size k^2/\varepsilon^2 \times n and S_2 is a Gaussian sketching matrix of size k/\varepsilon^2 \times k^2/\varepsilon^2.5This construction is from this paper. For most applications, however, salvaging CountSketch doesn’t seem worth it; sparse sign embeddings with even \zeta = 2 nonzeros per column are already way more effective and reliable than a plain CountSketch.

Conclusion

By now, sketching is quite a big field, with dozens of different proposed constructions for sketching matrices. So which should you use? For most use cases, sparse sign embeddings are a good choice; they are fast to construct and apply and offer uniformly good distortion across a range of matrices.

Bonus: CountSketch and the Birthday Problem
The point of this bonus section is to prove the following (informal) theorem:

Let A be the “Identity” test matrix above. If S\in\real^{d\times n} is a CountSketch matrix with output dimension d\ll k^2, then the distortion of S for \operatorname{col}(A) is \varepsilon\ge 1 with high probability.

Let’s see why. By the structure of the matrix A, SA has the form

    \[SA = \begin{bmatrix} s_1 & \cdots & s_k \end{bmatrix}\]

where each vector s_i\in\real^d has a single \pm1 in a uniformly random location j_i.

Suppose that the indices j_1,\ldots,j_k are not all different from each other, say j_i = j_{i'}. Set x = e_i - e_{i'}, where e_i is the standard basis vector with 1 in position i and zeros elsewhere. Then, (SA)x = 0 but \norm{x} = \sqrt{2}. Thus, for the distortion relation

    \[(1-\varepsilon) \norm{x} =(1-\varepsilon)\sqrt{2} \le 0 = \norm{(SA)x}\]

to hold, \varepsilon \ge 1. Thus,

    \[\prob \{ \varepsilon \ge 1 \} \ge \prob \{ j_1,\ldots,j_k \text{ are not distinct} \}.\]

For a moment, let’s put aside our analysis of the CountSketch, and turn our attention to a famous puzzle, known as the birthday problem:

How many people have to be in a room before there’s at least a 50% chance that two people share the same birthday?

The counterintuitive or “paradoxical” answer: 23. This is much smaller than many people’s intuition, as there are 365 possible birthdays and 23 is much smaller than 365.

The reason for this surprising result is that, in a room of 23 people, there are {23 \choose 2} = 23\cdot 22/2=253 pairs of people. Each pair of people has a 1/365 chance of sharing a birthday, so the expected number of shared birthdays in a room of 23 people is 253/365 \approx 0.69. Since there are 0.69 shared birthdays on average in a room of 23 people, it is perhaps less surprising that 23 is the critical number at which the chance of two people sharing a birthday exceeds 50%.

Hopefully, the similarity between the birthday problem and CountSketch is becoming clear. Each pair of indices j_i and j_{i'} in CountSketch have a 1/d chance of being the same. There are {k\choose 2} \approx k^2/2 pairs of indices, so the expected number of equal indices j_i = j_{i'} is \approx k^2/2d. Thus, we should anticipate d \gtrapprox k^2 is required to ensure that j_1,\ldots,j_k are distinct with high probability.

Let’s calculate things out a bit more precisely. First, realize that

    \[\prob \{ j_1,\ldots,j_k \text{ are not distinct} \} = 1 - \prob \{ j_1,\ldots,j_k \text{ are distinct} \}.\]

To compute the probability that j_1,\ldots,j_k are distinct, imagine introducing each j_i one at a time. Assuming that j_1,\ldots,j_{i-1} are all distinct, the probability j_1,\ldots,j_i are distinct is just the probability that j_i does not take any of the i-1 values j_1,\ldots,j_{i-1}. This probability is

    \[\prob\{ j_1,\ldots,j_i \text{ are distinct} \mid j_1,\ldots,j_{i-1} \text{ are distinct}\} = 1 - \frac{i-1}{d}.\]

Thus, by the chain rule for probability,

    \[\prob \{ j_1,\ldots,j_k \text{ are distinct} \} = \prod_{i=1}^k \left(1 - \frac{i-1}{d} \right).\]

To bound this quantity, use the numeric inequality 1-x\le \exp(-x) for every x \in \real, obtaining

    \[\prob \{ j_1,\ldots,j_k \text{ are distinct} \} \le \prod_{i=0}^{k-1} \exp\left(-\frac{i}{d}\right) = \exp \left( -\frac{1}{d}\sum_{i=0}^{k-1} i \right) = \exp\left(-\frac{k(k-1)}{2d}\right).\]

Thus, we conclude that

    \[\prob \{ \varepsilon \ge 1 \} \ge 1-\prob \{ j_1,\ldots,j_k \text{ are distinct} \}\ge 1-\exp\left(-\frac{k(k-1)}{2d}\right).\]

Solving this inequality, we conclude that

    \[\prob\{\varepsilon \ge 1\} \ge \frac{1}{2} \quad \text{if} \quad d \le \frac{k(k-1)}{2\ln 2}.\]

This is a quantitative version of our informal theorem from earlier.

Does Sketching Work?

I’m excited to share that my paper, Fast and forward stable randomized algorithms for linear least-squares problems has been released as a preprint on arXiv.

With the release of this paper, now seemed like a great time to discuss a topic I’ve been wanting to write about for a while: sketching. For the past two decades, sketching has become a widely used algorithmic tool in matrix computations. Despite this long history, questions still seem to be lingering about whether sketching really works.

In this post, I want to take a critical look at the question “does sketching work”? Answering this question requires answering two basic questions:

  1. What is sketching?
  2. What would it mean for sketching to work?

I think a large part of the disagreement over the efficacy of sketching boils down to different answers to these questions. By considering different possible answers to these questions, I hope to provide a balanced perspective on the utility of sketching as an algorithmic primitive for solving linear algebra problems.

Sketching

In matrix computations, sketching is really a synonym for (linear) dimensionality reduction. Suppose we are solving a problem involving one or more high-dimensional vectors b \in \real^n or perhaps a tall matrix A\in \real^{n\times k}. A sketching matrix is a d\times n matrix S \in \real^{d\times n} where d \ll n. When multiplied into a high-dimensional vector b or tall matrix A, the sketching matrix S produces compressed or “sketched” versions Sb and SA that are much smaller than the original vector b and matrix A.

Let \mathsf{E}=\{x_1,\ldots,x_p\} be a collection of vectors. For S to be a “good” sketching matrix for \mathsf{E}, we require that S preserves the lengths of every vector in \mathsf{E} up to a distortion parameter \varepsilon>0:

(1)   \[(1-\varepsilon) \norm{x}\le\norm{Sx}\le(1+\varepsilon)\norm{x} \]

for every x in \mathsf{E}.

For linear algebra problems, we often want to sketch a matrix A. In this case, the appropriate set \mathsf{E} that we want our sketch to be “good” for is the column space of the matrix A, defined to be

    \[\operatorname{col}(A) \coloneqq \{ Ax : x \in \real^k \}.\]

Remarkably, there exist many sketching matrices that achieve distortion \varepsilon for \mathsf{E}=\operatorname{col}(A) with an output dimension of roughly d \approx k / \varepsilon^2. In particular, the sketching dimension d is proportional to the number of columns k of A. This is pretty neat! We can design a single sketching matrix S which preserves the lengths of all infinitely-many vectors Ax in the column space of A.

Sketching Matrices

There are many types of sketching matrices, each with different benefits and drawbacks. Many sketching matrices are based on randomized constructions in the sense that entries of S are chosen to be random numbers. Broadly, sketching matrices can be classified into two types:

  • Data-dependent sketches. The sketching matrix S is constructed to work for a specific set of input vectors \mathsf{E}.
  • Oblivious sketches. The sketching matrix S is designed to work for an arbitrary set of input vectors \mathsf{E} of a given size (i.e., \mathsf{E} has p elements) or dimension (\mathsf{E} is a k-dimensional linear subspace).

We will only discuss oblivious sketching for this post. We will look at three types of sketching matrices: Gaussian embeddings, subsampled randomized trignometric transforms, and sparse sign embeddings.

The details of how these sketching matrices are built and their strengths and weaknesses can be a little bit technical. All three constructions are independent from the rest of this article and can be skipped on a first reading. The main point is that good sketching matrices exist and are fast to apply: Reducing b\in\real^n to Sb\in\real^{d} requires roughly \mathcal{O}(n\log n) operations, rather than the \mathcal{O}(dn) operations we would expect to multiply a d\times n matrix and a vector of length n.1Here, \mathcal{O}(\cdot) is big O notation.

Gaussian Embeddings

The simplest type of sketching matrix S\in\real^{d\times n} is obtained by (independently) setting every entry of S to be a Gaussian random number with mean zero and variance 1/d. Such a sketching matrix is called a Gaussian embedding.2Here, embedding is a synonym for sketching matrix.

Benefits. Gaussian embeddings are simple to code up, requiring only a standard matrix product to apply to a vector Sb or matrix SA. Gaussian embeddings admit a clean theoretical analysis, and their mathematical properties are well-understood.

Drawbacks. Computing Sb for a Gaussian embedding costs \mathcal{O}(dn) operations, significantly slower than the other sketching matrices we will consider below. Additionally, generating and storing a Gaussian embedding can be computationally expensive.
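For concreteness, a Gaussian embedding is a one-liner in MATLAB (a minimal sketch, using the mean-zero, variance-1/d entries described above):

S = randn(d, n) / sqrt(d);   % independent N(0, 1/d) entries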

Subsampled Randomized Trigonometric Transforms

The subsampled randomized trigonometric transform (SRTT) sketching matrix takes a more complicated form. The sketching matrix is defined to be a scaled product of three matrices

    \[S = \sqrt{\frac{n}{d}} \cdot R \cdot F \cdot D.\]

These matrices have the following definitions:

  • D\in\real^{n\times n} is a diagonal matrix whose entries are each a random \pm 1 (chosen independently with equal probability).
  • F\in\real^{n\times n} is a fast trigonometric transform such as a fast discrete cosine transform.3One can also use the ordinary fast Fourier transform, but this results in a complex-valued sketch.
  • R\in\real^{d\times n} is a selection matrix. To generate R, let i_1,\ldots,i_d be a random subset of \{1,\ldots,n\}, selected without replacement. R is defined to be a matrix for which Rb = (b_{i_1},\ldots,b_{i_d}) for every vector b.

To store S on a computer, it is sufficient to store the n diagonal entries of D and the d selected coordinates i_1,\ldots,i_d defining R. Multiplication of S against a vector b should be carried out by applying each of the matrices R, F, and D in sequence, such as in the following MATLAB code:

% Generate randomness for S
signs = 2*randi(2,n,1)-3; % diagonal entries of D
idx = randsample(n,d); % indices i_1,...,i_d defining R

% Multiply S against b
c = signs .* b; % multiply by D
c = dct(c); % multiply by F
c = c(idx); % multiply by R
c = sqrt(n/d) * c; % scale

Benefits. S can be applied to a vector b in \mathcal{O}(n \log n) operations, a significant improvement over the \mathcal{O}(dn) cost of a Gaussian embedding. The SRTT has the lowest memory and random number generation requirements of any of the three sketches we discuss in this post.

Drawbacks. Applying S to a vector requires a good implementation of a fast trigonometric transform. Even with a high-quality trig transform, SRTTs can be significantly slower than sparse sign embeddings (defined below).4For an example, see Figure 2 in this paper. SRTTs are hard to parallelize.5Block SRTTs are more parallelizable, however. In theory, the sketching dimension should be chosen to be d \approx (k\log k)/\varepsilon^2, larger than for a Gaussian sketch.

Sparse Sign Embeddings

A sparse sign embedding takes the form

    \[S = \frac{1}{\sqrt{\zeta}} \begin{bmatrix} s_1 & s_2 & \cdots & s_n \end{bmatrix}.\]

Here, each column s_i is an independently generated random vector with exactly \zeta nonzero entries with random \pm 1 values in uniformly random positions. The result is a d\times n matrix with only \zeta \cdot n nonzero entries. The parameter \zeta is often set to a small constant like 8 in practice.6This recommendation comes from the following paper, and I’ve used this parameter setting successfully in my own work.

Benefits. By using a dedicated sparse matrix library, S can be applied to a vector b very quickly (in either \mathcal{O}(n) or \mathcal{O}(n\log k) operations, depending on parameter choices; see below). With a good sparse matrix library, sparse sign embeddings are often the fastest sketching matrix by a wide margin.

Drawbacks. To be fast, sparse sign embeddings require a good sparse matrix library. They require generating and storing roughly \zeta n random numbers, higher than SRTTs (roughly n numbers) but much less than Gaussian embeddings (dn numbers). In theory, the sketching dimension should be chosen to be d \approx (k\log k)/\varepsilon^2 and the sparsity should be set to \zeta \approx (\log k)/\varepsilon; the theoretically sanctioned sketching dimension (at least according to existing theory) is larger than for a Gaussian sketch. In practice, we can often get away with using d \approx k/\varepsilon^2 and \zeta=8.

Summary

Using either SRTTs or sparse sign embeddings, sketching a vector of length n down to d dimensions requires only \mathcal{O}(n) to \mathcal{O}(n\log n) operations. To apply a sketch to an entire n\times k matrix A thus requires roughly \mathcal{O}(nk) operations. Therefore, sketching offers the promise of speeding up linear algebraic computations involving A, which typically take \mathcal{O}(nk^2) operations.

How Can You Use Sketching?

The simplest way to use sketching is to first apply the sketch to dimensionality-reduce all of your data and then apply a standard algorithm to solve the problem using the reduced data. This approach to using sketching is called sketch-and-solve.

As an example, let’s apply sketch-and-solve to the least-squares problem:

(2)   \[\operatorname*{minimize}_{x\in\real^k} \norm{Ax - b}. \]

We assume this problem is highly overdetermined with A having many more rows n than columns k.

To solve this problem with sketch-and-solve, generate a good sketching matrix S for the set \mathsf{E} = \operatorname{col}(\onebytwo{A}{b}). Applying S to our data A and b, we get a dimensionality-reduced least-squares problem

(3)   \[\operatorname*{minimize}_{\hat{x}\in\real^k} \norm{(SA)\hat{x} - Sb}. \]

The solution \hat{x} is the sketch-and-solve solution to the least-squares problem, which we can use as an approximate solution to the original least-squares problem.

Least-squares is just one example of the sketch-and-solve paradigm. We can also use sketching to accelerate other algorithms. For instance, we could apply sketch-and-solve to clustering. To cluster data points x_1,\ldots,x_p, first apply sketching to obtain Sx_1,\ldots,Sx_p and then apply an out-of-the-box clustering algorithm like k-means to the sketched data points, as in the sketch below.
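Here is a rough MATLAB sketch of that idea (with hypothetical names: the data points are stored as the columns of a matrix X_pts, and num_clusters is the desired number of clusters):

SX = S * X_pts;                       % sketch each data point (points stored as columns)
labels = kmeans(SX', num_clusters);   % cluster sketched points (kmeans expects rows = points)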

Does Sketching Work?

Most often, when sketching critics say “sketching doesn’t work”, what they mean is “sketch-and-solve doesn’t work”.

To address this question in a more concrete setting, let’s go back to the least-squares problem (2). Let x_\star denote the optimal least-squares solution and let \hat{x} be the sketch-and-solve solution (3). Then, using the distortion condition (1), one can show that

    \[\norm{A\hat{x} - b} \le \frac{1+\varepsilon}{1-\varepsilon} \norm{Ax_\star - b}.\]

If we use a sketching matrix with a distortion of \varepsilon = 1/3, then this bound tells us that

(4)   \[\norm{A\hat{x} - b} \le 2\norm{Ax_\star - b}. \]

Is this a good result or a bad result? Ultimately, it depends. In some applications, the quality of a putative least-squares solution \hat{x} can be assessed from the residual norm \norm{A\hat{x} - b}. For such applications, the bound (4) ensures that \norm{A\hat{x} - b} is at most twice \norm{Ax_\star-b}. Often, this means \hat{x} is a pretty decent approximate solution to the least-squares problem.

For other problems, the appropriate measure of accuracy is the so-called forward error \norm{\hat{x} - x_\star}, measuring how close \hat{x} is to x_\star. For these cases, it is possible that \norm{\hat{x} - x_\star} might be large even though the residuals are comparable (4).

Let’s see an example, using the MATLAB code from my paper:

[A, b, x, r] = random_ls_problem(1e4, 1e2, 1e8, 1e-4); % Random LS problem
S = sparsesign(4e2, 1e4, 8); % Sparse sign embedding
sketch_and_solve = (S*A) \ (S*b); % Sketch-and-solve
direct = A \ b; % MATLAB mldivide

Here, we generate a random least-squares problem of size 10,000 by 100 (with condition number 10^8 and residual norm 10^{-4}). Then, we generate a sparse sign embedding of dimension d = 400 (corresponding to a distortion of roughly \varepsilon \approx 1/2). Then, we compute the sketch-and-solve solution and, as reference, a “direct” solution by MATLAB’s \.

We compare the quality of the sketch-and-solve solution to the direct solution, using both the residual and forward error:

fprintf('Residuals: sketch-and-solve %.2e, direct %.2e, optimal %.2e\n',...
           norm(b-A*sketch_and_solve), norm(b-A*direct), norm(r))
fprintf('Forward errors: sketch-and-solve %.2e, direct %.2e\n',...
           norm(x-sketch_and_solve), norm(x-direct))

Here’s the output:

Residuals: sketch-and-solve 1.13e-04, direct 1.00e-04, optimal 1.00e-04
Forward errors: sketch-and-solve 1.06e+03, direct 8.08e-07

The sketch-and-solve solution has a residual norm of 1.13\times 10^{-4}, close to the direct method’s residual norm of 1.00\times 10^{-4}. However, the forward error of sketch-and-solve is 1\times 10^3, nine orders of magnitude larger than the direct method’s forward error of 8\times 10^{-7}.

Does sketch-and-solve work? Ultimately, it’s a question of what kind of accuracy you need for your application. If a small-enough residual is all that’s needed, then sketch-and-solve is perfectly adequate. If small forward error is needed, sketch-and-solve can be quite bad.

One way sketch-and-solve can be improved is by increasing the sketching dimension d and lowering the distortion \varepsilon. Unfortunately, improving the distortion of the sketch is expensive. Because of the relation d \approx k /\varepsilon^2, to decrease the distortion by a factor of ten requires increasing the sketching dimension d by a factor of one hundred! Thus, sketch-and-solve is really only appropriate when a low degree of accuracy is needed.

Iterative Sketching: Combining Sketching with Iteration

Sketch-and-solve is a fast way to get a low-accuracy solution to a least-squares problem. But it’s not the only way to use sketching for least-squares. One can also use sketching to obtain high-accuracy solutions by combining sketching with an iterative method.

There are many iterative methods for least-squares problems. Iterative methods generate a sequence of approximate solutions x_1,x_2,\ldots that we hope will converge at a rapid rate to the true least-squares solution, x_\star.

To use sketching to solve least-squares problems iteratively, we can use the following observation:

If S is a sketching matrix for \mathsf{E} = \operatorname{col}(A), then (SA)^\top SA \approx A^\top A.

Therefore, if we compute a QR factorization

    \[SA = QR,\]

then

    \[A^\top A \approx (SA)^\top (SA) = R^\top Q^\top Q R = R^\top R.\]

Notice that we used the fact that Q^\top Q = I since Q has orthonormal columns. The conclusion is that R^\top R \approx A^\top A.
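
As a quick numerical check of this approximation (a sketch under the assumption that a plain Gaussian embedding is an acceptable stand-in for whichever sketch you prefer):

n = 1e4; k = 50; d = 20*k;              % tall matrix and a generous sketching dimension
A = randn(n, k);
S = randn(d, n) / sqrt(d);              % Gaussian embedding for col(A)
[~, R] = qr(S*A, 0);                    % economy-size QR factorization of SA
norm(A'*A - R'*R) / norm(A'*A)          % small, on the order of the distortion

Note that the relative error is on the order of the distortion \varepsilon, not machine precision; the approximation is rough, which is exactly why we will iterate below.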

Let’s use the approximation R^\top R \approx A^\top A to solve the least-squares problem iteratively. Start off with the normal equations7As I’ve described in a previous post, it’s generally inadvisable to solve least-squares problems using the normal equations. Here, we’re just using the normal equations as a conceptual tool to derive an algorithm for solving the least-squares problem.

(5)   \[(A^\top A)x = A^\top b. \]

We can obtain an approximate solution to the least-squares problem by replacing A^\top A by R^\top R in (5) and solving. The resulting solution is

    \[x_0 = R^{-1} (R^{-\top}(A^\top b)).\]

This solution x_0 will typically not be a good solution to the least-squares problem (2), so we need to iterate. To do so, we’ll try and solve for the error x - x_0. To derive an equation for the error, subtract A^\top A x_0 from both sides of the normal equations (5), yielding

    \[(A^\top A)(x-x_0) = A^\top (b-Ax_0).\]

Now, to solve for the error, substitute R^\top R for A^\top A again and solve for x, obtaining a new approximate solution x_1:

    \[x\approx x_1 \coloneqq x_0 + R^{-1}(R^{-\top}(A^\top(b-Ax_0))).\]

We can now go another step: Derive an equation for the error x-x_1, approximate A^\top A \approx R^\top R, and obtain a new approximate solution x_2. Continuing this process, we obtain an iteration

(6)   \[x_{i+1} = x_i + R^{-1}(R^{-\top}(A^\top(b-Ax_i))).\]

This iteration is known as the iterative sketching method.8The name iterative sketching is for historical reasons. Original versions of the procedure involved taking a fresh sketch S_iA = Q_iR_i at every iteration. Later, it was realized that a single sketch SA suffices, albeit with a slower convergence rate. Typically, only having to sketch and QR factorize once is worth the slower convergence rate. If the distortion is small enough, this method converges at an exponential rate, yielding a high-accuracy least squares solution after a few iterations.

Let’s apply iterative sketching to the example we considered above. We show the forward error of the sketch-and-solve and direct methods as horizontal dashed purple and red lines. Iterative sketching begins at roughly the forward error of sketch-and-solve, with the error decreasing at an exponential rate until it reaches that of the direct method over the course of fourteen iterations. For this problem, at least, iterative sketching gives high-accuracy solutions to the least-squares problem!
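
For the curious, here is a minimal MATLAB sketch of this experiment, reusing A, b, x, and the sparse sign embedding S from above; the fixed iteration count is an arbitrary choice for illustration rather than a proper convergence test.

[~, R] = qr(S*A, 0);                      % economy-size QR factorization of SA
xi = R \ (R' \ (A'*b));                   % initial iterate x_0
for iter = 1:20                           % a fixed, modest number of refinement steps
    xi = xi + R \ (R' \ (A'*(b - A*xi))); % the update (6)
end
fprintf('Iterative sketching forward error: %.2e\n', norm(xi - x))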

To summarize, we’ve now seen two very different ways of using sketching:

  • Sketch-and-solve. Sketch the data A and b and solve the sketched least-squares problem (3). The resulting solution \hat{x} is cheap to obtain, but may have low accuracy.
  • Iterative sketching. Sketch the matrix A and obtain an approximation R^\top R = (SA)^\top (SA) to A^\top A. Use the approximation R^\top R to produce a sequence of better-and-better least-squares solutions x_i by the iteration (6). If we run for enough iterations q, the accuracy of the iterative sketching solution x_q can be quite high.

By combining sketching with more sophisticated iterative methods such as conjugate gradient and LSQR, we can get an even faster-converging least-squares algorithm, known as sketch-and-precondition. Here’s the same plot from above with sketch-and-precondition added; we see that sketch-and-precondition converges even faster than iterative sketching does!

“Does sketching work?” Even for a simple problem like least-squares, the answer is complicated:

A direct use of sketching (i.e., sketch-and-solve) leads to a fast, low-accuracy solution to least-squares problems. But sketching can achieve much higher accuracy for least-squares problems by combining sketching with an iterative method (iterative sketching and sketch-and-precondition).

We’ve focused on least-squares problems in this section, but these conclusions could hold more generally. If “sketching doesn’t work” in your application, maybe it would if it were combined with an iterative method.

Just How Accurate Can Sketching Be?

We left our discussion of sketching-plus-iterative-methods in the previous section on a positive note, but there is one last lingering question that remains to be answered. We stated that iterative sketching (and sketch-and-precondition) converge at an exponential rate. But our computers store numbers to only so much precision; in practice, the accuracy of an iterative method has to saturate at some point.

An (iterative) least-squares solver is said to be forward stable if, when run for a sufficient number q of iterations, the final accuracy \norm{x_q - x_\star} is comparable to the accuracy of a standard direct method for the least-squares problem like MATLAB’s \ command or Python’s scipy.linalg.lstsq. Forward stability is not about speed or rate of convergence but about the maximum achievable accuracy.

The stability of sketch-and-precondition was studied in a recent paper by Meier, Nakatsukasa, Townsend, and Webb. They demonstrated that, with the initial iterate x_0 = 0, sketch-and-precondition is not forward stable. The maximum achievable accuracy was worse than standard solvers by orders of magnitude! Maybe sketching doesn’t work after all?

Fortunately, there is good news:

  • The iterative sketching method is provably forward stable. This result is shown in my newly released paper; check it out if you’re interested!
  • If we use the sketch-and-solve method as the initial iterate x_0 = \hat{x} for sketch-and-precondition, then sketch-and-precondition appears to be forward stable in practice. No theoretical analysis supporting this finding is known at present.9For those interested, neither iterative sketching nor sketch-and-precondition are backward stable, which is a stronger stability guarantee than forward stability. Fortunately, forward stability is a perfectly adequate stability guarantee for many—but not all—applications.

These conclusions are pretty nuanced. To see what’s going on, it can be helpful to look at a graph:10For another randomly generated least-squares problem of the same size with condition number 10^{10} and residual 10^{-6}.

The performance of different methods can be summarized as follows: Sketch-and-solve can have very poor forward error. Sketch-and-precondition with the zero initialization x_0 = 0 is better, but still much worse than the direct method. Iterative sketching and sketch-and-precondition with x_0 = \hat{x} fare much better, eventually achieving an accuracy comparable to the direct method.

Put more simply, appropriately implemented, sketching works after all!

Conclusion

Sketching is a computational tool, just like the fast Fourier transform or the randomized SVD. Sketching can be used effectively to solve some problems. But, like any computational tool, sketching is not a silver bullet. Sketching allows you to dimensionality-reduce matrices and vectors, but it distorts them by an appreciable amount. Whether or not this distortion is something you can live with depends on your problem (how much accuracy do you need?) and how you use the sketch (sketch-and-solve or with an iterative method).

Markov Musings 4: Should You Be Lazy?

This post is part of a series, Markov Musings, about the mathematical analysis of Markov chains. See here for the first post in the series.


In the previous post, we proved the following convergence result for a reversible Markov chain:

    \[\chi^2\left(\rho^{(n)} \, \middle|\middle| \, \pi\right) \le \left( \max \{ \lambda_2, -\lambda_m \} \right)^{2n} \chi^2\left(\rho^{(0)} \, \middle|\middle| \, \pi\right).\]

Here, \rho^{(n)} denotes the distribution of the chain at time n, \pi denotes the stationary distribution, \chi^2(\cdot \mid\mid \cdot) denotes the \chi^2 divergence, and 1 = \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_m \ge -1 denote the decreasingly ordered eigenvalues of the Markov transition matrix P. To bound the rate of convergence to stationarity, we therefore must upper bound \lambda_2 and lower bound \lambda_m.

Having to prove separate bounds for two eigenvalues is inconvenient. In the next post, we will develop tools to bound \lambda_2. But what should we do about \lambda_m? Fortunately, there is a trick.

It Pays to be Lazy

Call a Markov chain lazy if, at every step, the chain has at least a 50% chance of staying put. That is, P_{ii} \ge 1/2 for every i.

Any Markov chain can be made into a lazy chain. At every step of the chain, flip a coin. If the coin is heads, perform a step of the Markov chain as normal, drawing the next step randomly according to the transition matrix P. If the coin comes up tails, do nothing and stay put.

Algebraically, the lazy chain described in the previous paragraph corresponds to replacing the original transition matrix P with the lazy transition matrix P^{\rm lazy} \coloneqq (I+P)/2, where I denotes the identity matrix. It is easy to see that the stationary distribution \pi for P is also a stationary distribution for P^{\rm lazy}:

    \[\pi^\top P^{\rm lazy} = \frac{1}{2} \pi^\top P + \frac{1}{2} \pi^\top I = \frac{1}{2} \pi^\top + \frac{1}{2} \pi^\top = \pi^\top.\]

The eigenvalues of the lazy transition matrix are

    \[\lambda_i^{\rm lazy} = \frac{1+\lambda_i}{2}.\]

In particular, since all of the original eigenvalues \lambda_i are \ge -1, all of the eigenvalues of the lazy chain are \ge 0. Consequently, the convergence of the lazy chain is controlled by \lambda_2^{\rm lazy} alone:

    \[\chi^2\left(\rho^{(n)} \, \middle|\middle| \, \pi\right) \le \left( \lambda_2^{\rm lazy} \right)^{2n} \chi^2\left(\rho^{(0)} \, \middle|\middle| \, \pi\right).\]

Since it can be easier to establish bounds on the second eigenvalue than on both the second and last, it can be convenient—for theoretical purposes especially—to work with lazy chains, using the construction above to enforce laziness if necessary.
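
As a tiny numerical check, using a made-up 3-state symmetric (hence reversible) chain, the eigenvalues of the lazy transition matrix are exactly the mapped values (1+\lambda_i)/2, all of them nonnegative:

P = [0 1/2 1/2; 1/2 0 1/2; 1/2 1/2 0];   % a symmetric, hence reversible, transition matrix
Plazy = (eye(3) + P) / 2;                % its lazy version
[sort(eig(P)), sort(eig(Plazy))]         % second column equals (1 + first column)/2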

Should You be Lazy?

We’ve seen one theoretical argument for why it pays to use lazy Markov chains. But should we use lazy Markov chains in practice? The answer ultimately depends on how we want to use the Markov chain.

Here are two very different ways we could use a Markov chain:

  • One-shot sampling. Run the Markov chain once for a fixed number of steps n, and use the result as an approximate sample from \pi.

For this use case, it is important that the output is genuinely a sample from \pi, and the possibility of large negative eigenvalues can significantly degrade the convergence. In the extreme case where -1 is an eigenvalue, the chain will fail to converge to stationarity at all. For this application, the lazy approach may make sense.

  • Averaging. Another way we can use a Markov chain is to compute the average value of a function f(x) over a random variable x\sim\pi drawn from the stationary distribution:

        \[\expect_{x\sim \pi} [f(x)] = \sum_{i=1}^m f(i) \pi_i.\]

    To do this, we approximate this expectation by the time average \hat{f}_N of the value of f over the first N values x_0,x_1,\ldots,x_{N-1} of the chain

        \[\hat{f}_{N}\coloneqq \frac{1}{N}\sum_{n=0}^{N-1} f(x_n).\]

As we shall see, this averaging process converges at an (asymptotically) slower rate for the lazy chain than the original chain. Therefore, for this application, we should typically just run the chain as-is, and not worry about making it lazy.
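
To preview this claim numerically, here is an illustrative MATLAB simulation (the chain, the function f, and all the parameters are made up for illustration): we compare the spread of the time average \hat{f}_N across many independent runs of a small symmetric chain and of its lazy version. One should expect the lazy chain to show a noticeably larger spread.

rng(1);
P = [0 1/2 1/2; 1/2 0 1/2; 1/2 1/2 0];                % symmetric chain, so pi is uniform
f = [1; 5; 10];                                       % an arbitrary function of the state
N = 1e4; reps = 200;                                  % steps per run and number of runs
for variant = 1:2
    Q = P; if variant == 2, Q = (eye(3) + P)/2; end   % first the original chain, then the lazy one
    C = cumsum(Q, 2);                                 % row-wise cumulative distributions
    avgs = zeros(reps, 1);
    for r = 1:reps
        s = 1; total = 0;                             % current state and running sum of f
        for t = 1:N
            total = total + f(s);
            s = find(rand <= C(s, :), 1);             % draw the next state
        end
        avgs(r) = total / N;
    end
    fprintf('std of time average (variant %d): %.3f\n', variant, std(avgs))
end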

The Case Against Laziness

To see why being lazy hurts us for the averaging problem, we will use the Markov chain central limit theorem. As with many random averaging processes, we expect that in the limit of a large number of steps N, the Markov chain average

    \[\hat{f}_N = \frac{1}{N}\sum_{n=0}^{N-1} f(x_n)\]

will become approximately normally distributed. This is the content of the Markov chain central limit theorem (CLT):

Informal theorem (Markov chain central limit theorem). Let x_0,x_1,x_2,\ldots denote the states of a Markov chain initialized in the stationary distribution x_0\sim\pi. For a large number N of steps, \hat{f}_N is approximately normally distributed with mean \expect_{x\sim \pi} [f(x)] and variance \sigma^2/N where

(1)   \[\sigma^2 = \Var[f(x_0)] + 2\sum_{n=1}^\infty \Cov(f(x_0),f(x_n)). \]

The Markov chain CLT shows that the variance of the Markov chain average \hat{f}_N depends not only on the variance of f in the stationary distribution, but also on the covariance between f(x_0) and later values f(x_n). The faster the covariance decreases, the smaller \sigma^2 will be and thus the smaller the error for the Markov chain average.

More formally, the Markov chain CLT is given by the following result:

Theorem (Markov chain central limit theorem). As N\to\infty,

    \[\sqrt{N}\left(\hat{f}_N - \expect_{x \sim \pi} [f(x)]\right)\]

converges in distribution to a normal random variable with mean zero and variance \sigma^2, as defined in (1).

To compare the lazy and non-lazy chains, let’s compute the variance parameter \sigma^2 in the Markov chain CLT in terms of the eigenvalues of the chain. For the remainder of this post, let f : \{1,\ldots,m\} \to \real be any function.

Step 1: Spectral Decomposition

Recall that, by the spectral theorem, the transition matrix P has eigenvectors \varphi_1,\varphi_2,\ldots,\varphi_m that are orthonormal in the \pi-inner product

    \[\langle \varphi_i,\varphi_j \rangle = \expect_{x\sim \pi} [\varphi_i(x)\varphi_j(x)] = \begin{cases}1, & i = j, \\0, & i \ne j.\end{cases}\]

As in the previous post, we think of vectors such as \varphi_i \in \real^m as defining a function \varphi_i : \{1,\ldots,m\} \to \real. Thus, we can expand the function f using the eigenvectors

(2)   \[f = c_1 \varphi_1 + \cdots + c_m \varphi_m. \]

The first eigenvector \varphi_1 = \mathbf{1} is the vector of all ones (or, equivalently, the function that outputs 1 for every input).

Step 2: Applying the Transition Matrix to a Function

The transition matrix P is defined so that if the Markov chain has probability distribution \sigma^\top at time n, then the chain has distribution \sigma^\top P at time n+1. In other words, multiplying by P on the right advances a distribution forward one step in time. This leads us to ask: What is the interpretation of multiplying a function f on the left by P? That is, is there an interpretation of the matrix–vector product Pf?

Indeed, there is such an interpretation: The ith coordinate of Pf is the expectation of f(x_1) conditional on the chain starting at x_0 = i:

    \[(Pf)(i) = \expect[f(x_1) \mid x_0 = i].\]

Similarly,

(3)   \[(P^nf)(i) = \expect[f(x_n) \mid x_0 = i]. \]

There’s a straightforward proof of this fact. Let \delta_i^\top denote a probability distribution which places 100% of the probability mass on the single site i. The ith entry of P^n f is

    \[(P^n f)(i) = \delta_i^\top (P^n f) = (\delta_i^\top P^n) f.\]

We know that \rho^{(n)} = \delta_i^\top P^n is the distribution of the Markov chain after n steps when initialized in the initial distribution \rho^{(0)} = \delta_i^\top. Thus,

    \[(P^n f)(i) = (\rho^{(n)})^\top f = \sum_{j=1}^m \rho^{(n)}_j f(j) = \expect[f(x_n) \mid x_0=i].\]

This proves the formula (3).

Step 3: Computing the Covariance

With the help of the formula for P^nf and the spectral decomposition of f, we are ready to compute the covariances appearing in (1). Let x_0 \sim \pi. Then

(4)   \[\Cov (f(x_0),f(x_n)) = \expect[f(x_0)f(x_n)] - \expect[f(x_0)]\expect[f(x_n)]. \]

Let’s first compute \expect[f(x_0)] and \expect[f(x_n)]. Since \varphi_1 = \mathbf{1} is the vector/function of all ones, we have

    \[\expect[f(x_0)] = \expect[f(x_0) \cdot 1] = \expect[f(x_0) \varphi_1(x_0)] = \langle f, \varphi_1\rangle = c_1.\]

For the last equality, we use the spectral decomposition (2) along with the orthonormality of the eigenvectors \varphi_i.

Since we assume the chain is initialized in the stationary distribution x_0 \sim \pi, it remains in stationarity at time n, x_n \sim \pi, so we have \expect[f(x_n)] = \expect[f(x_0)] = c_1.

Now, let’s compute \expect[f(x_0)f(x_n)]. Use the law of total expectation to write

    \begin{align*}\expect[f(x_0)f(x_n)] &= \sum_{i=1}^m \expect[f(x_0) f(x_n) \mid x_0 = i] \prob\{x_0 = i\} \\&= \sum_{i=1}^m f(i) \expect[f(x_n) \mid x_0 = i] \pi_i.\end{align*}


Now, invoke (3) and the definition of the \pi-inner product to write

    \[\expect[f(x_0)f(x_n)] = \sum_{i=1}^m f(i) (P^n f)(i) \pi_i = \langle f, P^n f\rangle.\]

Finally, using the spectral decomposition, we obtain

    \[\expect[f(x_0)f(x_n)] = \left\langle \sum_{i=1}^m c_i \varphi_i,\sum_{i=1}^m c_i P^n\varphi_i \right\rangle = \left\langle \sum_{i=1}^m c_i \varphi_i,\sum_{i=1}^m c_i \lambda_i^n \varphi_i \right\rangle = \sum_{i=1}^m \lambda_i^n \, c_i^2.\]

Combining this formula with our earlier observation that \expect[f(x_0)] = \expect[f(x_n)] = c_1 and plugging into (4), we obtain

(5)   \[\Cov(f(x_0),f(x_n)) = \sum_{i=1}^m \lambda_i^n \, c_i^2 - c_1^2 = \sum_{i=2}^m \lambda_i^n c_i^2. \]

For the second equality, we use that the largest eigenvalue is \lambda_1 = 1, so c_1 entirely drops out of the covariance.

Step 4: Wrapping it Up

With the formula (5) for the covariance in hand, we are ready to evaluate the \sigma^2 variance parameter in the Markov chain CLT. First, note that the variance is

    \[\Var[f(x_0)] = \Cov(f(x_0),f(x_0)) = \sum_{i=2}^m \lambda_i^0 c_i^2 = \sum_{i=2}^m c_i^2.\]

Therefore, combining this formula for the variance and the formula (5) for the covariance, we obtain

    \begin{align*}\sigma^2 &= \Var[f(x_0)] + 2\sum_{n=1}^\infty \Cov(f(x_0),f(x_n)) \\&= \sum_{i=2}^m c_i^2 + 2\sum_{n=1}^\infty \sum_{i=2}^m \lambda_i^n \, c_i^2 \\&= -\sum_{i=2}^m c_i^2 + 2\sum_{i=2}^m \left(\sum_{n=0}^\infty \lambda_i^n\right)c_i^2 .\end{align*}


Now, apply the formula for the sum of an infinite geometric series to obtain

(6)   \[\sigma^2 = -\sum_{i=2}^m c_i^2 + 2\sum_{i=2}^m \frac{1}{1-\lambda_i} c_i^2 = \sum_{i=2}^m \left(\frac{2}{1-\lambda_i}-1\right)c_i^2 = \sum_{i=2}^m \frac{1+\lambda_i}{1-\lambda_i} c_i^2. \]

Conclusion: Laziness is (Asymptotically) Bad for Averaging

Before we get to the conclusions, let’s summarize where we are. We are seeking to use the Markov chain average

    \[\hat{f}_N = \frac{1}{N} \sum_{n=0}^{N-1} f(x_n)\]

as an approximation to the stationary mean \expect_{x \sim \pi} [f(x)]. The Markov chain central limit theorem shows that, for a large number of steps N, the error \hat{f}_N - \expect_{x\sim\pi}[f(x)] is approximately normally distributed with mean zero and variance \sigma^2/N. So to obtain accurate estimates, we want \sigma^2 to be as small as possible.

After some work, we were able to derive a formula (6) for \sigma^2 in terms of the eigenvalues \lambda_1,\ldots,\lambda_m of the transition matrix P. The formula for \sigma^2 for the lazy chain is identical, except with each eigenvalue replaced by (\lambda_i+1)/2. Thus, we have

    \begin{align*}\sigma^2_{\rm nonlazy} &= \sum_{i=2}^m \frac{1+\lambda_i}{1-\lambda_i} c_i^2, \\\sigma^2_{\rm lazy} &= \sum_{i=2}^m \frac{3+\lambda_i}{1-\lambda_i} c_i^2.\end{align*}


From these formulas, it is clear that the lazy Markov chain has a larger \sigma^2 parameter and is thus less accurate than the non-lazy Markov chain, no matter what the eigenvalues are.

To compare the non-lazy and lazy chains in another way, consider the plot below. The blue solid line shows the amplification factor (1+\lambda_i)/(1-\lambda_i) of an eigenvalue \lambda_i, which represents the amount by which the squared coefficient c_i^2 is scaled in \sigma^2. In the red-dashed line, we see the corresponding amplification factor (1+\lambda_i^{\rm lazy})/(1-\lambda^{\rm lazy}_i) for the corresponding eigenvalue \lambda^{\rm lazy}_i = (\lambda_i+1)/2 of the lazy chain. We see that at every \lambda value, the lazy chain has a higher amplification factor than the original chain.

Remember that our original motivation for using the lazy chain was to remove the possibility of slow convergence of the chain to stationarity because of negative eigenvalues. But for the averaging application, negative eigenvalues are great. The process of Markov chain averaging shrinks the influence of negative eigenvalues on \sigma^2, whereas positive eigenvalues are amplified. For the averaging application, negative eigenvalues for the chain are a feature, not a bug.

Moral of the Story

Much of Markov chain theory, particularly in theoretical parts of computer science, mathematics, and machine learning, is centered around proving convergence to stationarity. Negative eigenvalues are a genuine obstruction to convergence to stationarity, and using the lazy chain in practice may be a sensible idea if one truly needs a sample from the stationary distribution in a given application.

But one-shot sampling is not the only or even the most common use of Markov chains in computational practice. For other applications, such as averaging, the negative eigenvalues are actually a help. Using the lazy chain in practice for these problems would be a poor idea.

To me, the broader lesson in this story is that, as applied mathematicians, it can be inadvisable to become too fixated on one particular mathematical benchmark for designing and analyzing algorithms. Proving rapid convergence to stationarity with respect to the total variation distance is one nice way to analyze Markov chains. But it isn’t the only way, and chains not possessing this property because of large negative eigenvalues may actually be better in practice for some problems. Ultimately, applied mathematics should, at least in part, be guided by applications, and paying attention to how algorithms are used in practice should inform how we build and evaluate new ones.

Markov Musings 3: Spectral Theory

This post is part of a series, Markov Musings, about the mathematical analysis of Markov chains. See here for the first post in the series.


In the previous two posts, we proved the fundamental theorem of Markov chains using couplings and discussed the use of couplings to bound the mixing time, the time required for the chain to mix to near-stationarity.

In this post, we will continue this discussion by discussing spectral methods for understanding the convergence of Markov chains.

Mathematical Setting

In this post, we will study a reversible Markov chain on \{1,\ldots,m\} with transition probability matrix P \in \real^{m\times m} and stationary distribution \pi. Our goal will be to use the eigenvalues and eigenvectors of the matrix P to understand the properties of the Markov chain.

Throughout our discussion, it will be helpful to treat vectors f\in\real^m and functions f:\{1,\ldots,m\}\to \real as being one and the same, and we will use both f_i and f(i) to denote the ith entry of the vector f (aka the evaluation of the function f at i). By adopting this perspective, a vector is not merely a list of m numbers, but instead labels each state i=1,2,\ldots,m with a numeric value.

Given a function/vector f, we let \expect[f] denote the expected value of f(X) where X is drawn from \pi:

    \[\expect[f] \coloneqq \expect_{X \sim \pi} [f(X)] = \sum_{i=1}^m \pi_i f(i).\]

The variance is defined similarly:

    \[\Var(f) \coloneqq \expect [(f - \expect[f])^2] = \sum_{i=1}^m (f(i) - \expect[f])^2 \pi_i.\]

As a final piece of notation, we let \delta_i denote a probability distribution which assigns 100% probability to outcome i.

Spectral Theory

The eigenvalues and eigenvectors of general square matrices are ill-behaved creatures. Indeed, a general m\times m matrix with real entries can have complex-valued eigenvalues and fail to possess a full suite of m linearly independent eigenvectors. The situation is dramatically better for symmetric matrices which obey the spectral theorem:

Theorem (spectral theorem for real symmetric matrices). An m\times m real symmetric matrix A (i.e., one satisfying A^\top=A) has m real eigenvalues \lambda_1,\ldots,\lambda_m and m orthonormal eigenvectors u_1,\ldots,u_m.

Unfortunately for us, the transition matrix P for a reversible Markov chain is not always symmetric. But despite this, there’s a surprise: P always has real eigenvalues. This leads us to ask:

Why are the eigenvalues of the transition matrix P real?

To answer this question, we will need to develop a more general version of the spectral theorem and use our standing assumption that the Markov chain is reversible.

The Transpose

In our quest to develop a general version of the spectral theorem, we look more deeply into the hypothesis of the theorem, namely that A is equal to its transpose A^\top. Let’s first ask: What does the transpose A^\top mean?

Recall that \real^m is equipped with the standard inner product, sometimes called the Euclidean or dot product. We denote this product by (\cdot,\cdot) and it is defined as

    \[(f,g) \coloneqq f^\top g = \sum_{i=1}^m f(i) g(i).\]

We can think of the standard inner product as defining the amount of the vector f that points in the direction of g.

The transpose of a matrix A is closely related to the standard inner product. Specifically, A^\top is the (unique) matrix satisfying the adjoint property:

    \[(f,Ag) = (A^\top f, g)\quad \text{for every }f,g\in\real^m.\]

So the amount of f in the direction of Ag is the same as the amount of A^\top f in the direction of g.

The Adjoint and the General Spectral Theorem

Since the transpose is intimately related to the standard inner product on \real^m, it is natural to wonder if non-standard inner products on \real^m give rise to non-standard versions of the transpose. This idea proves to be true.

Definition (adjoint). Let [\cdot,\cdot] be any inner product on \real^m and let A be an m\times m matrix. The adjoint of A with respect to the inner product [\cdot,\cdot] is the (unique) matrix A^* such that

    \[[f,Ag]=[A^*f,g]\quad\text{for every }f,g\in\real^m.\]

A matrix A is self-adjoint if it equals its own adjoint, A=A^*.

The spectral theorem naturally extends to the more abstract setting of adjoints with respect to non-standard inner products:

Theorem (general spectral theorem). Let A be an m\times m matrix which is self-adjoint with respect to an inner product [\cdot,\cdot]. Then A has m real eigenvalues \lambda_1,\ldots,\lambda_m and m eigenvectors u_1,\ldots,u_m which are [\cdot,\cdot]-orthonormal:

    \[[u_i,u_j]=\begin{cases}1, & i=j, \\0,& i\ne j.\end{cases}\]

Reversibility and Self-Adjointness

The general spectral theorem offers us a path to understand our observation from above that the eigenvalues of P are always real. Namely, we ask:

Is there an inner product under which P is self-adjoint?

Fortunately, the answer is yes. Define the \pi-inner product

    \[\langle f, g \rangle \coloneqq \expect[f\cdot g] = \sum_{i=1}^m f(i)g(i) \pi_i.\]

This inner product is very natural. If f and g are mean-zero (\expect[f] = \expect[g] = 0), it reports the covariance of f(x) and g(x) where x \sim \pi is drawn from \pi.

Let us compute the adjoint of the transition matrix P in the \pi-inner product \langle \cdot,\cdot \rangle:

    \begin{align*}\langle f, Pg \rangle&= \sum_{i=1}^m f(i) \left(\sum_{j=1}^m P_{ij} g(j) \right)\pi_i= \sum_{i,j=1}^m f(i) g(j) \pi_iP_{ij} .\end{align*}

Recall that we have assumed our Markov chain is reversible. Thus, the detailed balance condition

    \[\pi_i P_{ij} = \pi_j P_{ji} \quad \text{for } i,j=1,2,\ldots,m\]

implies that

    \[\langle f, Pg \rangle= \sum_{i,j=1}^m f(i) g(j) \pi_jP_{ji} = \sum_{j=1}^m \left( \sum_{i=1}^m f(i) P_{ji} \right) g(j) \pi_j = \langle Pf, g\rangle.\]

Thus, we conclude that P is self-adjoint.

We cannot emphasize enough that the reversibility assumption is crucial to ensure that P is self-adjoint. In fact,

Theorem (reversibility and self-adjointness). The transition matrix of a general Markov chain with stationary distribution \pi is self-adjoint in the \pi-inner product if and only if the chain is reversible.
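
As a quick numerical sanity check (using a randomly generated reversible chain, namely a random walk on a graph with symmetric weights), P is self-adjoint in the \pi-inner product up to rounding error, and its eigenvalues come out real:

rng(1); m = 5;
W = rand(m); W = W + W';                 % symmetric positive weights on a graph
P = W ./ sum(W, 2);                      % random-walk transition matrix (reversible)
pi_ = sum(W, 2)' / sum(W(:));            % its stationary distribution
f = randn(m, 1); g = randn(m, 1);        % two arbitrary functions
ip = @(u, v) sum(u .* v .* pi_');        % the pi-inner product
abs(ip(f, P*g) - ip(P*f, g))             % zero up to rounding error
eig(P)                                   % real eigenvalues, as promised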

Spectral Theorem for Markov Chains

Since P is self-adjoint in the \pi-inner product, the general spectral theorem implies the following

Corollary (spectral theorem for reversible Markov chain). The reversible Markov transition matrix P has m real eigenvalues \lambda_1 \ge \cdots \ge \lambda_m associated with eigenvectors \varphi_1,\ldots,\varphi_m which are orthonormal in the \pi-inner product:

    \[\langle\varphi_i,\varphi_j\rangle=\begin{cases}1, &i=j\\0,& i\ne j.\end{cases}\]

Since P is a reversible Markov transition matrix—not just any self-adjoint matrix—the eigenvalues of P satisfy additional properties:

  • Bounded. All of the eigenvalues of P lie between -1 and 1.1Here’s an argument. The vectors \delta_1,\ldots,\delta_m span all of \real^m, where \delta_i denotes a vector with 1 in position i and 0 elsewhere. Then, by the fundamental theorem of Markov chains, \delta_i^\top P^n converges to \pi^\top for every i. In particular, for any vector \alpha, \alpha^\top P^n = \sum_{i=1}^m \alpha_i (\delta_i^\top P^n) = (\sum_{i=1}^m \alpha_i) \pi^\top. Thus, since \limsup_{n\to\infty} \norm{\alpha^\top P^n} < +\infty for every vector \alpha, all of the eigenvalues must be \le 1 in magnitude.
  • Eigenvalue 1. \lambda_1 = 1 is an eigenvalue of P with eigenvector \varphi_1 = \mathbf{1}, where \mathbf{1} is the vector of all ones.

For a primitive chain, we also have the following property:

  • Contractive. For a primitive chain, all eigenvalues other than \lambda_1 have magnitude < 1.2This follows since \delta_i^\top P^n \to \pi^\top for every i=1,2,\ldots,m, so \pi^\top must be the unique left eigenvector with eigenvalue of modulus \ge 1.

Distance Between Probability Distributions Redux

In the previous post, we used the total variation distance to compare probability distributions:

Definition (total variation distance). The total variation distance between probability distributions \sigma and \pi is

    \[\norm{\sigma - \pi}_{\rm TV} = \max_{A \subseteq \{1,\ldots,m\}} |\sigma(A) - \pi(A)| = \frac{1}{2} \sum_{i=1}^m |\sigma_i - \pi_i|.\]

The total variation distance is an \ell_1 way of comparing two probability distributions since it can be computed by adding the absolute difference between the probabilities \sigma_i and \pi_i of each outcome. Spectral theory plays more nicely with an “\ell_2” way of comparing probability distributions, which we develop now.

Densities

Given two probability densities \sigma and \pi, the density of \sigma with respect to \pi is the function3Note that we typically associate probability distributions \sigma and \pi with row vectors, whereas the density f is a function which we identify with column vectors. For those interested and familiar with measure theory, here is a good reason why this makes sense. In the continuum setting, probability distributions \sigma and \pi are measures whereas the density f remains a function, known as the Radon–Nikodym derivative f = d\sigma/d\pi. This provides a general way of figuring out which objects for finite state space Markov chains are row vectors and which are column vectors: Measures are row vectors whereas functions are column vectors. h = d\sigma/d\pi given by

    \[h(i) = \frac{d\sigma}{d\pi}(i) = \frac{\sigma_i}{\pi_i} \quad \text{for } i=1,2,\ldots,m.\]

The density h = d\sigma/d\pi satisfies the property that

(1)   \[\expect_{X \sim \sigma} [g(X)] = \expect_{Y \sim \pi}[g(Y)h(Y)] = \sum_{i=1}^m g(i)h(i) \pi_i. \]

To see why we call h = d\sigma/d\pi a density, it may be helpful to appeal to continuous probability for a moment. If 0\le X \le 1 is a random variable with distribution \sigma, the probability density function of \sigma (with respect to the uniform distribution \mathrm{Unif}[0,1]) is a function h such that

    \[\expect_{X \sim \sigma} [g(X)] = \expect_{Y\sim \mathrm{Unif}[0,1]} [g(Y)h(Y)] = \int_0^1 g(x) h(x) \, dx.\]

The formula for the continuous case is exactly the same as the finite case (1) with sums \sum_{i=1}^m (\cdots) \pi_i replaced with integrals \int_0^1 (\cdots) \, dx.

The Chi-Squared Divergence

We are now ready to introduce our “\ell_2” way of comparing probability distributions.

Definition (\chi^2-divergence). The \chi^2-divergence of \sigma with respect to \pi is the variance of the density function:

    \[\chi^2(\sigma \mid\mid \pi) \coloneqq \Var \left(\frac{d\sigma}{d\pi} \right) = \expect \left[\left( \frac{d\sigma}{d\pi} - 1 \right)^2\right] = \expect \left[\left(\frac{d\sigma}{d\pi}\right)^2\right] - 1.\]

To see the last equality is a valid formula for \chi^2(\sigma\mid\mid\pi) \coloneqq \Var(d\sigma/d\pi), note that

(2)   \[\expect\left[\frac{d\sigma}{d\pi}\right] = \sum_{i=1}^m \frac{d\sigma}{d\pi}(i) \pi_i = \sum_{i=1}^m \frac{\sigma_i}{\pi_i} \pi_i = \sum_{i=1}^m \sigma_i = 1. \]

Relations Between Distance Measures

The \chi^2 divergence always gives an upper bound on the total variation distance. Indeed, first note that we can express the total variation distance as

    \[\norm{\sigma - \pi}_{\rm TV} = \frac{1}{2} \expect \left[ \left| \frac{d\sigma}{d\pi} - 1 \right| \right].\]

Thus, using Lyapunov’s inequality, we conclude

(3)   \[\norm{\sigma - \pi}_{\rm TV} = \frac{1}{2} \expect \left[ \left| \frac{d\sigma}{d\pi} - 1 \right| \right] \le \frac{1}{2} \left(\expect \left[ \left( \frac{d\sigma}{d\pi} - 1 \right)^2 \right]\right)^{1/2} = \frac{1}{2} \sqrt{\chi^2(\sigma \mid\mid \pi)}. \]
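
As a small made-up example in MATLAB, we can compute both quantities and confirm the bound (3):

pi_ = [0.5 0.3 0.2]; sigma = [0.2 0.3 0.5];  % two made-up probability distributions
h = sigma ./ pi_;                            % the density d(sigma)/d(pi)
chi2 = sum((h - 1).^2 .* pi_);               % chi^2(sigma || pi)
tv = 0.5 * sum(abs(sigma - pi_));            % total variation distance
[tv, 0.5*sqrt(chi2)]                         % the first is at most the second, as (3) promises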

Markov Chain Convergence by Eigenvalues

Now, we prove a quantitative version of the fundamental theorem of Markov chains (for reversible processes) using spectral theory and eigenvalues:4Note that we used the fundamental theorem of Markov chains in above footnotes to prove the “bounded” and “contractive” properties, so, at present, this purported proof of the fundamental theorem would be circular. Fortunately, we can establish these two claims independently of the fundamental theorem, say, by appealing to the Perron–Frobenius theorem. Thus, this argument does give a genuine non-circular proof of the fundamental theorem.

Theorem (Markov chain convergence by eigenvalues). Let \rho^{(0)},\rho^{(1)},\rho^{(2)},\ldots denote the distributions of the Markov chain at times 0,1,2,\ldots. Then

    \[\chi^2\left(\rho^{(n)} \, \middle|\middle| \, \pi\right) \le \left( \max \{ \lambda_2, -\lambda_m \} \right)^{2n} \chi^2\left(\rho^{(0)} \, \middle|\middle| \, \pi\right).\]

This raises a natural question: How large can the initial \chi^2 divergence between \rho^{(0)} and \pi be? This is answered by the next result.

Proposition (Initial distance to stationarity). For any i \in \{1,\ldots,m\},

(4)   \[\chi^2(\delta_i \mid\mid \pi) \le \frac{1}{\pi_i}. \]

For any initial distribution \rho^{(0)}, we have

(5)   \[\chi^2\left( \rho^{(0)} \, \middle|\middle| \, \pi \right) \le \frac{1}{\min_{1\le i\le m} \pi_i} \]

The bound (4) follows from a direct computation:

    \[\chi^2(\delta_i \mid\mid \pi) = \Var(d\delta_i/d\pi) \le \expect[(d\delta_i/d\pi)^2] = 1/\pi_i.\]

For (5), observe that \chi^2(\rho^{(0)} \mid\mid \pi) is a convex function of the initial distribution \rho^{(0)}. Therefore, its maximum over the convex set of all probability distributions is attained at an extreme point \rho^{(0)} = \delta_i for some i. Consequently, (5) follows directly from (4).

Using the previous two results and equation (3), we immediately obtain the following:

Corollary (Mixing time). When initialized from a distribution \rho^{(0)}, the chain mixes to \varepsilon-total variation distance to stationarity

    \[\norm{\rho^{(n)} - \pi}_{\rm TV} \le \varepsilon\]

after

(6)   \[n = \left\lceil \frac{1}{\log(1/\max\{\lambda_2,-\lambda_m\})}\log \left( \frac{1}{2\varepsilon \min_{1\le i \le m}\sqrt{\pi_i}} \right) \right\rceil \text{ steps}. \]

Indeed,

    \begin{align*}\norm{\rho^{(n)} - \pi}_{\rm TV} &\le \frac{1}{2} \sqrt{\chi^2(\rho^{(n)} \mid\mid \pi)} \\&\le \frac{1}{2} (\max\{\lambda_2,-\lambda_m\})^n \sqrt{\chi^2(\rho^{(0)} \mid\mid \pi)} \\&\le \frac{(\max\{\lambda_2,-\lambda_m\})^n}{2\min_{1\le i \le m} \sqrt{\pi_i}}.\end{align*}


Setting the final right-hand side equal to \varepsilon and rearranging gives (6).
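
As an illustration, here is a MATLAB sketch that evaluates the bound (6) for a small randomly generated reversible chain; the similarity transformation is just a convenient way to obtain the (real) eigenvalues numerically, and the target accuracy \varepsilon = 0.01 is an arbitrary choice.

rng(1); m = 5;
W = rand(m); W = W + W';                          % symmetric weights define a reversible chain
P = W ./ sum(W, 2);                               % transition matrix
pi_ = sum(W, 2)' / sum(W(:));                     % stationary distribution
M = diag(sqrt(pi_)) * P * diag(1 ./ sqrt(pi_));   % symmetric matrix similar to P
lam = sort(eig((M + M')/2), 'descend');           % the real eigenvalues of P
rate = max(lam(2), -lam(end));
epsilon = 0.01;                                   % target total variation distance
n_mix = ceil(log(1/(2*epsilon*sqrt(min(pi_)))) / log(1/rate))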

Proof of Markov Chain Convergence Theorem

We conclude with a proof of the Markov chain convergence result.

Recurrence for Densities

Our first task is to derive a recurrence for the densities d\rho^{(n)}/d\pi. To do this, we use the recurrence for the probability distribution:

    \[\rho^{(n+1)}_j = \sum_{i=1}^m \rho_i^{(n)} P_{ij}.\]

Thus,

    \[\left( \frac{d\rho^{(n+1)}}{d\pi}\right)_j = \frac{\rho_j^{(n+1)}}{\pi_j} = \frac{\sum_{i=1}^m \rho^{(n)}_i P_{ij}}{\pi_j}= \sum_{i=1}^m \rho^{(n)}_i\frac{P_{ij}}{\pi_j}.\]

We now invoke the detailed balance P_{ij}/\pi_j = P_{ji} / \pi_i, which implies

    \[\left( \frac{d\rho^{(n+1)}}{d\pi}\right)_j = \sum_{i=1}^m\rho^{(n)}_i  \frac{P_{ji}}{\pi_i} = \sum_{i=1}^m P_{ji} \frac{\rho^{(n)}_i}{\pi_i} = \left( P \frac{d \rho^{(n)}}{d\pi} \right)_j.\]

The product P(d\rho^{(n)}/d\pi) is an ordinary matrix–vector product. Thus, we have shown

(7)   \[\frac{d\rho^{(n+1)}}{d\pi} = P \frac{d\rho^{(n)}}{d\pi} \quad \text{for } n =0,1,\ldots. \]

Spectral Decomposition

Now that we have a recurrence for the densities d\rho^{(n)}/d\pi, we can understand how the densities change in time. Expand the initial density in eigenvectors of P:

    \[\frac{d\rho^{(0)}}{d\pi} = c_1 \mathbf{1} + c_2 \varphi_2 + \cdots + c_m \varphi_m.\]

As we verified above in (2), we have \expect[d\rho^{(0)}/d\pi] = 1. Thus, we conclude c_1 = 1. Since the \varphi_j are eigenvectors of P with eigenvalues \lambda_j, the recurrence (7) implies

    \[\frac{d\rho^{(n)}}{d\pi} = \mathbf{1} + c_2 \lambda_2^n \varphi_2 + \cdots + c_m \lambda_m^n \varphi_m.\]

Conclusion

Finally, we compute

    \[\chi^2(\rho^{(n)} \mid\mid \pi) = \sum_{i=2}^m c_i^2 \lambda_i^{2n} \le (\max \{ \lambda_2, -\lambda_m \})^{2n} \sum_{i=2}^m c_i^2 = (\max \{ \lambda_2, -\lambda_m \})^{2n} \chi^2(\rho^{(0)} \mid\mid \pi).\]

The theorem is proven.

Markov Musings 2: Couplings

This post is part of a series, Markov Musings, about the mathematical analysis of Markov chains. See here for the first post in the series.


In the last post, we saw a proof of the fundamental theorem of Markov chains using the method of couplings. In this post, we will use couplings to provide quantitative bounds on the rate of Markov chain convergence.

Mixing Time

Let us begin where the last post ended by recalling some facts about the mixing time of a Markov chain. Consider a Markov chain x_0,x_1,x_2,\ldots with stationary distribution \pi. Recall that the mixing time is

    \[\tau_{\rm mix} \coloneqq \min \left\{ n \ge 1 : \max_{\rho^{(0)}} \norm{\rho^{(n)} - \pi}_{\rm TV} \le \frac{1}{2e} \right\}.\]


Here, \rho^{(n)} denotes the distribution of the state x_n of the chain at time n and \norm{\cdot}_{\rm TV} is the total variation distance. For more on the total variation distance, see the previous post.

At the end of last post, we saw the following theorem, showing that the mixing time controls the rate of Markov chain convergence.

Theorem (mixing time as convergence rate). For any starting distribution \rho^{(0)},

    \[\norm{\rho^{(n)} - \pi}_{\rm TV} \le e^{-\lfloor n / \tau_{\rm mix}\rfloor}.\]

Consequently, if n\ge \tau_{\rm mix}\cdot \lceil \log(1/\epsilon)\rceil, then \norm{\rho^{(n)} - \pi}_{\rm TV} \le \varepsilon.

This result motivates us to develop techniques to bound the mixing time \tau_{\rm mix}.

Couplings and Convergence

In the previous post, we made essential use of couplings of probability distributions. In this post, we will extend this notion by using couplings of entire Markov chains.

Definition (coupling of Markov chains). A coupling of Markov chains with transition matrix P is a pair of processes x_0,x_1,x_2,\ldots and y_0,y_1,y_2,\ldots such that (A) each process is individually a Markov chain with transition matrix P: In particular,

    \[\prob\{x_{n+1} = j \mid x_n = i \} = \prob\{y_{n+1} = j \mid y_n = i \} = P_{ij}\]

and (B) if x_n = y_n, then x_{n+1} = y_{n+1}.

A coupling of Markov chains thus consists of a pair of Markov chains which become glued together should they ever touch.

There are several ways we can use couplings to bound the mixing time of Markov chains. Here is one way. Let x_0,x_1,x_2,\ldots and y_0,y_1,y_2,\ldots be a coupling of Markov chains for which x_0 is initialized in some initial distribution x_0 \sim \rho^{(0)} and y_0 is initialized in the stationary distribution y_0 \sim \pi. Let \tau_{\rho^{(0)}} be the time at which x and y meet:

    \[\tau_{\rho^{(0)}} \coloneqq \min\{ n \ge 0 : x_n = y_n\}.\]

The following theorem shows that the tails of the random variable \tau_{\rho^{(0)}} control the convergence of the Markov chain x_0,x_1,x_2,\ldots to stationarity.

Theorem (convergence from couplings). For every n \ge 0, \norm{\rho^{(n)} - \pi}_{\rm TV} \le \prob\{\tau_{\rho^{(0)}} > n\}.

This result is an immediate application of the coupling lemma from the last post. Using this bound, we can bound the mixing time:

Corollary (mixing time from couplings). \tau_{\rm mix} \le 2e \max_{\rho^{(0)}} \expect[\tau_{\rho^{(0)}}].

The maximum is taken over all initial distributions \rho^{(0)}.

Indeed, by the above theorem and Markov’s inequality,1For more on concentration inequalities such as Markov’s inequality, see my post on the subject.

    \[\norm{\rho^{(n)} - \pi}_{\rm TV} \le \prob\{\tau_{\rho^{(0)}} > n\}\le \frac{\expect[\tau_{\rho^{(0)}}]}{n}.\]

Take a maximum over the initial distribution \rho^{(0)}. By the definition of the mixing time, the maximum total variation distance to stationarity at time n = \tau_{\rm mix} - 1 still exceeds 1/2e, so

    \[\frac{1}{2e} < \max_{\rho^{(0)}}\norm{\rho^{(\tau_{\rm mix}-1)} - \pi}_{\rm TV} \le \frac{\max_{\rho^{(0)}}\expect[\tau_{\rho^{(0)}}]}{\tau_{\rm mix}-1}.\]

Rearranging gives the corollary.

Example: Bit Strings

There are a number of examples2See, for instance, section 5.3 of Markov Chains and Mixing Times. of mixing time bounds proved using the method of couplings, but we will focus on a simple but illustrative example: a lazy random walk on the set of bit strings.

A bit string is a sequence of 0’s and 1’s. Bit strings are used to store and represent information on digital computers. For instance, the character “a” is stored as “1100001” using the ASCII encoding.

The Chain

Our example is a Markov chain whose states consist of all bit strings of length N. For each step of the chain,

  • With 50% probability, we do nothing and leave the state unchanged.
  • With 50% probability, we choose a (uniform) random position from 1 to N and flip the bit in that position, changing 0 to 1 or 1 to 0.

One can think of this Markov chain as modeling a random process in which a bit string is corrupted by errors which flip a single bit. The stationary distribution for this chain is the uniform distribution on all bit strings (i.e., each bit string of length N has the same probability 2^{-N}). Therefore, one can also think of this Markov chain as a very inefficient way of generating a string of random bits.3Another way of thinking about this Markov chain is as a random walk on the vertices of an N-dimensional hypercube graph. With 50% probability, we stay put, and with 50% probability, we move along a random edge. Because we have a 50% probability of doing nothing, we call this process a lazy random walk.

Designing the Coupling

Let us use the method of couplings to determine how fast this chain mixes. First, observe that there’s an alternative way of stating the transition rule for this Markov chain:

  • Pick a (uniform) random position from 1 to N and set the bit in that position to 0 with 50% probability and 1 with 50% probability.

Indeed, under this rule, half of the time we leave the state unchanged because we set the bit in the selected position to the value it already had. For the other half of the time, we flip the selected bit.

Now, we construct a coupling. Initialize x_0 in an arbitrary distribution \rho^{(0)} and y_0 in the stationary distribution (uniform over all bit strings). The key to construct a coupling will be to use the same randomness to update the state of both the “x” chain and the “y” chain. For this chain, there’s a natural way to do this.

At time n, pick a (uniform) random position 1\le p_n \le N and a (uniform) random bit b_n \in \{0,1\}. Set x_{n+1} by changing the p_nth bit of x_n to b_n, and set y_{n+1} by changing the p_nth bit of y_n to b_n.

We couple the two chains by setting the same position p_n to the same bit b_n.
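
Here is a small illustrative simulation of this coupling in MATLAB (not from the original discussion; the string length N and the initial states are arbitrary choices), which records how long the two chains take to meet:

rng(1); N = 50;                          % length of the bit strings
x = rand(1, N) < 0.5;                    % arbitrary initial state for the x chain
y = rand(1, N) < 0.5;                    % the y chain is started from stationarity (uniform bits)
tau = 0;
while any(x ~= y)
    p = randi(N);                        % a uniformly random position
    b = rand < 0.5;                      % a uniformly random bit
    x(p) = b; y(p) = b;                  % update both chains with the same randomness
    tau = tau + 1;
end
fprintf('Chains met after %d steps; N*log(N) + N = %.0f\n', tau, N*log(N) + N)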

Bounding the Mixing Time

To bound the mixing time, we need to understand when these two chains meet. Observe that if we pick position p to set at some point, the two Markov chains have the same bit in position p at every time later in the chain. Consequently,

If all of the positions 1,\ldots,N have been chosen by time n, then x_n = y_n.

The two chains might meet before all of the positions have been chosen, but they are guaranteed to meet after.

Set \tau_{\rm pick} to be the time at which all positions 1,\ldots,N have been selected at least once. Then our observation is equivalent to saying that the meeting time \tau_{\rho^{(0)}} is no larger than \tau_{\rm pick},

    \[\tau_{\rho^{(0)}} \le \tau_{\rm pick}.\]

Thus, to bound the mixing time, it is sufficient to compute the expected value of \tau_{\rm pick}.

Collecting Coupons

Computing the expected value of \tau_{\rm pick} is actually an example of a very famous problem in probability theory known as the coupon collector problem. The classic version goes like this:

A cereal company is running a promotion where each box of cereal contains a coupon, which is equally likely to be any of one of N different colors. A coupon of each color can be traded in for a toy. What is the expected number of boxes we need to open in order to qualify for the toy?

The coupon collector problem is exactly the same as ours, with the N different colors of coupons playing the same role as the N different positions in our bit strings. Going forward, let’s use the language of coupons and boxes to describe this problem, as it’s more colorful.

When tackling a hard question like computing \expect[\tau_{\rm pick}], it can be helpful to break it into easier pieces. Let’s start with the simplest possible question: How many picks do we need to collect the first coupon? Just one. We always get a coupon in our first box.

With that question answered, let’s ask the next-simplest question: How many picks do we need to collect the second coupon, differently colored than the first? There are N colors and N-1 of them are different from the first. So the probability of picking a new coupon in the second box is (N-1)/N. In fact,

Until we succeed at picking the second unique coupon, every box has an independent (N-1)/N chance of containing such a coupon.

The number of boxes needed is therefore a geometric random variable with success probability p = (N-1)/N, and its mean is 1/p = N/(N-1). On average, we need N/(N-1) picks to collect the second coupon.

The reasoning is the same for the third coupon. There are N-2 coupons we haven’t collected and N total, so the probability of picking the third coupon in each box is (N-2)/N. Thus, the number of picks is a geometric random variable with success probability (N-2)/N with mean N/(N-2).

In general, we need an expected N / (N-k+1) picks to collect the kth coupon. Adding up the expected number of picks for each k = 1,2,\ldots,N, we obtain that the total number of picks required to collect all the coupons is

    \[\expect[\tau_{\rm pick}] = 1 + \frac{N}{N-1} + \frac{N}{N-2} + \cdots + \frac{N}{N-(N-1)} = N \left( \frac{1}{N} + \frac{1}{N-1} + \cdots + \frac{1}{2} + 1 \right).\]

More concisely, if we let H_N denote the Nth harmonic number

    \[H_N = 1 + \frac{1}{2} + \cdots + \frac{1}{N}\]

then \expect[\tau_{\rm pick}] = NH_N. Since the Nth harmonic number satisfies H_N \le \log(N) + 1, we have

    \[\expect[\tau_{\rm pick}] \le N \log (N) + N.\]
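
For a concrete sense of scale, here is a short MATLAB loop comparing the exact expectation N H_N with the bound N\log(N) + N for a few values of N:

for N = [10 100 1000]
    fprintf('N = %4d:  N*H_N = %8.1f,  N*log(N) + N = %8.1f\n', ...
        N, N*sum(1 ./ (1:N)), N*log(N) + N)
end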

Conclusion

Putting all of these ingredients together, we obtain

    \[\tau_{\rm mix} \le 2e \max_{\rho^{(0)}} \expect[\tau_{\rho^{(0)}}] \le 2e \, \expect[\tau_{\rm pick}] \le 2e \, N \log(N) + 2e \, N.\]

The mixing time is (at most) proportional to N\log N.

More on Bit Strings and Collecting Coupons

Using more sophisticated techniques, it can be shown that the random variable \tau_{\rm pick} satisfies

    \[\prob\{\tau_{\rm pick} > N \log N + cN\} \le e^{-c}\]

for every N\ge 1 and c\ge 0, from which it follows that

    \[\norm{\rho^{(n)} - \pi}_{\rm TV} \le \varepsilon \quad \text{provided that} \quad n \ge N \log N + N \log(1/\varepsilon).\]

Using a more sophisticated analysis—also based on couplings—we can improve this result to

    \[\norm{\rho^{(n)} - \pi}_{\rm TV} \le \varepsilon \quad \text{provided that} \quad n \ge \frac{1}{2} N \log N + c \,N \log(1/\varepsilon)\]

for some constant c > 0. The constant 1/2 in this inequality cannot be improved. See section 5.3.1 of Markov Chains and Mixing Times.

Markov Musings 1: The Fundamental Theorem

For this summer, I’ve decided to open up another little mini-series on this blog called Markov Musings about the mathematical analysis of Markov chains, jumping off from my previous post on the subject. My main goal in writing this is to learn the material for myself, and I hope that what I produce is useful to others. My main resources are:

  1. The book Markov Chains and Mixing Times by Levin, Peres, and Wilmer;
  2. Lecture notes and videos by theoretical computer scientists Sinclair, Oveis Gharan, O’Donnell, and Schulman; and
  3. These notes by Rob Webber, for a complementary perspective from a scientific computing point of view.

Be warned, these posts will be more mathematical in nature than most of the material on my blog.


In my previous post on Markov chains, we discussed the fundamental theorem of Markov chains. Here is a slightly stronger version:

Theorem (fundamental theorem of Markov chains). A primitive Markov chain on a finite state space has a stationary distribution \pi > 0. When initialized from any starting distribution \rho^{(0)}, the distributions \rho^{(0)},\rho^{(1)},\rho^{(2)},\ldots of the chain at times 0,1,2,\ldots converge at an exponential rate to \pi.

My goal in this post will be to provide a proof of this fact using the method of couplings, adapted from the notes of Sinclair and Oveis Gharan. I like this proof because it feels very probabilistic (as opposed to more linear algebraic proofs of the fundamental theorem).

Here, and throughout, we say a matrix or vector is > 0 if all of its entries are strictly positive. Recall that a Markov chain with transition matrix P is primitive if there exists n for which P^n > 0. For this post, all Markov chains will have state space \{1,\ldots,m\}.

Total Variation Distance

In order to quantify the rate of Markov chain convergence, we need a way of quantifying the closeness of two probability distributions. This motivates the following definition:

Definition (total variation distance). The total variation distance between probability distributions \rho and \sigma on \{1,\ldots,m\} is the maximum difference between the probability of an event S under \rho and under \sigma:

    \[\norm{\rho - \sigma}_{\rm TV} = \max_{S \subseteq \{1,\ldots,m\}} |\rho(S) - \sigma(S)| = \frac{1}{2} \sum_{i=1}^m \left| \rho_i - \sigma_i \right|.\]

The total variation distance is always between 0 and 1. It is zero if and only if \rho and \sigma are the same distribution. It is one if and only if \rho and \sigma have disjoint supports—that is, there is no i \in \{1,\ldots,m\} for which \rho_i, \sigma_i > 0.
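
For instance, here is a tiny made-up example in MATLAB:

rho = [0.5 0.3 0.2]; sigma = [0.25 0.25 0.5];  % two made-up distributions on three outcomes
tv = 0.5 * sum(abs(rho - sigma))               % total variation distance, here 0.3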

The total variation distance is a very strict way of comparing two probability distributions. Sinclair’s notes provide a vivid example. Suppose that \rho denotes the uniform distribution on all possible ways of shuffling a deck of N cards, and \sigma denotes the uniform distribution on all ways of shuffling N cards with the ace of spades at the top. Then the total variation distance between \rho and \sigma is 1 - 1/N. Thus, despite these distributions seeming quite similar to us, they are nearly as far apart as possible in total variation distance. There are a number of alternative ways of measuring the closeness of probability distributions, some of which are less severe.

Couplings

Given a probability distribution \rho, it can be helpful to work with random variables drawn from \rho. Say a random variable x is drawn from the distribution \rho, written x \sim \rho, if

    \[\prob \{x = i\} = \rho_i \quad \text{for $i=1,2,\ldots,m$}.\]

To understand the total variation distance more, we shall need the following definition:

Definition (coupling). Given probability distributions \rho,\sigma on \{1,\ldots,m\}, a coupling \gamma is a distribution on \{1,\ldots,m\}^2 such that if a pair of random variables (x,y)\sim\gamma is drawn from \gamma, then x \sim \rho and y \sim \sigma. Denote the set of all couplings of \rho and \sigma as \operatorname{Couplings}(\rho,\sigma).

More succinctly, a coupling of \rho and \sigma is a joint distribution with marginals \rho and \sigma.

Couplings are related to total variation distance by the following lemma:1A proof is provided in Lemma 4.2 of Oveis Gharan’s notes. The coupling lemma holds in the full generality of probability measures on general spaces, and can be viewed as a special case of the Monge–Kantorovich duality principle of optimal transport. See Theorem 4.13 and Example 4.14 in van Handel’s notes for details.

Lemma (coupling lemma). Let \rho and \sigma be distributions on \{1,\ldots,m\}. Then

    \[\norm{\rho - \sigma}_{\rm TV} = \min_{\gamma \in \operatorname{Couplings}(\rho,\sigma)} \prob_{(x,y) \sim \gamma} \{ x \ne y \}.\]

Here, \prob_{(x,y) \sim \gamma} represents the probability for variables x,y drawn from joint distribution \gamma.

To see a simple example, suppose \rho = \sigma. Then the coupling lemma tells us that there is a coupling \gamma of \rho and itself such that \prob \{ x \ne y \} = 0. Indeed, such a coupling can be obtained by drawing x \sim \rho and setting y \coloneqq x. This defines a joint distribution \gamma under which x = y with 100% probability.

To unpack the coupling lemma a little more, it really contains two statements:

  • For any coupling \gamma between \rho and \sigma and (x,y) \sim \gamma,

        \[\norm{\rho - \sigma}_{\rm TV} \le \prob \{x \ne y \}.\]

  • There exists a coupling \gamma between \rho and \sigma such that when (x,y) \sim \gamma, then

        \[\norm{\rho - \sigma}_{\rm TV} = \prob \{x \ne y\}.\]

We will need both of these statements in our proof of the fundamental theorem.
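
One standard way to realize the second statement is to construct an optimal coupling explicitly, sometimes called a maximal coupling. The Python sketch below is my own illustration of that construction (the function name maximal_coupling and the random test distributions are mine, not from the notes): it places as much probability mass as possible on the diagonal and spreads the leftover mass off the diagonal, then checks that the marginals are \rho and \sigma and that \prob\{x \ne y\} equals the total variation distance.

```python
import numpy as np

def tv_distance(rho, sigma):
    return 0.5 * np.sum(np.abs(rho - sigma))

def maximal_coupling(rho, sigma):
    """Joint pmf gamma with marginals rho, sigma and P{x != y} = TV(rho, sigma)."""
    overlap = np.minimum(rho, sigma)      # mass that can sit on the diagonal
    gamma = np.diag(overlap)
    tv = 1.0 - overlap.sum()              # equals the total variation distance
    if tv > 0:
        # spread the leftover mass off the diagonal (product form)
        gamma += np.outer(rho - overlap, sigma - overlap) / tv
    return gamma

rng = np.random.default_rng(0)
rho = rng.random(6)
rho /= rho.sum()
sigma = rng.random(6)
sigma /= sigma.sum()

gamma = maximal_coupling(rho, sigma)
print(np.allclose(gamma.sum(axis=1), rho))            # first marginal is rho
print(np.allclose(gamma.sum(axis=0), sigma))          # second marginal is sigma
print(1 - np.trace(gamma), tv_distance(rho, sigma))   # P{x != y} equals the TV distance
```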

Proof of the Fundamental Theorem

With these ingredients in place, we are now ready to prove the fundamental theorem of Markov chains. First, we will assume there exists a stationary distribution \pi > 0. We will provide a proof of this fact at the end of this post.

Suppose we initialize the chain in distribution \rho^{(0)}, and let \rho^{(0)},\rho^{(1)},\rho^{(2)},\ldots denote the distributions of the chain at times 0,1,2,\ldots. Our goal will be to establish that \norm{\rho^{(n)} - \pi}_{\rm TV} \to 0 as n\to\infty at an exponential rate.

Distance to Stationarity is Non-Increasing

First, let us establish the more modest claim that \norm{\rho^{(n)} - \pi}_{\rm TV} is non-increasing:

(1)   \[\norm{\rho^{(n+1)} - \pi}_{\rm TV} \le \norm{\rho^{(n)} - \pi}_{\rm TV} \quad \text{for every } n =0,1,2\ldots. \]

We shall do this by means of the coupling lemma.

Consider two versions of the chain x_0,x_1,x_2,\ldots and y_0,y_1,y_2,\ldots, one initialized in x_0 \sim \rho^{(0)} and the other initialized with y_0 \sim \pi. We now apply the coupling lemma to the states x_n and y_n of the chains at time n. By the coupling lemma, there exists a coupling of x_n and y_n such that

    \[\norm{\rho^{(n)} - \pi}_{\rm TV} = \prob \{x_n\ne y_n\}.\]

Now construct a coupling of x_{n+1} and y_{n+1} according to the following rules:

  • If x_n = y_n, then draw x_{n+1} according to the transition matrix and set y_{n+1} \coloneqq x_{n+1}.
  • If x_n \ne y_n, then run the two chains independently to generate x_{n+1} and y_{n+1}.

By the way we’ve designed the coupling,

    \[\prob\{x_{n+1}\ne y_{n+1}\} \le \prob\{x_n \ne y_n\}.\]

Thus, by the coupling lemma,

    \[\norm{\rho^{(n+1)} - \pi}_{\rm TV} \le \prob\{x_{n+1}\ne y_{n+1}\} \le \prob\{x_n \ne y_n\} = \norm{\rho^{(n)} - \pi}_{\rm TV}.\]

We have established that the distance to stationarity is non-increasing.

This proof already contains the essence of the argument as to why Markov chains mix. We run two versions of the Markov chain, one initialized in an arbitrary distribution \rho^{(0)} and the other initialized in the stationary distribution \pi. While the states of the two chains are different, we run the chains independently. When the chains meet, we continue moving the chains together in synchrony. After running for long enough, the two chains are likely to meet, implying the chain has mixed.
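
Here is a small simulation of exactly this meet-and-stick coupling, written in Python with a made-up 3-state transition matrix of my choosing. By the first half of the coupling lemma, the probability that the two chains have not yet met upper-bounds the distance to stationarity; the simulation estimates that probability by Monte Carlo, so the comparison holds only up to sampling error.

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up primitive transition matrix on 3 states (rows sum to one)
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.3, 0.3, 0.4]])

# Stationary distribution: left eigenvector of P for the eigenvalue 1
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi /= pi.sum()

def step(state):
    return rng.choice(3, p=P[state])

n_trials, n_steps = 5000, 15
not_met = np.zeros(n_steps)
for _ in range(n_trials):
    x = 0                        # first chain starts at an arbitrary state
    y = rng.choice(3, p=pi)      # second chain starts in stationarity
    for n in range(n_steps):
        if x == y:
            x = y = step(x)      # once the chains meet, move them in synchrony
        else:
            x, y = step(x), step(y)  # otherwise, evolve them independently
        not_met[n] += (x != y)
not_met /= n_trials

# P{x_n != y_n} (estimated by Monte Carlo) upper-bounds the TV distance to pi
rho = np.array([1.0, 0.0, 0.0])
for n in range(n_steps):
    rho = rho @ P
    print(n + 1, 0.5 * np.abs(rho - pi).sum(), "<=", not_met[n])
```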

The All-to-All Case

As another stepping stone to the complete proof, let us prove the fundamental theorem in the special case where there is a strictly positive probability of moving between any two states, i.e., assuming P>0.

Consider the two chains x_0,x_1,x_2,\ldots and y_0,y_1,y_2,\ldots coupled as in the previous section. We compute the probability \prob \{x_{n+1} \ne y_{n+1}\} more carefully. Write it as

(2)   \begin{align*}\prob \{x_{n+1} \ne y_{n+1}\} &= \prob \{x_{n+1} \ne y_{n+1} \mid x_n \ne y_n\}\prob \{x_n \ne y_n\} \\&= (1-\prob \{x_{n+1} = y_{n+1} \mid x_n \ne y_n\})\prob \{x_n \ne y_n\}. \end{align*}


To compute \prob \{x_{n+1} = y_{n+1} \mid x_n \ne y_n\}, break into cases over all possible values i,j,k that y_{n+1},x_n,y_n can take:

    \begin{gather*}\prob \{x_{n+1} = y_{n+1} \mid x_n \ne y_n\} \\= \sum_{\substack{i,j,k\in \{1,\ldots,m\}\\ j\ne k}} \prob \{x_{n+1} =i \mid y_{n+1}=i,x_n=j,y_n=k\} \prob \{y_{n+1}=i,x_n=j,y_n=k \mid x_n \ne y_n\}.\end{gather*}

We now are in a place to lower bound this probability. Let p_{\rm min} be the minimum probability of moving between any two states

    \[p_{\rm min} \coloneqq \min_{1\le i,j\le m} P_{ij}.\]

The probability of moving from j to i is at least p_{\rm min}. We conclude the lower bound

    \begin{align*}\prob \{x_{n+1} = y_{n+1} \mid x_n \ne y_n\} &\ge \sum_{\substack{i,j,k\in \{1,\ldots,m\}\\ j\ne k}} p_{\rm min} \prob \{y_{n+1}=i,x_n=j,y_n=k \mid x_n \ne y_n\} = p_{\rm min}.\end{align*}

Substituting back in (2), we obtain

    \[\prob \{x_{n+1} \ne y_{n+1}\} \le (1 - p_{\rm min})\prob \{x_n \ne y_n\}.\]

By the coupling lemma, we conclude

    \[\norm{\rho^{(n+1)}-\pi}_{\rm TV} \le \prob \{x_{n+1} \ne y_{n+1}\} \le (1 - p_{\rm min})\prob \{x_n \ne y_n\} = (1 - p_{\rm min}) \norm{\rho^{(n)} - \pi}_{\rm TV}.\]

By iteration, we conclude

    \[\norm{\rho^{(n)} - \pi}_{\rm TV} \le (1 - p_{\rm min})^n \norm{\rho^{(0)} - \pi}_{\rm TV} \le (1 - p_{\rm min})^n.\]

The chain converges to stationarity at an exponential rate, as claimed.

The General Case

We’ve now proved the fundamental theorem in the special case when P > 0. Fortunately, together with our earlier observation that distance to stationarity is non-increasing, we can upgrade this proof into a proof for the general case.

We have assumed the Markov chain x_0,x_1,x_2,\ldots is primitive, so there exists a time n_0 for which P^{n_0} > 0. Construct an auxiliary Markov chain z_0,z_1,z_2,\ldots such that one step of the auxiliary chain consists of running n_0 steps of the original chain:

    \[z_0 = x_0, \:z_1 = x_{n_0}, \:z_2 = x_{2n_0},\ldots.\]

By the all-to-all case, we know that z_0,z_1,z_2,\ldots converges to stationarity at an exponential rate. Thus, since the distribution of z_k = x_{k\cdot n_0} is \rho^{(k\cdot n_0)}, we have

    \[\norm{\rho^{(k\cdot n_0)} - \pi}_{\rm TV} \le (1-\delta)^k \norm{\rho^{(0)} - \pi}_{\rm TV} \le (1-\delta)^k \quad \text{for }k=0,1,2,\ldots,\]

where \delta \coloneqq \min_{1\le i,j\le m} (P^{n_0})_{ij} > 0. Thus, since distance to stationarity is non-increasing, we have

    \[\norm{\rho^{(n)} - \pi}_{\rm TV} \le \norm{\rho^{(n_0 \cdot \lfloor n/n_0\rfloor)} - \pi}_{\rm TV} \le (1-\delta)^{\lfloor n/n_0\rfloor} \norm{\rho^{(0)} - \pi}_{\rm TV} \le (1-\delta)^{\lfloor n/n_0\rfloor}.\]

Thus, for any starting distribution \rho^{(0)}, the distribution of the chain \rho^{(n)} at time n converges to stationarity at an exponential rate as n\to\infty, proving the fundamental theorem.
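
As a sanity check, the following Python snippet (my own, with an arbitrarily chosen primitive-but-not-all-to-all chain) finds n_0 and \delta numerically and verifies the bound (1-\delta)^{\lfloor n/n_0\rfloor} against the exact total variation distance computed from powers of P.

```python
import numpy as np

# A primitive chain that is NOT all-to-all: some one-step transitions are impossible
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])

# Find n0 with P^{n0} > 0 and the corresponding delta = min entry of P^{n0}
n0, Pn = 1, P.copy()
while not np.all(Pn > 0):
    n0 += 1
    Pn = Pn @ P
delta = Pn.min()                 # here n0 = 3 and delta = 0.125

# Stationary distribution from the left eigenvector for the eigenvalue 1
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi /= pi.sum()

rho = np.array([1.0, 0.0, 0.0])  # a point-mass starting distribution
for n in range(1, 21):
    rho = rho @ P
    tv = 0.5 * np.abs(rho - pi).sum()
    bound = (1 - delta) ** (n // n0)
    assert tv <= bound + 1e-12   # the theorem's guarantee holds
    if n % 5 == 0:
        print(n, tv, bound)
```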

Mixing Time

We’ve proven a quantitative version of the fundamental theorem of Markov chains, showing that the total variation distance to stationarity decreases exponentially as a function of time. For algorithmic applications of Markov chains, we also care about the precise rate of convergence, as it dictates how long we need to run the chain. To this end, we define the mixing time:

Definition (mixing time). The mixing time \tau_{\rm mix} of a Markov chain is the number of steps required for the distance to stationarity to be at most 1/2e when started from a worst-case distribution:

    \[\tau_{\rm mix} \coloneqq \min \left\{ n \ge 1 : \max_{\rho^{(0)}} \norm{\rho^{(n)} - \pi}_{\rm TV} \le \frac{1}{2e} \right\}.\]

The mixing time controls the rate of convergence for a Markov chain:

Theorem (mixing time as a convergence rate). For any starting distribution,

    \[\norm{\rho^{(n)} - \pi}_{\rm TV} \le e^{-\lfloor n / \tau_{\rm mix}\rfloor}.\]

In particular, for \rho^{(n)} to be within \varepsilon total variation distance of \pi, we only need to run the chain for \tau_{\rm mix} \cdot \lceil \log(1/\varepsilon) \rceil steps:

Corollary (time to mix to \varepsilon-stationarity). If n\ge \tau_{\rm mix} \cdot \lceil \log(1/\varepsilon)\rceil, then \norm{\rho^{(n)} - \pi}_{\rm TV} \le \varepsilon.

These results can be proven using very similar techniques to the proof of the fundamental theorem from above. See Sinclair’s notes for more details.
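
For small state spaces, the mixing time can simply be computed by brute force. The sketch below is my own (the helper name mixing_time and the lazy random walk example are mine); it uses the fact that \norm{\rho^{(n)} - \pi}_{\rm TV} is a convex function of \rho^{(0)}, so the worst-case starting distribution is a point mass and it suffices to examine the rows of P^n.

```python
import numpy as np

def mixing_time(P, tol=1 / (2 * np.e), max_steps=10_000):
    """Smallest n with max over starting distributions of ||rho P^n - pi||_TV <= tol.

    The TV distance is a convex function of the starting distribution, so the
    maximum is attained at a point mass; it suffices to check the rows of P^n.
    """
    m = P.shape[0]
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
    pi /= pi.sum()
    Pn = np.eye(m)
    for n in range(1, max_steps + 1):
        Pn = Pn @ P
        if 0.5 * np.max(np.abs(Pn - pi).sum(axis=1)) <= tol:
            return n
    raise RuntimeError("chain did not mix within max_steps")

# Example: lazy random walk on a cycle of 8 states
m = 8
P = np.zeros((m, m))
for i in range(m):
    P[i, i] = 0.5
    P[i, (i + 1) % m] += 0.25
    P[i, (i - 1) % m] += 0.25
print(mixing_time(P))
```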

Bonus: Existence of a Stationary Measure

To complete our probabilistic proof of the Markov chain convergence theorem, we must establish the existence of a stationary measure. We do this now.

Fix any state i \in \{1,\ldots,m\}. Imagine starting the chain at i and running it until it reaches i again. Let a_j be the expected number of times the chain hits j in such a process, and set a_i \coloneqq 1. Because the chain is primitive, all of the a_j's are well-defined, positive, and finite. Our claim will be that

    \[\pi_j = \frac{a_j}{\sum_{k=1}^m a_k} \quad \text{for } j = 1,\ldots,m\]

is a stationary distribution for the chain. To prove this, it is sufficient to show that

(3)   \[a^\top P = a^\top. \]



Let us prove this. Let x_0 = i,x_1,x_2,\ldots denote the values of the chain and n_{\rm ret} denote the time at which the chain returns to i. By linearity of expectation, the expected number of hits a_j is the sum over all times n of the probability that the chain is at j at time n and has not yet returned to i. That is,

    \[a_j = \sum_{n=1}^\infty \prob\{x_n = j, n_{\rm ret} > n\}.\]

Break this sum into two pieces

    \[a_j = \prob\{x_1 = j\} + \sum_{n=2}^\infty \prob\{x_n = j, n_{\rm ret} > n\}.\]

The first term is just the transition probability P_{ij}. For the second term, break into cases depending on the value of the chain at time n-1:

    \begin{align*}\prob\{x_n = j, n_{\rm ret} > n\} &= \sum_{k\ne i} \prob\{x_{n-1} = k, x_n = j, n_{\rm ret} > n-1 \} \\&= \sum_{k\ne i} \prob\{x_{n-1} = k, n_{\rm ret} > n-1 \} \prob\{x_n = j \mid x_{n-1} = k\} \\&= \sum_{k\ne i} \prob\{x_{n-1} = k, n_{\rm ret} > n-1 \} P_{kj}.\end{align*}

Combining these two terms, we get

    \[a_j = P_{ij} + \sum_{n=2}^\infty \sum_{k\ne i} \prob\{x_{n-1} = k, n_{\rm ret} > n-1 \} P_{kj}.\]

Relabel the outer sum to go from n=1 to \infty and exchange the order of summation to obtain

    \[a_j = P_{ij} + \sum_{k\ne i} \left(\sum_{n=1}^\infty \prob\{x_n = k, n_{\rm ret} > n \}\right) P_{kj}.\]

Recognize the term in the parentheses as a_k. Thus, since a_i = 1, we have

    \[a_j = a_iP_{ij} + \sum_{k\ne i} a_k P_{kj} = \sum_{k=1}^m a_k P_{kj},\]

which is exactly the claim (3) we wanted to show.
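
If you want to see this construction in action, the following Python sketch (my own illustration; the transition matrix is an arbitrary choice) estimates the expected visit counts a_j by simulating excursions from the state i and then checks, up to Monte Carlo error, that a^\top P = a^\top, so that a/\sum_j a_j is (approximately) a stationary distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary 3-state transition matrix (rows sum to one)
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.6, 0.2, 0.2]])
m, i = P.shape[0], 0             # fix the reference state i = 0

# Estimate a_j = expected visits to j during an excursion from i back to i
visits = np.zeros(m)
n_excursions = 50_000
for _ in range(n_excursions):
    state = i
    while True:
        state = rng.choice(m, p=P[state])
        visits[state] += 1
        if state == i:
            break
a = visits / n_excursions        # by construction, a[i] comes out exactly 1

print(a)            # Monte Carlo estimate of the expected visit counts
print(a @ P)        # approximately equal to a, confirming a^T P = a^T
print(a / a.sum())  # approximately a stationary distribution
```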

Low-Rank Approximation Toolbox: Analysis of the Randomized SVD

In the previous post, we looked at the randomized SVD. In this post, we continue this discussion by looking at the analysis of the randomized SVD. Our approach is adapted from a new analysis of the randomized SVD by Joel A. Tropp and Robert J. Webber.

There are many types of analysis one can do for the randomized SVD. For instance, letting B be a matrix and X the rank-k output of the randomized SVD, natural questions include:

  • What is the expected error \mathbb{E} \norm{X-B} measured in some norm \norm{\cdot}? What about expected squared error \mathbb{E} \left\|X-B\right\|^2? How do these answers change with different norms?
  • How large can the randomized SVD error \norm{X - B} get except for some small failure probability \delta?
  • How close is the randomized SVD, truncated to rank \rho, to the best rank-\rho approximation to B?
  • How close are the singular values and vectors of the randomized SVD approximation X compared to those of B?

Implicitly and explicitly, the analysis of Tropp and Webber provides satisfactory answers to a number of these questions. In the interest of simplifying the presentation, we shall focus on just one of these questions, proving the following result:

    \[\mathbb{E} \left\|B-X\right\|_{\rm F}^2 \le \left( 1 + \frac{r}{k-(r+1)}\right) \left\|B - \lowrank{B}_r \right\|_{\rm F}^2 \quad \text{for every $r \le k-2$}.\]

Here, \left\|\cdot\right\|_{\rm F} is the Frobenius norm. We encourage the interested reader to check out Tropp and Webber’s paper to see the methodology we summarize here used to prove numerous facts about the randomized SVD and its extensions using subspace iteration and block Krylov iteration.

Projection Formula

Let us recall the randomized SVD as we presented it in the last post:

  1. Collect information. Form Y = B\Omega where \Omega is an appropriately chosen n\times k random matrix.
  2. Orthogonalize. Compute an orthonormal basis Q for the column space of Y by, e.g., thin QR factorization.
  3. Project. Form C \coloneqq Q^*B, where {}^* denotes the conjugate transpose.

The randomized SVD is the approximation

    \[B\approx X \coloneqq QC.\]

It is easy to upgrade X = QC into a compact SVD form X = U\Sigma V^*, as we did in steps 4 and 5 of the previous post.

For the analysis of the randomized SVD, it is helpful to notice that the approximation X takes the form

    \[X = QC = QQ^*B = \Pi_{B\Omega} B.\]

Here, \Pi_{B\Omega} denotes the orthogonal projection onto the column space of the matrix B\Omega. We call X = \Pi_{B\Omega} B the projection formula for the randomized SVD.1This formula bears a good deal of similarity to the projection formula for the Nyström approximation. This is not a coincidence.
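
As a quick numerical sanity check of the projection formula (my own sketch, with arbitrary dimensions and a Gaussian test matrix), one can form the projector \Pi_{B\Omega} explicitly via a pseudoinverse and confirm that it reproduces X = QQ^*B:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 50, 40, 10

# A test matrix with decaying singular values and a Gaussian test matrix Omega
B = rng.standard_normal((m, n)) @ np.diag(0.5 ** np.arange(n)) @ rng.standard_normal((n, n))
Omega = rng.standard_normal((n, k))

Y = B @ Omega
Q, _ = np.linalg.qr(Y)           # orthonormal basis for the column space of B Omega
X = Q @ (Q.conj().T @ B)         # randomized SVD approximation X = QC

# Orthogonal projector onto the column space of B Omega, formed via a pseudoinverse
Pi = Y @ np.linalg.pinv(Y)
print(np.allclose(X, Pi @ B))    # True: X equals the projection formula
```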

Aside: Quadratic Unitarily Invariant Norms

To state our error bounds in the greatest generality, we adopt the language of quadratic unitarily invariant norms. Recall that a square matrix U is unitary if

    \[U^*U = UU^* = I.\]

You may recall from my post on Nyström approximation that a norm \left\|\cdot\right\|_{\rm UI} on matrices is unitarily invariant if

    \[\left\|UBV\right\|_{\rm UI} = \left\|B\right\|_{\rm UI} \quad \text{for all unitary matrices $U$, $V$ and any matrix $B$}.\]

A unitarily invariant norm \left\|\cdot\right\|_{\rm Q} is said to be quadratic if there exists another unitarily invariant norm \left\|\cdot\right\|_{\rm UI} such that

(1)   \[\left\|B\right\|_{\rm Q}^2 = \left\|B^*B\right\|_{\rm UI} = \left\|BB^*\right\|_{\rm UI} \quad \text{for every matrix $B$.} \]

Many examples of quadratic unitarily invariant norms are found among the Schatten p-norms, defined as

    \[\left\|B\right\|_{S_p}^p \coloneqq \sum_i \sigma_i(B)^p.\]

Here, \sigma_1(B) \ge \sigma_2(B) \ge \cdots denote the decreasingly ordered singular values of B. The Schatten 2-norm \left\|\cdot\right\|_{S_2} is the Frobenius norm of a matrix. The spectral norm (i.e., operator 2-norm) is the Schatten \infty-norm, defined to be

    \[\norm{B} = \left\|B\right\|_{S_\infty} \coloneqq \max_i \sigma_i(B) = \sigma_1(B).\]

The Schatten p-norms are unitarily invariant norms for every 1\le p \le \infty. However, the Schatten p-norms are quadratic unitarily invariant norms only for 2\le p \le \infty since

    \[\left\|B\right\|_{S_p}^2 = \left\|B^*B\right\|_{S_{p/2}}.\]

For the remainder of this post, we let \left\|\cdot\right\|_{\rm Q} and \left\|\cdot\right\|_{\rm UI} be a quadratic unitarily invariant norm pair satisfying (1).
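
If it helps to see these norm identities in numbers, here is a short sketch of my own that computes Schatten p-norms from singular values and checks the relation \left\|B\right\|_{S_p}^2 = \left\|B^*B\right\|_{S_{p/2}} for a random real matrix:

```python
import numpy as np

def schatten_norm(B, p):
    """Schatten p-norm: the l^p norm of the vector of singular values."""
    s = np.linalg.svd(B, compute_uv=False)
    return np.max(s) if np.isinf(p) else np.sum(s ** p) ** (1 / p)

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 4))

# Check the quadratic relation ||B||_{S_p}^2 = ||B^* B||_{S_{p/2}} for p >= 2
for p in [2, 3, 4, np.inf]:
    lhs = schatten_norm(B, p) ** 2
    rhs = schatten_norm(B.T @ B, p / 2)
    print(p, np.isclose(lhs, rhs))
```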

Error Decomposition

The starting point of our analysis is the following decomposition of the error of the randomized SVD:

(2)   \[\left\|B - X\right\|_{\rm Q}^2 \le \left\|B - \lowrank{B}_r\right\|_{\rm Q}^2 + \left\|\lowrank{B}_r - \Pi_{B\Omega} \lowrank{B}_r\right\|_{\rm Q}^2. \]

Recall that we have defined \lowrank{B}_r to be an optimal rank-r approximation to B:

    \[\left\|B - \lowrank{B}_r\right\|_{\rm Q} \le \left\|B - C\right\|_{\rm Q} \quad \text{for any rank-$r$ matrix $C$}.\]

Thus, the error decomposition says that the excess error is bounded by

    \[\left\|B - X\right\|_{\rm Q}^2 - \left\|B - \lowrank{B}_r\right\|_{\rm Q}^2 \le \left\|\lowrank{B}_r - \Pi_{B\Omega} \lowrank{B}_r\right\|_{\rm Q}^2.\]

Using the projection formula, we can prove the error decomposition (2) directly:

    \begin{align*}\left\|B - X\right\|_{\rm Q}^2 &= \left\|B - \Pi_{B\Omega} B \right\|_{\rm Q}^2\\&= \norm{(I-\Pi_{B\Omega})BB^*(I-\Pi_{B\Omega})}_{\rm UI}  \\&\le \norm{(I-\Pi_{B\Omega})(BB^* - \lowrank{B}_r\lowrank{B}_r^*)(I-\Pi_{B\Omega})}_{\rm UI} \\&\qquad + \norm{(I-\Pi_{B\Omega})\lowrank{B}_r\lowrank{B}_r^*(I-\Pi_{B\Omega})}_{\rm UI}\\&\le \left\|BB^* - \lowrank{B}_r\lowrank{B}_r^*\right\|_{\rm UI} + \left\|\lowrank{B}_r - \Pi_{B\Omega} \lowrank{B}_r\right\|_{\rm Q}^2 \\&\le \left\|B - \lowrank{B}_r \right\|_{\rm Q}^2 + \left\|\lowrank{B}_r- \Pi_{B\Omega} \lowrank{B}_r\right\|_{\rm Q}^2.\end{align*}

The first line is the projection formula, the second is the relation between \norm{\cdot}_{\rm Q} and \norm{\cdot}_{\rm UI}, and the third is the triangle inequality. For the fifth line, we use the fact that I - \Pi_{B\Omega} is an orthoprojector and thus has spectral norm \norm{I - \Pi_{B\Omega}} \le 1, together with the fact that

    \[\left\|BCD\right\|_{\rm UI} \le \norm{B} \cdot \left\|C\right\|_{\rm UI}\cdot \norm{D} \quad \text{for all matrices $B,C,D$.}\]

For the final inequality, we used the commutation rule

    \[(B - \lowrank{B}_r)(B - \lowrank{B}_r)^* = BB^* - \lowrank{B}_r\lowrank{B}_r^*.\]

Bounding the Error

In this section, we shall continue to refine the error decomposition (2). Consider a thin SVD of B, partitioned as

    \[B = \onebytwo{U_1}{U_2} \twobytwo{\Sigma_1}{0}{0}{\Sigma_2}\onebytwo{V_1}{V_2}^*\]

where

  • \onebytwo{U_1}{U_2} and \onebytwo{V_1}{V_2} both have orthonormal columns,
  • \Sigma_1 and \Sigma_2 are square diagonal matrices with nonnegative entries,
  • U_1 and V_1 have r columns, and
  • \Sigma_1 is an r\times r matrix.

Under this notation, the best rank-r approximation is \lowrank{B}_r = U_1\Sigma_1V_1^*. Define

    \[\twobyone{\Omega_1}{\Omega_2} \coloneqq \twobyone{V_1^*}{V_2^*} \Omega.\]

We assume throughout that \Omega_1 is full-rank (i.e., \rank \Omega_1 = r).

Applying the error decomposition (2), we get

(3)   \begin{align*}\left\|B - X\right\|_{\rm Q}^2&\le \norm{ \onebytwo{U_1}{U_2} \twobytwo{0}{0}{0}{\Sigma_2}\onebytwo{V_1}{V_2}^*}_{\rm Q}^2 + \norm{(I-\Pi_{B\Omega})U_1\Sigma_1V_1^*}_{\rm Q}^2 \\&= \left\| \Sigma_2 \right\|_{\rm Q}^2 + \norm{(I-\Pi_{B\Omega})U_1\Sigma_1}_{\rm Q}^2. \end{align*}

For the second line, we used the unitary invariance of \left\|\cdot\right\|_{\rm Q}. Observe that the column space of B\Omega\Omega_1^\dagger is a subset of the column space of B\Omega, so

(4)   \[\norm{(I-\Pi_{B\Omega})U_1\Sigma_1}_{\rm Q}^2 \le \norm{(I-\Pi_{B\Omega\Omega_1^\dagger})U_1\Sigma_1}_{\rm Q}^2 = \norm{\Sigma_1U_1^*(I-\Pi_{B\Omega\Omega_1^\dagger})U_1\Sigma_1}_{\rm UI}. \]

Let’s work on simplifying the expression

    \[\Sigma_1U_1^*(I-\Pi_{B\Omega\Omega_1^\dagger})U_1\Sigma_1.\]

First, observe that

    \[B\Omega\Omega_1^\dagger = B\onebytwo{V_1}{V_2}\twobyone{\Omega_1}{\Omega_2}\Omega_1^\dagger = \onebytwo{U_1}{U_2} \twobytwo{\Sigma_1}{0}{0}{\Sigma_2} \twobyone{I}{\Omega_2\Omega_1^\dagger} = U_1\Sigma_1 + U_2\Sigma_2\Omega_2\Omega_1^\dagger.\]

Thus, the projector \Pi_{B\Omega\Omega_1^\dagger} takes the form

    \begin{align*}\Pi_{B\Omega\Omega_1^\dagger} &= (B\Omega\Omega_1^\dagger) \left[(B\Omega\Omega_1^\dagger)^*(B\Omega\Omega_1^\dagger)\right]^\dagger (B\Omega\Omega_1^\dagger)^* \\&= (U_1\Sigma_1 + U_2\Sigma_2\Omega_2\Omega_1^\dagger) \left[\Sigma_1^2 + (\Sigma_2\Omega_2\Omega_1^\dagger)^*(\Sigma_2\Omega_2\Omega_1^\dagger)\right]^\dagger (U_1\Sigma_1 + U_2\Sigma_2\Omega_2\Omega_1^\dagger)^*.\end{align*}

Here, {}^\dagger denotes the Moore–Penrose pseudoinverse, which reduces to the ordinary matrix inverse for a square nonsingular matrix. Thus,

(5)   \[\Sigma_1U_1^*(I-\Pi_{B\Omega\Omega_1^\dagger})U_1\Sigma_1 = \Sigma_1^2 - \Sigma_1^2 \left[\Sigma_1^2 + (\Sigma_2\Omega_2\Omega_1^\dagger)^*(\Sigma_2\Omega_2\Omega_1^\dagger)\right]^\dagger\Sigma_1^2. \]

Remarkably, this seemingly convoluted combination of matrices is actually a well-studied operation in matrix theory: the parallel sum. The parallel sum of positive semidefinite matrices A and H is defined as

(6)   \[A : H \coloneqq A - A(A+H)^\dagger A. \]

We will have much more to say about the parallel sum soon. For now, we use this notation to rewrite (5) as

    \[\Sigma_1U_1^*(I-\Pi_{B\Omega\Omega_1^\dagger})U_1\Sigma_1 = \Sigma_1^2 : [(\Sigma_2\Omega_2\Omega_1^\dagger)^*(\Sigma_2\Omega_2\Omega_1^\dagger)].\]

Plugging this back into (4) and then (3), we obtain the error bound

(7)   \[\left\|B - X\right\|_{\rm Q}^2\le \left\| \Sigma_2 \right\|_{\rm Q}^2 + \left\|\Sigma_1^2 : [(\Sigma_2\Omega_2\Omega_1^\dagger)^*(\Sigma_2\Omega_2\Omega_1^\dagger)]\right\|_{\rm UI}. \]

Simplifying this expression further will require more knowledge about parallel sums, which we shall discuss now.

Aside: Parallel Sums

Let us put aside the randomized SVD for a moment and digress to the seemingly unrelated topic of electrical circuits. Suppose we have a battery of voltage v and we connect the ends using a wire of resistance a. The current \mathrm{curr}_a is given by Ohm’s law

    \[v = a \cdot \mathrm{curr}_a.\]

Similarly, if the wire is replaced by a different wire with resistance h, the current \mathrm{curr}_h is then

    \[v = h \cdot \mathrm{curr}_h.\]

Now comes the interesting question. What if we connect the battery by both wires (resistances a and h) simultaneously in parallel, so that current can flow through either wire? Since wires of resistances a and h still have voltage v, Ohm’s law still applies and thus the total current is

    \[\mathrm{curr}_{\rm total} = \mathrm{curr}_a + \mathrm{curr}_h = \frac{v}{a} + \frac{v}{h} = v \left(a^{-1}+h^{-1}\right).\]

The net effect of connecting the two wires of resistances a and h in parallel is the same as a single wire of resistance

(8)   \[a:h \coloneqq \frac{v}{\mathrm{curr}_{\rm total}} = \left( a^{-1}+h^{-1}\right)^{-1}. \]

We call the operation a \mathbin{:} h the parallel sum of a and h because it describes how resistances combine when connected in parallel.

The parallel sum operation : can be extended to all nonnegative numbers a and h by continuity:

    \[a:h \coloneqq \lim_{b\downarrow a, \: k\downarrow h} b:k = \begin{cases}\left( a^{-1}+h^{-1}\right)^{-1}, & a,h > 0 ,\\0, & \textrm{otherwise}.\end{cases}\]

Electrically, this definition states that one obtains a short circuit (resistance 0) if either of the wires carries zero resistance.

Parallel sums of matrices were introduced by William N. Anderson, Jr. and Richard J. Duffin as a generalization of the parallel sum operation from electrical circuits. There are several equivalent definitions. The natural definition (8) applies for positive definite matrices

    \[A:H = (A^{-1}+H^{-1})^{-1} \quad \text{for $A,H$ positive definite.}\]

This definition can be extended to positive semidefinite matrices by continuity. It can be useful to have an explicit definition which applies even to positive semidefinite matrices. To do so, observe that the (scalar) parallel sum satisfies the identity

    \[a:h = \frac{1}{a^{-1}+h^{-1}} = \frac{ah}{a+h} = \frac{a(a+h)-a^2}{a+h} = a - a(a+h)^{-1}a.\]

To extend to matrices, we capitalize the letters and use the Moore–Penrose pseudoinverse in place of the inverse:

    \[A : H = A - A(A+H)^\dagger A.\]

This is the definition (6) of the parallel sum we gave above, which arose naturally in our randomized SVD error bound.
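
A short numerical check of this definition (my own sketch; the matrices are random positive definite test cases) confirms that A - A(A+H)^\dagger A agrees with (A^{-1}+H^{-1})^{-1} for positive definite inputs and that the parallel sum is dominated by A in the Loewner order:

```python
import numpy as np

def parallel_sum(A, H):
    """Parallel sum A : H = A - A (A + H)^+ A of two psd matrices."""
    return A - A @ np.linalg.pinv(A + H) @ A

rng = np.random.default_rng(0)
M1 = rng.standard_normal((5, 5))
M2 = rng.standard_normal((5, 5))
A = M1 @ M1.T + np.eye(5)        # a random positive definite matrix
H = M2 @ M2.T + np.eye(5)        # another one

# For positive definite A and H, the parallel sum equals (A^{-1} + H^{-1})^{-1}
lhs = parallel_sum(A, H)
rhs = np.linalg.inv(np.linalg.inv(A) + np.linalg.inv(H))
print(np.allclose(lhs, rhs))

# Property 3: A : H is dominated by A in the Loewner order
print(np.all(np.linalg.eigvalsh(A - lhs) >= -1e-10))
```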

The parallel sum enjoys a number of beautiful properties, some of which we list here:

  1. Symmetry. A:H = H:A.
  2. Simplified formula. A:H = (A^{-1}+H^{-1})^{-1} for positive definite matrices.
  3. Bounds. A:H \preceq A and A:H \preceq H.
  4. Monotone. A\mapsto (A:H) is monotone in the Loewner order.
  5. Concavity. The map (A,H) \mapsto A:H is (jointly) concave (with respect to the Loewner order).
  6. Conjugation. For any square C, C^*(A:H)C = (C^*AC):(C^*HC).
  7. Trace. \tr(A:H) \le (\tr A):(\tr H).2In fact, this property holds with the trace replaced by any positive linear map. That is, if \Phi is a linear map from square matrices to square matrices satisfying \Phi(A) \succeq 0 if A\succeq 0, then \Phi(A:H) \preceq \Phi(A):\Phi(H).
  8. Unitarily invariant norms. \left\|A:H\right\|_{\rm UI} \le \left\|A\right\|_{\rm UI} : \left\|H\right\|_{\rm UI}.

For completionists, we include a proof of the last property for reference.

Proof of Property 8
Let \left\|\cdot\right\|_{\rm UI}' be the unitarily invariant norm dual to \left\|\cdot\right\|_{\rm UI}.3Dual with respect to the Frobenius inner product, that is. By duality, we have

    \begin{align*}\left\|A:H\right\|_{\rm UI} &= \max_{M\succeq 0,\: \left\|M\right\|_{\rm UI}'\le 1} \tr((A:H)M) \\&= \max_{M\succeq 0,\: \left\|M\right\|_{\rm UI}'\le 1} \tr(M^{1/2}(A:H)M^{1/2})\\&= \max_{M\succeq 0,\: \left\|M\right\|_{\rm UI}'\le 1} \tr((M^{1/2}AM^{1/2}):(M^{1/2}HM^{1/2}))  \\&\le \max_{M\succeq 0,\: \left\|M\right\|_{\rm UI}'\le 1} \left[\tr(M^{1/2}AM^{1/2}):\tr(M^{1/2}HM^{1/2}) \right] \\&\le \left[\max_{M\succeq 0,\: \left\|M\right\|_{\rm UI}'\le 1} \tr(M^{1/2}AM^{1/2})\right]:\left[\max_{M\succeq 0,\: \left\|M\right\|_{\rm UI}'\le 1} \tr(M^{1/2}HM^{1/2})\right]\\&= \left\|A\right\|_{\rm UI} : \left\|H\right\|_{\rm UI}.\end{align*}


The first line is duality, the second holds because M\succeq 0, the third line is property 6, the fourth line is property 7, the fifth line is property 4, and the sixth line is duality.

Nearing the Finish: Enter the Randomness

Equipped with knowledge about parallel sums, we are now prepared to return to bounding the error of the randomized SVD. Apply property 8 of the parallel sum to the randomized SVD bound (7) to obtain

    \begin{align*}\left\|B - X\right\|_{\rm Q}^2&\le \left\| \Sigma_2 \right\|_{\rm Q}^2 + \left\|\Sigma_1^2 : [(\Sigma_2\Omega_2\Omega_1^\dagger)^*(\Sigma_2\Omega_2\Omega_1^\dagger)]\right\|_{\rm UI} \\&\le \left\| \Sigma_2 \right\|_{\rm Q}^2 + \left\|\Sigma_1^2\right\|_{\rm UI} : \left\|[(\Sigma_2\Omega_2\Omega_1^\dagger)^*(\Sigma_2\Omega_2\Omega_1^\dagger)]\right\|_{\rm UI} \\&= \left\| \Sigma_2 \right\|_{\rm Q}^2 + \left\|\Sigma_1\right\|_{\rm Q}^2 : \left\|\Sigma_2\Omega_2\Omega_1^\dagger\right\|^2_{\rm Q}.\end{align*}

This bound is totally deterministic: It holds for any choice of test matrix \Omega, random or otherwise. We can use this bound to analyze the randomized SVD in many different ways for different kinds of random (or non-random) matrices \Omega. Tropp and Webber provide a number of examples instantiating this general bound in different ways.

We shall present only the simplest error bound for the randomized SVD. We make the following assumptions:

  • The test matrix \Omega is standard Gaussian.4By which, we mean the entries of \Omega are independent and standard normal.
  • The norm \left\|\cdot\right\|_{\rm Q} is the Frobenius norm \left\|\cdot\right\|_{\rm F}.
  • The randomized SVD rank k is \ge r+2 (equivalently, r \le k-2).

Under these assumptions and using the property a : h \le h, we obtain the following bound on the expected error of the randomized SVD

(9)   \[\mathbb{E} \left\|B - X\right\|_{\rm F}^2\le \left\| B - \lowrank{B}_r \right\|_{\rm F}^2 + \mathbb{E}\left\|\Sigma_2\Omega_2\Omega_1^\dagger\right\|^2_{\rm F}. \]

All that is left is to compute the expectation of \left\|\Sigma_2\Omega_2\Omega_1^\dagger\right\|_{\rm F}^2. Remarkably, this can be done in closed form.

Aside: Expectation of Gaussian–Inverse Gaussian Matrix Product

The final ingredient in our randomized SVD analysis is the following computation involving Gaussian random matrices. Let G and \Gamma be independent standard Gaussian random matrices with \Gamma of size r\times k, k\ge r+2. Then

    \[\mathbb{E} \left\|SG\Gamma^\dagger R\right\|_{\rm F}^2 = \frac{1}{k-(r+1)}\left\|S\right\|_{\rm F}^2\left\|R\right\|_{\rm F}^2.\]

For the probability lovers, we will now go on to prove this. For those who are less probabilistically inclined, this result can be treated as a black box.

Proof of Random Matrix Formula
To prove this, first consider a simpler case where \Gamma^\dagger does not appear:

    \[\mathbb{E} \left\|SGW\right\|_{\rm F}^2 = \mathbb{E} \sum_{ij} \left(\sum_{k\ell} s_{ik}g_{k\ell}w_{\ell j} \right)^2 = \sum_{ijk\ell} s_{ik}^2 w_{\ell j}^2 \,\mathbb{E}\left[g_{k\ell}^2\right] = \left\|S\right\|_{\rm F}^2\left\|W\right\|_{\rm F}^2.\]



Here, we used the fact that the entries g_{k\ell} are independent, mean-zero, and variance-one. Thus, applying this result conditionally on \Gamma with W = \Gamma^\dagger R, we get

(10)   \[\mathbb{E} \left\|SG\Gamma^\dagger R\right\|_{\rm F}^2 =\mathbb{E}\left[ \mathbb{E}\left[ \left\|SG\Gamma^\dagger R\right\|_{\rm F}^2 \,\middle|\, \Gamma\right]\right] =\left\|S\right\|_{\rm F}^2\cdot \mathbb{E} \left\|\Gamma^\dagger R\right\|_{\rm F}^2. \]



To compute \mathbb{E} \left\|\Gamma^\dagger R\right\|_{\rm F}^2, we rewrite using the trace

    \[\mathbb{E} \left\|\Gamma^\dagger R\right\|_{\rm F}^2 = \mathbb{E} \tr \left(\Gamma^\dagger RR^* \Gamma^{*\dagger} \right)= \tr \left(RR^* \mathbb{E}[\Gamma^{*\dagger}\Gamma^\dagger] \right) = \tr \left(RR^* \mathbb{E}[(\Gamma\Gamma^*)^{-1}] \right).\]



The matrix (\Gamma\Gamma^*)^{-1} is known as an inverse-Wishart matrix and is a well-studied random matrix model. In particular, its expectation is known to be (k-(r+1))^{-1}I. Thus, we obtain

    \[\mathbb{E} \left\|\Gamma^\dagger R\right\|_{\rm F}^2 = \tr \left(RR^* \mathbb{E}[(\Gamma\Gamma^*)^{-1}] \right) = \frac{\tr \left(RR^* \right)}{k-(r+1)} = \frac{\left\|R\right\|_{\rm F}^2}{k-(r+1)}.\]



Plugging into (10), we obtain the desired conclusion

    \[\mathbb{E} \left\|SG\Gamma^\dagger R\right\|_{\rm F}^2 =\left\|S\right\|_{\rm F}^2\cdot \mathbb{E} \left\|\Gamma^\dagger R\right\|_{\rm F}^2 = \frac{1}{k-(r+1)}\left\|S\right\|_{\rm F}^2\left\|R\right\|_{\rm F}^2.\]



This completes the proof.
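
For readers who like to double-check such formulas empirically, the following Monte Carlo sketch (my own; the sizes of S and R and the number of trials are arbitrary choices) compares a sample average of \left\|SG\Gamma^\dagger R\right\|_{\rm F}^2 against \left\|S\right\|_{\rm F}^2\left\|R\right\|_{\rm F}^2/(k-(r+1)). The two printed numbers should agree up to a few percent of sampling error.

```python
import numpy as np

rng = np.random.default_rng(0)
r, k = 4, 9                      # the formula requires k >= r + 2
S = rng.standard_normal((7, 6))  # fixed matrices of compatible (arbitrary) sizes
R = rng.standard_normal((r, 5))

trials = 20_000
total = 0.0
for _ in range(trials):
    G = rng.standard_normal((6, k))      # standard Gaussian, independent of Gamma
    Gamma = rng.standard_normal((r, k))  # standard Gaussian of size r x k
    total += np.linalg.norm(S @ G @ np.linalg.pinv(Gamma) @ R, "fro") ** 2

print(total / trials)            # Monte Carlo estimate of the expectation
print(np.linalg.norm(S, "fro") ** 2 * np.linalg.norm(R, "fro") ** 2 / (k - (r + 1)))
```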

Wrapping it All Up

We’re now within striking distance. All that is left is to assemble the pieces we have been working with to obtain a final bound for the randomized SVD.

Recall we have chosen the random matrix \Omega to be standard Gaussian. By the rotational invariance of the Gaussian distribution, \Omega_1 and \Omega_2 are independent and standard Gaussian as well. Plugging the matrix expectation formula into (9) then completes the analysis

    \begin{align*}\mathbb{E} \left\|B - X\right\|_{\rm F}^2&\le \left\| B - \lowrank{B}_r \right\|_{\rm F}^2 + \mathbb{E}\norm{\Sigma_2\Omega_2\Omega_1^\dagger I_{r\times r}}^2_{\rm F} \\&= \left\| B - \lowrank{B}_r \right\|_{\rm F}^2 + \frac{\left\|\Sigma_2\right\|_{\rm F}^2\norm{I_{r\times r}}_{\rm F}^2}{k-(r+1)} \\&= \left(1 + \frac{r}{k-(r+1)}\right) \left\| B - \lowrank{B}_r \right\|_{\rm F}^2.\end{align*}
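
The bound is easy to test numerically. The experiment below is my own (not from Tropp and Webber’s paper): it draws many Gaussian test matrices for a fixed B with geometrically decaying singular values and compares the empirical mean squared error with the proven bound; the empirical error typically sits well below it.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, r = 60, 12, 10             # note r <= k - 2

# A test matrix with geometrically decaying singular values
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 0.8 ** np.arange(n)
B = U @ np.diag(s) @ V.T

best_r = np.sum(s[r:] ** 2)      # squared Frobenius error of the best rank-r approximation

errors = []
for _ in range(200):
    Omega = rng.standard_normal((n, k))
    Q, _ = np.linalg.qr(B @ Omega)
    X = Q @ (Q.T @ B)
    errors.append(np.linalg.norm(B - X, "fro") ** 2)

print(np.mean(errors))                       # empirical E ||B - X||_F^2
print((1 + r / (k - (r + 1))) * best_r)      # the proven upper bound
```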

Low-Rank Approximation Toolbox: Randomized SVD

The randomized SVD and its relatives are workhorse algorithms for low-rank approximation. In this post, I hope to provide a fairly brief introduction to this useful method. In the following post, we’ll look into its analysis.

The randomized SVD is dead-simple to state: To approximate an m\times n matrix B by a rank-k matrix, perform the following steps:

  1. Collect information. Form Y = B\Omega where \Omega is an appropriately chosen n\times k random matrix.
  2. Orthogonalize. Compute an orthonormal basis Q for the column space of Y by, e.g., thin QR factorization.
  3. Project. Form C \coloneqq Q^*B. (Here, {}^* denotes the conjugate transpose of a complex matrix, which reduces to the ordinary transpose if the matrix is real.)

If all one cares about is a low-rank approximation to the matrix B, one can stop the randomized SVD here, having obtained the approximation

    \[B \approx X \coloneqq Q C.\]

As the name “randomized SVD” suggests, one can easily “upgrade” the approximation X = QC to a compact SVD format:

  1. SVD. Compute a compact SVD of the matrix C: C = \hat{U}\Sigma V^*, where \hat{U} and \Sigma are k\times k and V is n\times k.
  2. Clean up. Set U \coloneqq Q\hat{U}.

We now have approximated B by a rank-k matrix X in compact SVD form:

    \[B \approx X = QC = U\Sigma V^*.\]

One can use the factors U, \Sigma, and V of the approximation X \approx B as estimates of the singular vectors and values of the matrix B.
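
For concreteness, here is a minimal numpy sketch of the five steps above (my own implementation, not reference code; the function name randomized_svd and the test matrix are mine). A production implementation would add oversampling, subspace iteration, or block Krylov iteration, which are discussed below.

```python
import numpy as np

def randomized_svd(B, k, rng=None):
    """A sketch of the randomized SVD: returns U, s, Vh with B ~ U @ diag(s) @ Vh."""
    rng = np.random.default_rng() if rng is None else rng
    n = B.shape[1]
    Omega = rng.standard_normal((n, k))                  # 1. collect information
    Q, _ = np.linalg.qr(B @ Omega)                       # 2. orthogonalize
    C = Q.conj().T @ B                                   # 3. project
    Uhat, s, Vh = np.linalg.svd(C, full_matrices=False)  # 4. SVD of the small matrix C
    return Q @ Uhat, s, Vh                               # 5. clean up

# Example usage on a matrix whose columns (and singular values) decay rapidly
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 200)) * 0.7 ** np.arange(200)
U, s, Vh = randomized_svd(A, k=20, rng=rng)
X = U @ np.diag(s) @ Vh
print(np.linalg.norm(A - X, "fro") / np.linalg.norm(A, "fro"))  # small relative error
```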

What Can You Do with the Randomized SVD?

The randomized SVD has many uses. Pretty much anywhere you would ordinarily use an SVD is a candidate for the randomized SVD. Applications include model reduction, fast direct solvers, computational biology, and uncertainty quantification, among numerous others.

To speak in broad generalities, there are two ways to use the randomized SVD:

  • As an approximation. First, we could use X as a compressed approximation to B. Using the format X = QC, X requires just (m+n)k numbers to store, whereas B requires a much larger mn numbers. As I detailed at length in my blog post on low-rank matrices, many operations are cheap for the low-rank matrix X. For instance, we can compute a matrix–vector product Xz in roughly (m+n)k operations rather than the mn operations to compute Bz. For these use cases, we don’t need the “SVD” part of the randomized SVD.
  • As an approximate SVD. Second, we can actually use the U, \Sigma, and V produced by the randomized SVD as approximations to the singular vectors and values of B. In these applications, we should be careful. Even if X is a good approximation to B, it is not necessarily the case that U, \Sigma, and V are close to the singular vectors and values of B. To use the randomized SVD in this context, it is safest to use posterior diagnostics (such as the ones developed in this paper of mine) to ensure that the singular values/vectors of interest are computed to a high enough accuracy. A useful rule of thumb is that the smallest two to five singular values and vectors computed by the randomized SVD are suspect and should be used in applications only with extreme caution. When the appropriate care is taken and for certain problems, the randomized SVD can provide accurate singular vectors far faster than direct methods.

Accuracy of the Randomized SVD

How accurate is the approximation X produced by the randomized SVD? No rank-k approximation can be more accurate than the best rank-k approximation \lowrank{B}_k to the matrix B, furnished by an exact SVD of B truncated to rank k. Thus, we’re interested in how much larger B - X is than B - \lowrank{B}_k.

Frobenius Norm Error

A representative result describing the error of the randomized SVD is the following:

(1)   \[\mathbb{E} \left\|B - X\right\|_{\rm F}^2 \le \min_{r \le k-2} \left( 1 + \frac{r}{k-(r+1)} \right) \left\|B - \lowrank{B}_r\right\|^2_{\rm F}. \]

This result states that the average squared Frobenius-norm error of the randomized SVD is comparable with the best rank-r approximation error for any r\le k-2. In particular, choosing k = r+2, we see that the randomized SVD error is at most (1+r) times the best rank-r approximation error

(2)   \[\mathbb{E} \left\|B - X\right\|_{\rm F}^2 \le (1+r) \left\|B - \lowrank{B}_r \right\|^2_{\rm F} \quad \text{for $k=r+2$}. \]

Choosing k = 2r+1, we see that the randomized SVD has at most twice the error of the best rank-r approximation

(3)   \[\mathbb{E} \left\|B - X\right\|_{\rm F}^2 \le 2 \left\|B - \lowrank{B}_r\right\|^2_{\rm F} \quad \text{for $k=2r+1$}. \]

In effect, these results tell us that if we want an approximation that is nearly as good as the best rank-r approximation, using an approximation rank of, say, k = r+2 or k = 2r for the randomized SVD suffices. These results hold even for worst-case matrices. For nice matrices with steadily decaying singular values, the randomized SVD can perform even better than equations (2)–(3) would suggest.

Spectral Norm Error

The spectral norm is often a better error metric for matrix computations than the Frobenius norm. Is the randomized SVD also comparable with the best rank-k approximation when we use the spectral norm? In this setting, a representative error bound is

(4)   \[\mathbb{E} \left\|B - X\right\|^2 \le \min_{r \le k-2} \left( 1 + \frac{2r}{k-(r+1)} \right) \left(\left\|B - \lowrank{B}_r \right\|^2 + \frac{\mathrm{e}^2}{k-r} \left\|B - \lowrank{B}_r\right\|^2_{\rm F} \right). \]

The spectral norm error of the randomized SVD depends on the Frobenius norm error of the best rank-r approximation for r \le k-2.

Recall that the spectral and Frobenius norm can be defined in terms of the singular values of the matrix B:

    \[\norm{B} = \sigma_1(B),\quad \left\|B\right\|_{\rm F}^2 = \sum_i \sigma_i(B)^2.\]

Rewriting (4) using these formulas, we get

    \[\mathbb{E} \left\|B - X\right\|^2 \le \min_{r \le k-2} \left( 1 + \frac{2r}{k-(r+1)} \right) \left(\sigma_{r+1}^2 + \frac{\mathrm{e}^2}{k-r} \sum_{i>r} \sigma_i^2 \right).\]

Although we are only interested in the largest singular value of the error matrix B-X, this bound demonstrates that the randomized SVD incurs errors based on the entire tail of B's singular values \sum_{i>r}\sigma_i^2. The randomized SVD can be much worse than the best rank-k approximation for a matrix B with a long tail of slowly decaying singular values.

Improving the Approximation: Subspace Iteration and Block Krylov Iteration

We saw that the randomized SVD produces nearly optimal low-rank approximations when we measure using the Frobenius norm. When we measure using the spectral norm, we have a split decision: If the singular values decay rapidly, the randomized SVD is near-optimal; if the singular values decay more slowly, the randomized SVD can suffer from large errors.

Fortunately, there are two fixes to improve the randomized SVD in the presence of slowly decaying singular values:

  • Subspace iteration: To improve the quality of the randomized SVD, we can combine it with the famous power method for eigenvalue computations. Instead of Y = B\Omega, we use the powered expression Y = B(B^*B)^q\Omega. Powering has the effect of amplifying the large singular values of B and diminishing the influence of the tail.
  • Block Krylov iteration: Subspace iteration is effective, but fairly wasteful. To compute B(B^*B)^q\Omega we compute the block Krylov sequence

        \[B\Omega, BB^*B\Omega, BB^*BB^*B\Omega,\ldots,B(B^*B)^q\Omega\]

    and throw away all but the last term. To make more economical use of resources, we can use the entire sequence as our information matrix Y:

        \[Y = \begin{bmatrix} B\Omega & BB^*B\Omega & BB^*BB^*B\Omega& \cdots & B(B^*B)^q\Omega\end{bmatrix}.\]

    Storing the entire block Krylov sequence is more memory-intensive but makes more efficient use of matrix products than subspace iteration.

Both subspace iteration and block Krylov iteration should be carefully implemented to produce accurate results.

Both subspace iteration and block Krylov iteration diminish the effect of a slowly decaying tail of singular values on the accuracy of the randomized SVD. The more slowly the tail decays, the larger one must take the number of iterations q to obtain accurate results.
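
To make the preceding discussion concrete, here is a minimal Python sketch of randomized subspace iteration (my own illustration; the function name subspace_iteration_svd, the choice to re-orthogonalize after every product, and the test matrix are mine). Setting q = 0 recovers the plain randomized SVD, and increasing q typically improves the accuracy of the leading singular values when the singular values decay slowly.

```python
import numpy as np

def subspace_iteration_svd(B, k, q=2, rng=None):
    """Randomized SVD with q steps of subspace iteration (a sketch).

    Re-orthogonalizing after every application of B or B^* is one standard way
    to keep the iteration numerically stable, as cautioned above.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = B.shape[1]
    Q, _ = np.linalg.qr(B @ rng.standard_normal((n, k)))
    for _ in range(q):
        W, _ = np.linalg.qr(B.conj().T @ Q)   # orthogonalize after applying B^*
        Q, _ = np.linalg.qr(B @ W)            # ... and again after applying B
    C = Q.conj().T @ B
    Uhat, s, Vh = np.linalg.svd(C, full_matrices=False)
    return Q @ Uhat, s, Vh

# A test matrix with slowly decaying singular values
rng = np.random.default_rng(0)
n = 300
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 1.0 / np.arange(1, n + 1)
B = U @ np.diag(s) @ V.T

for q in [0, 1, 3]:
    _, s_est, _ = subspace_iteration_svd(B, k=20, q=q, rng=rng)
    print(q, abs(s_est[0] - s[0]))   # error in the leading singular value
```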