Note to Self: Hanson–Wright and Trace Estimation with Random Vectors on the Sphere

Tyler Chen has just released an excellent monograph on the Lanczos method, one of the most powerful algorithms for extracting information from a symmetric matrix A. I would definitely recommend checking it out.

One of the subjects touched on by this book is stochastic trace estimation, a topic I’ve written about several times before on this blog. In his book, Tyler uses stochastic trace estimates based on random vectors on the sphere (a great choice), and he provides a simple, but often pessimistic, mathematical analysis of this approach. To complement Tyler’s amazing book, I felt it was a good opportunity to share a sharper analysis that I came up with using a more sophisticated argument. I would not be surprised if this argument appears elsewhere in the literature, but I am not aware of a source.

Let x,x_1,\ldots,x_m be random vectors in \real^n drawn uniformly at random from the sphere of radius \sqrt{n}. (Informally, the uniform distribution on the sphere assigns to a set U a probability equal to the ratio of the (hyper-)surface area of U to the (hyper-)surface area of the full sphere. Formally, it can be defined in multiple ways: treating the sphere as a subset of \real^n, it is the (n-1)-dimensional Hausdorff measure, normalized so that the sphere has unit measure; alternatively, observing that the sphere is acted upon in a natural way by the group of orthogonal matrices, it is the unique probability measure invariant under this group action.) Let A be a real symmetric matrix, and define the quadratic form

    \[q \coloneqq x^\top Ax\]

and the Girard–Hutchinson trace estimator

    \[\hat{\tr}\coloneqq \frac{1}{m}\sum_{i=1}^m x_i^\top Ax_i.\]

Both q and \hat{\tr} are unbiased estimates for the trace of A, \expect[q] = \expect[\hat{\tr}] = \tr A. The goal of this post will be to bound the probability of these quantities being much smaller or larger than the trace of A.
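To make the setup concrete, here is a minimal NumPy sketch (my own illustration; the function names are not from any library) that draws vectors uniformly from the sphere of radius \sqrt{n} by normalizing Gaussian vectors and forms the Girard–Hutchinson estimate \hat{\tr}.

    import numpy as np

    def sample_sphere(n, rng):
        # Uniform draw from the sphere of radius sqrt(n): normalize a Gaussian vector
        g = rng.standard_normal(n)
        return np.sqrt(n) * g / np.linalg.norm(g)

    def girard_hutchinson(A, m, rng):
        # Average of m independent quadratic forms x_i^T A x_i
        n = A.shape[0]
        estimates = np.empty(m)
        for i in range(m):
            x = sample_sphere(n, rng)
            estimates[i] = x @ A @ x
        return estimates.mean()

    rng = np.random.default_rng(0)
    n = 100
    A = rng.standard_normal((n, n))
    A = (A + A.T) / 2                     # a random symmetric test matrix
    print(np.trace(A), girard_hutchinson(A, m=1000, rng=rng))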

Hanson–Wright on the Sphere

Observe that \hat{\tr} is an average of m independent copies of the random variable q. Therefore, we will begin our analysis with the quadratic form q and discuss the Girard–Hutchinson estimate \hat{\tr} at the end of the post.

Centering

Begin by introducing the centered matrix

    \[\overline{A} = A - \frac{\tr A}{n}I.\]

The transformation A\mapsto\overline{A} has the effect of shifting A’s eigenvalues to have mean zero. Consequently, since the trace is the sum of the eigenvalues, \overline{A} has trace zero.

Now, rewrite q in terms of the centered matrix \overline{A}:

    \[q = x^\top A x = x^\top \overline{A} x + \frac{\tr A}{n} \cdot x^\top I x = x^\top \overline{A} x + \tr A.\]

In the final equality, we use the fact that x is on the sphere of radius \sqrt{n} so that x^\top I x = \norm{x}^2 = n. Rearranging, we see that the error \overline{q}\coloneqq q-\tr A satisfies

    \[\overline{q} = q-\tr A = x^\top \overline{A}x.\]

We have shown that the error \overline{q} depends only on the centered matrix \overline{A} rather than the original matrix A. This observation is at the root of the reason that quadratic forms with vectors on the sphere have smaller tail probabilities than other distributions like the Gaussian distribution. Indeed, \overline{A} is smaller than A when measured using the Frobenius norm \norm{\overline A}_{\rm F} \le \norm{A}_{\rm F}, and the spectral norm is never much larger \norm{\overline{A}} \le 2\norm{A}. These observations will be important later.
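As a quick numerical sanity check (my own, not part of the argument), one can verify the trace-zero property and the two norm comparisons for a random symmetric matrix:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 50
    A = rng.standard_normal((n, n))
    A = (A + A.T) / 2
    Abar = A - (np.trace(A) / n) * np.eye(n)    # centered matrix

    print(np.isclose(np.trace(Abar), 0.0))                           # trace zero
    print(np.linalg.norm(Abar, 'fro') <= np.linalg.norm(A, 'fro'))   # Frobenius norm shrinks
    print(np.linalg.norm(Abar, 2) <= 2 * np.linalg.norm(A, 2))       # spectral norm at most doubles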

Laplace Transform Method and Comparison to Standard Gaussian Vectors

To derive exponential concentration inequalities, we make use of the Laplace transform method. For a random variable z, define the cumulant generating function (cgf)

    \[\xi_z(\theta) = \log (\expect [\exp(\theta z)]).\]

Bounds on the cgf immediately yield bounds on the probability of the random variable taking large values, so it will behoove us to estimate this quantity for \overline{q}.

To estimate the cgf of \overline{q}, we will make use of a comparison between Gaussian vectors and random vectors on the sphere. Recall that a standard Gaussian vector g is a vector with independent standard Gaussian entries. Standard Gaussian random vectors are spherically symmetric, meaning they can be written in the form g = a\cdot x, where x is a uniform random vector on the sphere and a is a scaling factor. (Concretely, a\sqrt{n} is a \chi random variable with n degrees of freedom.) The scaling factor a is independent of x and, in this case, satisfies \expect[a^2] = 1. (Indeed, the expected squared length of a standard Gaussian random vector is \expect[\norm{g}^2] = \sum_{i=1}^n \expect[g_i^2] = n. But also, \expect[\norm{g}^2] = \expect[a^2] \cdot \expect[\norm{x}^2] = \expect[a^2] \cdot n. Therefore, \expect[a^2]=1.)

Using this relationship, the error \overline{q} can be connected to the standard Gaussian random vector g as follows

    \[g^\top \overline{A} g = a^2 \cdot x^\top \overline{A} x = a^2 \cdot \overline{q}.\]

Intuitively, we might expect that multiplying the random error \overline{q} by a scaling factor of mean one should only have the effect of increasing the variability of \overline{q} and thus increasing its probability of taking large values. Based on this intuition, we might conjecture that the cgfs are related

    \[\xi_{\overline{q}}(\theta) \le \xi_{g^\top \overline{A} g}(\theta) \quad \text{for all } \theta \in \real.\]

Indeed, this conjecture is true.

Proposition (Scaling hurts). Let z be a random variable and let b be a random variable with expectation 1, independent of z. Then

    \[\xi_{z}(\theta) \le \xi_{bz}(\theta) \quad \text{for all } \theta \in \real.\]

To prove this result, let \expect_b and \expect_z denote expectations taken over the randomness in b and z separately. The total expectation is \expect[\cdot] = \expect_z[\expect_b[\cdot ]]. Begin with the right-hand side and invoke Jensen’s inequality over b:

    \[\xi_{bz}(\theta) = \log (\expect_z [\expect_b [\exp(\theta bz)]]) \ge \log (\expect_z [\exp(\theta \expect_b[b]z)]) = \xi_z(\theta).\]

In the last line, we use the hypothesis \expect[b] = 1 and the definition of the cgf.

Having established this proposition, we conclude that \xi_{\overline{q}}(\theta) \le \xi_{g^\top \overline{A} g}(\theta) for all \theta.

Completing the Analysis

Finally, invoking a standard bound for the cgf of g^\top \overline{A}g for a trace-zero matrix \overline{A}, we obtain

    \[\xi_{\overline{q}}(\theta) \le \xi_{g^\top \overline{A} g}(\theta) \le \frac{\theta^2 \norm{\overline{A}}_{\rm F}^2}{1 - 2\theta \norm{\overline{A}}}.\]

A cgf bound of this form immediately yields the following tail bound (see Boucheron, Lugosi, and Massart’s Concentration Inequalities, page 29 and Exercise 2.8):

    \[\prob \{ x^\top A x - \tr(A) \ge t \} \le \exp \left( - \frac{t^2/2}{2 \norm{\overline{A}}_{\rm F}^2 + 2 \norm{\overline{A}}t} \right).\]

One can also control the lower tail event x^\top A x - \tr(A) \le -t by instantiating this result with -A. Union bounding over the upper and lower tails gives the symmetric bound

    \[\prob \{ |x^\top A x - \tr(A)| \ge t \} \le 2\exp \left( - \frac{t^2/2}{2 \norm{\overline{A}}_{\rm F}^2 + 2 \norm{\overline{A}}t} \right).\]
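As an illustration (my own sketch, with an arbitrary random symmetric test matrix), the empirical upper-tail frequencies of x^\top A x - \tr A can be compared against the bound just derived; the bound should sit above the Monte Carlo frequencies.

    import numpy as np

    rng = np.random.default_rng(2)
    n, trials = 200, 20_000
    A = rng.standard_normal((n, n))
    A = (A + A.T) / 2                                         # arbitrary symmetric test matrix
    Abar = A - (np.trace(A) / n) * np.eye(n)                  # centered matrix
    fro, spec = np.linalg.norm(Abar, 'fro'), np.linalg.norm(Abar, 2)

    G = rng.standard_normal((trials, n))
    X = np.sqrt(n) * G / np.linalg.norm(G, axis=1, keepdims=True)   # rows uniform on the sphere
    errs = np.einsum('ij,jk,ik->i', X, A, X) - np.trace(A)          # x^T A x - tr A

    for t in (0.5 * fro, 1.0 * fro, 2.0 * fro):
        bound = np.exp(-(t**2 / 2) / (2 * fro**2 + 2 * spec * t))
        print(f"t = {t:8.2f}   empirical: {np.mean(errs >= t):.4f}   bound: {bound:.4f}")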

Is This Bound Good?

Tail bounds for quadratic forms such as x^\top A x or g^\top A g are known as Hanson–Wright inequalities. For vectors on the sphere, our bound shows the tail probabilities decay like \exp(-O(t^2/\norm{\overline{A}}_{\rm F}^2)) for small t > 0 and \exp(-O(t/\norm{\overline{A}})) for large t>0. This pattern of results, “subgaussian scaling for small t, subexponential scaling for large t”, is typical for Hanson–Wright inequalities.

The Hanson–Wright inequalities for vectors on the sphere exhibit a faster decay rate than those for standard Gaussian vectors. Indeed, a standard Gaussian random vector g obeys a Hanson–Wright inequality of the form

    \[\prob \{ g^\top A g - \tr A \ge t \} \le \exp \left( -\frac{t^2/2}{C\norm{A}_{\rm F}^2 + c t \norm{A}}\right)\]

for positive constants C,c > 0. For vectors on the sphere, \norm{A}_{\rm F}^2 and \norm{A} have been replaced by \norm{\overline{A}}_{\rm F}^2 and \norm{\overline{A}}, which are always smaller (and sometimes much smaller). The smaller tail probabilities for x^\top A x versus g^\top A g are a real phenomenon, not an artifact of the mathematical analysis.

As another way to evaluate the quality of these bounds, recall that a Gaussian random variable with mean zero and variance v has tail probabilities roughly of size \exp(-t^2/2v). For small t, our Hanson–Wright inequality for vectors on the sphere has the same form with v = 2\norm{\overline{A}}_{\rm F}^2. The true variance of q = x^\top A x is

    \[\Var(q) = \frac{n}{n+2} \cdot 2\norm{\overline{A}}_{\rm F}^2,\]

which is close to v = 2\norm{\overline{A}}_{\rm F}^2 for large n. (See this post of mine for a derivation of the variance formula.) Thus, even up to the constants, the Hanson–Wright inequality we’ve derived for random vectors on the sphere is nearly as good as one could possibly hope (at least for small t).

Girard–Hutchinson Estimator on the Sphere

Using our results for one quadratic form, tail bounds for the Girard–Hutchinson estimator \hat{\tr} = m^{-1} \sum_{i=1}^m x_i^\top A x_i easily follow. Indeed, the cgf is additive for independent random variables and satisfies the identity \xi_{cz}(\theta) = \xi_z(c\theta) for a constant c, so

    \[\xi_{\hat{\tr}-\tr A}(\theta) = \sum_{i=1}^m \xi_{x_i^\top Ax_i - \tr A}(\theta/m) \le m \cdot \frac{(\theta/m)^2 \norm{\overline{A}}_{\rm F}^2}{1 - 2(\theta/m)\norm{\overline{A}}}.\]

This cgf bound leads to tail bounds

    \begin{align*}\prob \{ \hat{\tr} - \tr(A) \ge t \} &\le \exp \left( - \frac{t^2/2}{\frac{2}{m} \norm{\overline{A}}_{\rm F}^2 + \frac{2}{m} \norm{\overline{A}}t} \right), \\\prob \{ |\hat{\tr} - \tr(A)| \ge t \} &\le 2\exp \left( - \frac{t^2/2}{\frac{2}{m} \norm{\overline{A}}_{\rm F}^2 + \frac{2}{m} \norm{\overline{A}}t} \right).\end{align*}
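One practical way to use the symmetric tail bound is to solve for a number of samples m that guarantees a target absolute error t with failure probability at most \delta. A minimal helper along these lines (my own; samples_needed is a hypothetical name, and the formula simply sets the right-hand side of the bound above equal to \delta) is sketched below.

    import numpy as np

    def samples_needed(fro_Abar, spec_Abar, t, delta):
        # Smallest m making 2*exp(-(m t^2/2) / (2||Abar||_F^2 + 2||Abar|| t)) <= delta
        return int(np.ceil((2 / t**2) * (2 * fro_Abar**2 + 2 * spec_Abar * t) * np.log(2 / delta)))

    # Example: ||Abar||_F = 10, ||Abar|| = 1, target error t = 5, failure probability 1%
    print(samples_needed(fro_Abar=10.0, spec_Abar=1.0, t=5.0, delta=0.01))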

Don’t Use Gaussians in Stochastic Trace Estimation

Suppose we are interested in estimating the trace \tr(A) = \sum_{i=1}^n A_{ii} of an n\times n matrix A that can only be accessed through matrix–vector products Ax_1,\ldots,Ax_m. The classical method for this purpose is the Girard–Hutchinson estimator

    \[\hat{\tr} = \frac{1}{m} \left( x_1^\top Ax_1 + \cdots + x_m^\top Ax_m \right),\]

where the vectors x_1,\ldots,x_m are independent, identically distributed (iid) random vectors satisfying the isotropy condition

    \[\expect[x_ix_i^\top] = I.\]

Examples of vectors satisfying this condition include standard Gaussian vectors, random sign vectors, and vectors drawn uniformly from the sphere of radius \sqrt{n}.

Stochastic trace estimation has a number of applications: log-determinant computations in machine learning, partition function calculations in statistical physics, generalized cross validation for smoothing splines, and triangle counting in large networks. Several improvements to the basic Girard–Hutchinson estimator have been developed recently. I am partial to XTrace, an improved trace estimator that I developed with my collaborators.

This post is addressed at the question:

Which distribution should be used for the test vectors x_i for stochastic trace estimation?

Since the Girard–Hutchinson estimator is unbiased \expect[\hat{\tr}] = \tr(A), the variance of \hat{\tr} is equal to the mean-square error. Thus, the lowest variance trace estimate is the most accurate. In my previous post on trace estimation, I discussed formulas for the variance \Var(\hat{\tr}) of the Girard–Hutchinson estimator with different choices of test vectors. In that post, I stated the formulas for different choices of test vectors (Gaussian, random signs, sphere) and showed how those formulas could be proven.

In this post, I will take the opportunity to editorialize on which distribution to pick. The thesis of this post is as follows:

The sphere distribution is essentially always preferable to the Gaussian distribution for trace estimation.

To explain why, let’s focus on the case when A is real and symmetric. (The same principles hold in the general case, but the variance formulas are more delicate to state; see my previous post for the formulas.) Let \lambda_1,\ldots,\lambda_n be the eigenvalues of A and define the eigenvalue mean

    \[\overline{\lambda} = \frac{\lambda_1 + \cdots + \lambda_n}{n}.\]

Then the variance of the Girard–Hutchinson estimator with Gaussian vectors x_i is

    \[\Var(\hat{\tr}_{\rm Gaussian}) = \frac{1}{m} \cdot 2 \sum_{i=1}^n \lambda_i^2.\]

For vectors x_i drawn from the sphere, we have

    \[\Var(\hat{\tr}_{\rm sphere}) = \frac{1}{m} \cdot \frac{n}{n+2} \cdot 2\sum_{i=1}^n (\lambda_i - \overline{\lambda})^2.\]

The sphere distribution improves on the Gaussian distribution in two ways. First, \Var(\hat{\tr}_{\rm sphere}) is smaller than \Var(\hat{\tr}_{\rm Gaussian}) by a factor of n/(n+2) < 1. This improvement is quite minor. Second, and more importantly, \Var(\hat{\tr}_{\rm Gaussian}) is proportional to the sum of A's squared eigenvalues, whereas \Var(\hat{\tr}_{\rm sphere}) is proportional to the sum of A's squared eigenvalues after they have been shifted to be mean-zero!

The difference between Gaussian and sphere test vectors can be large. To see this, consider a 1000\times 1000 matrix A with eigenvalues uniformly distributed between 0.9 and 1.1 and a (Haar orthogonal) random matrix of eigenvectors. For simplicity, since the variance of all Girard–Hutchinson estimates is proportional to 1/m, we take m=1. The table below shows the variance of the Girard–Hutchinson estimator for different distributions of test vector. We see that the sphere distribution leads to a trace estimate whose variance is 300× smaller than with the Gaussian distribution. For this example, the sphere and random sign distributions are similar.

Distribution | Variance (divided by \tr(A)^2)
Gaussian | 2.0\times 10^{-3}
Sphere | 6.7\times 10^{-6}
Random signs | 6.7\times 10^{-6}
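The numbers in the table can be reproduced with a short script. The sketch below (my own) evaluates the exact variance formulas quoted above (with m = 1) rather than running a Monte Carlo simulation, and constructs A as described in the text.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 1000
    lam = np.linspace(0.9, 1.1, n)                 # eigenvalues spread uniformly over [0.9, 1.1]

    # Haar-random orthogonal eigenvectors via QR of a Gaussian matrix
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    Q = Q * np.sign(np.diag(R))
    A = (Q * lam) @ Q.T
    tr2 = np.trace(A) ** 2

    # Exact m = 1 variances from the formulas in the text
    var_gauss  = 2 * np.sum(lam**2)
    var_sphere = (n / (n + 2)) * 2 * np.sum((lam - lam.mean())**2)
    var_signs  = 2 * (np.sum(A**2) - np.sum(np.diag(A)**2))

    print("Gaussian:    ", var_gauss  / tr2)
    print("Sphere:      ", var_sphere / tr2)
    print("Random signs:", var_signs  / tr2)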

Which Distribution Should You Use: Signs vs. Sphere

The main point of this post is to argue against using the Gaussian distribution. But which distribution should you use: Random signs? The sphere distribution? The answer, for most applications, is one of those two, but exactly which depends on the properties of the matrix A.

The variance of the Girard–Hutchinson estimator with random sign test vectors is

    \[\Var(\hat{\tr}_{\rm signs}) = \frac{1}{m} \cdot 2 \sum_{i\ne j} A_{ij}^2.\]

Thus, \Var(\hat{\tr}_{\rm signs}) depends on the size of the off-diagonal entries of A; \Var(\hat{\tr}_{\rm signs}) does not depend on the diagonal of A at all! For matrices with small off-diagonal entries (such as diagonally dominant matrices), the random signs distribution is often the best.

However, for other problems, the sphere distribution is preferable to random signs. The sphere distribution is rotation-invariant, so \Var(\hat{\tr}_{\rm sphere}) is independent of the eigenvectors of the (symmetric) matrix A, depending only on A's eigenvalues. By contrast, the variance of the Girard–Hutchinson estimator with the random signs distribution can depend significantly on the eigenvectors of the matrix A. For a given set of eigenvalues and the worst-case choice of eigenvectors, \Var(\hat{\tr}_{\rm sphere}) will always be smaller than \Var(\hat{\tr}_{\rm signs}). In fact, the sphere is the minimum-variance distribution for Girard–Hutchinson trace estimation for a matrix with fixed eigenvalues and worst-case eigenvectors; see this section of my previous post for details.
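The sketch below (my own, with arbitrary example matrices) makes this trade-off concrete: it evaluates the two m = 1 variance formulas on a nearly diagonal matrix, where random signs win decisively, and on a matrix with the same eigenvalues but Haar-random eigenvectors, where the two are comparable.

    import numpy as np

    def var_signs(A):
        # Girard–Hutchinson variance with random sign test vectors (m = 1)
        return 2 * (np.sum(A**2) - np.sum(np.diag(A)**2))

    def var_sphere(A):
        # Girard–Hutchinson variance with test vectors uniform on the sphere (m = 1)
        n = A.shape[0]
        lam = np.linalg.eigvalsh(A)
        return (n / (n + 2)) * 2 * np.sum((lam - lam.mean())**2)

    rng = np.random.default_rng(4)
    n = 300
    D = np.diag(rng.uniform(1, 2, n)) + 1e-3 * rng.standard_normal((n, n))
    D = (D + D.T) / 2                               # nearly diagonal: random signs should win
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    R = (Q * np.linalg.eigvalsh(D)) @ Q.T           # same eigenvalues, generic eigenvectors

    for name, M in [("near-diagonal", D), ("rotated", R)]:
        print(f"{name:13s}  signs: {var_signs(M):10.3e}   sphere: {var_sphere(M):10.3e}")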

In my experience, random signs and the sphere distribution are both perfectly adequate for trace estimation and either is a sensible default if you’re developing software. The Gaussian distribution on the other hand… don’t use it unless you have a good reason to.

How Good Can Stochastic Trace Estimates Be?

I am excited to share that our paper XTrace: Making the most of every sample in stochastic trace estimation has been published in the SIAM Journal on Matrix Analysis and Applications. (See also our paper on arXiv.)

Spurred by this exciting news, I wanted to take the opportunity to share one of my favorite results in randomized numerical linear algebra: a “speed limit” result of Meyer, Musco, Musco, and Woodruff that establishes a fundamental limitation on how accurate any trace estimation algorithm can be.

Let’s back up. Given an unknown square matrix A, we wish to compute the trace of A, defined to be the sum of its diagonal entries

    \[\tr(A) \coloneqq \sum_{i=1}^n A_{ii}.\]

The catch? We assume that we can only access the matrix A through matrix–vector products (affectionately known as “matvecs”): Given any vector x, we have access to Ax. Our goal is to form an estimate \hat{\tr} that is as accurate as possible while using as few matvecs as we can get away with.

To simplify things, let’s assume the matrix A is symmetric and positive (semi)definite. The classical algorithm for trace estimation is due to Girard and Hutchinson, producing a probabilistic estimate \hat{\tr} with a small average (relative) error:

    \[\expect\left[\frac{|\hat{\tr}-\tr(A)|}{\tr(A)}\right] \le \varepsilon \quad \text{using } m= \frac{\rm const}{\varepsilon^2} \text{ matvecs}.\]

If one wants high accuracy, this algorithm is expensive. To achieve just a 1% error (\varepsilon=0.01) requires roughly m=10,\!000 matvecs!

This state of affairs was greatly improved by Meyer, Musco, Musco, and Woodruff. Building upon previous work, they proposed the Hutch++ algorithm and proved it outputs an estimate \hat{\tr} satisfying the following bound:

(1)   \[\expect\left[\frac{|\hat{\tr}-\tr(A)|}{\tr(A)}\right] \le \varepsilon \quad \text{using } m= \frac{\rm const}{\varepsilon} \text{ matvecs}.\]

Now, we only require roughly m=100 matvecs to achieve 1% error! Our algorithm, XTrace, satisfies the same error guarantee (1) as Hutch++. On certain problems, XTrace can be quite a bit more accurate than Hutch++.

The MMMW Trace Estimation “Speed Limit”

Given the dramatic improvement of Hutch++ and XTrace over Girard–Hutchinson, it is natural to hope: Is there an algorithm that does even better than Hutch++ and XTrace? For instance, is there an algorithm satisfying an even slightly better error bound of the form

    \[\expect\left[\frac{|\hat{\tr}-\tr(A)|}{\tr(A)}\right] \le \varepsilon \quad \text{using } m= \frac{\rm const}{\varepsilon^{0.999}} \text{ matvecs}?\]

Unfortunately not. Hutch++ and XTrace are essentially as good as it gets.

Let’s add some fine print. Consider an algorithm for the trace estimation problem. Whenever the algorithm wants, it can present a vector x_i and receive back Ax_i. The algorithm is allowed to be adaptive: It can use the matvecs Ax_1,\ldots,Ax_s it has already collected to decide which vector x_{s+1} to present next. We measure the cost of the algorithm in terms of the number of matvecs alone, and the algorithm knows nothing about the psd matrix A other than what it learns from matvecs.

One final stipulation:

Simple entries assumption. We assume that the entries of the vectors x_i presented by the algorithm are real numbers between -1 and 1 with up to b digits after the decimal place.

To get a feel for this simple entries assumption, suppose we set b=2. Then (-0.92,0.17) would be an allowed input vector, but (0.232,-0.125) would not be (too many digits after the decimal place). Similarly, (18.3,2.4) would not be valid because its entries exceed 1. The simple entries assumption is reasonable as we typically represent numbers on digital computers by storing a fixed number of digits of accuracy. (In practice, we typically represent numbers on digital computers by floating point numbers, which essentially represent numbers using scientific notation like 1.3278123 \times 10^{27}. For this analysis of trace estimation, we use fixed point numbers like 0.23218, with no powers of ten allowed!)

With all these stipulations, we are ready to state the “speed limit” for trace estimation proved by Meyer, Musco, Musco, and Woodruff:

Informal theorem (Meyer, Musco, Musco, Woodruff). Under the assumptions above, there is no trace estimation algorithm producing an estimate \hat{\tr} satisfying

    \[\expect\left[\frac{|\hat{\tr}-\tr(A)|}{\tr(A)}\right] \le \varepsilon \quad \text{using } m= \frac{\rm const}{\varepsilon^{0.999}} \text{ matvecs}.\]

We will see a slightly sharper version of the theorem below, but this statement captures the essence of the result.

Communication Complexity

To prove the MMMW theorem, we have to take a journey to the beautiful subject of communication complexity. The story is this. Alice and Bob are interested in solving a computational problem together. Alice has her input x and Bob has his input y, and they are interested in computing a function f(x,y) of both their inputs.

Unfortunately for the two of them, Alice and Bob are separated by a great distance, and can only communicate by sending single bits (0 or 1) of information over a slow network connection. Every bit of communication is costly. The field of communication complexity is dedicated to determining how efficiently Alice and Bob are able to solve problems of this form.

The Gap-Hamming problem is one example of a problem studied in communication complexity. As inputs, Alice and Bob receive vectors x,y \in \{\pm 1\}^n with +1 and -1 entries from a third party Eve. Eve promises Alice and Bob that their vectors x and y satisfy one of two conditions:

(2)   \[\text{Case 0: } x^\top y \ge\sqrt{n} \quad \text{or} \quad \text{Case 1: } x^\top y \le -\sqrt{n}. \]

Alice and Bob must work together, sending as few bits of communication as possible, to determine which case they are in.

There’s one simple solution to this problem: First, Bob sends his whole input vector y to Alice. Each entry of y takes one of the two values \pm 1 and can therefore be communicated in a single bit. Having received y, Alice computes x^\top y, determines whether they are in case 0 or case 1, and sends Bob a single bit to communicate the answer. This procedure requires n+1 bits of communication.

Can Alice and Bob still solve this problem with many fewer than n bits of communication, say \sqrt{n} bits? Unfortunately not. The following theorem of Chakrabarti and Regev shows that roughly n bits of communication are needed to solve this problem:

Theorem (Chakrabarti–Regev). Any algorithm that solves the Gap-Hamming problem, succeeding with probability at least 2/3 for every pair of inputs x and y (satisfying one of the conditions (2)), must use \Omega(n) bits of communication.

Here, \Omega(n) is big-Omega notation, closely related to big-O notation \order(n) and big-Theta notation \Theta(n). For the less familiar, it can be helpful to interpret \Omega(n), \order(n), and \Theta(n) as all standing for “proportional to n”. In plain language, the theorem of Chakrabarti and Regev states that there is no algorithm for the Gap-Hamming problem that is much more effective than the basic algorithm in which Bob sends his whole input to Alice (in the sense of requiring fewer than \order(n) bits of communication).

Reducing Gap-Hamming to Trace Estimation

This whole state of affairs is very sad for Alice and Bob, but what does it have to do with trace estimation? Remarkably, we can use hardness of the Gap-Hamming problem to show there’s no algorithm that fundamentally improves on Hutch++ and XTrace. The argument goes something like this:

  1. If there were a trace estimation algorithm fundamentally better than Hutch++ and XTrace, we could use it to solve Gap-Hamming in fewer than \order(n) bits of communication.
  2. But no algorithm can solve Gap-Hamming in fewer than \order(n) bits of communication.
  3. Therefore, no trace estimation algorithm is fundamentally better than Hutch++ and XTrace.

Step 2 is the work of Chakrabarti and Regev, and step 3 follows logically from 1 and 2. Therefore, we are left to complete step 1 of the argument.

Protocol

Assume we have access to a really good trace estimation algorithm. We will use it to solve the Gap-Hamming problem. For simplicity, assume n is a perfect square. The basic idea is this:

  • Have Alice and Bob reshape their inputs x,y \in \{\pm 1\}^n into matrices X,Y\in\{\pm 1\}^{\sqrt{n}\times \sqrt{n}}, and consider (but do not form!) the positive semidefinite matrix

        \[A = (X+Y)^\top (X+Y).\]

  • Observe that

        \[\tr(A) = \tr(X^\top X) + 2\tr(X^\top Y) + \tr(Y^\top Y) = 2n + 2(x^\top y).\]

    Thus, the two cases in (2) can be equivalently written in terms of \tr(A):

    (2′)   \[\text{Case 0: } \tr(A)\ge 2n + 2\sqrt{n} \quad \text{or} \quad \text{Case 1: } \tr(A) \le 2n-2\sqrt{n}. \]

  • By working together, Alice and Bob can implement a trace estimation algorithm. Alice will be in charge of running the algorithm, but Alice and Bob must work together to compute matvecs. (Details below!)
  • Using the output of the trace estimation algorithm, Alice determines whether they are in case 0 or 1 (i.e., whether \tr(A) \gg 2n or \tr(A) \ll 2n) and sends the result to Bob.

To complete this procedure, we just need to show how Alice and Bob can implement the matvec procedure using minimal communication. Suppose Alice and Bob want to compute Az for some vector z with entries between -1 and 1 with up to b decimal digits. First, convert z to a vector w\coloneqq 10^b z whose entries are integers between -10^b and 10^b. Since Az = 10^{-b}Aw, interconverting between Az and Aw is trivial. Alice and Bob’s procedure for computing Aw is as follows:

  • Alice sends Bob w.
  • Having received w, Bob forms Yw and sends it to Alice.
  • Having received Yw, Alice computes v\coloneqq Xw+Yw and sends it to Bob.
  • Having received v, Bob computes Y^\top v and sends it to Alice.
  • Alice forms Aw = X^\top v + Y^\top v.

Because X and Y are \sqrt{n}\times \sqrt{n} and have \pm 1 entries, all vectors computed in this procedure are vectors of length \sqrt{n} with integer entries between -4n\cdot 10^b and 4n\cdot 10^b. We conclude the communication cost for one matvec is T\coloneqq\Theta((b+\log n)\sqrt{n}) bits.
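Here is a small end-to-end simulation of the reduction (my own sketch; it checks the trace identity and that the distributed matvec reproduces Aw exactly, without attempting to count bits).

    import numpy as np

    rng = np.random.default_rng(5)
    rootn = 20
    n = rootn ** 2

    x = rng.choice([-1, 1], size=n)                    # Alice's input
    y = rng.choice([-1, 1], size=n)                    # Bob's input
    X, Y = x.reshape(rootn, rootn), y.reshape(rootn, rootn)
    A = (X + Y).T @ (X + Y)                            # implicit psd matrix (never formed by Alice or Bob)

    # Trace identity: tr(A) = 2n + 2 x^T y
    assert np.trace(A) == 2 * n + 2 * (x @ y)

    def distributed_matvec(w):
        # Aw computed via the Alice/Bob message exchange described above
        Yw = Y @ w                  # Bob: forms Yw, sends to Alice
        v = X @ w + Yw              # Alice: forms v = (X + Y) w, sends to Bob
        Ytv = Y.T @ v               # Bob: forms Y^T v, sends to Alice
        return X.T @ v + Ytv        # Alice: assembles Aw = X^T v + Y^T v

    w = rng.integers(-100, 101, size=rootn)            # entries with b = 2 digits, scaled by 10^b
    assert np.array_equal(distributed_matvec(w), A @ w)
    print("distributed matvec matches direct computation")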

Analysis

Consider an algorithm we’ll call BestTraceAlgorithm. Given any accuracy parameter \varepsilon > 0, BestTraceAlgorithm requires at most m = m(\varepsilon) matvecs and, for any positive semidefinite input matrix A of any size, produces an estimate \hat{\tr} satisfying

(3)   \[\expect\left[\frac{|\hat{\tr}-\tr(A)|}{\tr(A)}\right] \le \varepsilon.\]

We assume that BestTraceAlgorithm is the best possible algorithm in the sense that no algorithm can achieve (3) on all (positive semidefinite) inputs with m' < m matvecs.

To solve the Gap-Hamming problem, Alice and Bob just need enough accuracy in their trace estimation to distinguish between cases 0 and 1. In particular, if

    \[\left| \frac{\hat{\tr} - \tr(A)}{\tr(A)} \right| \le \frac{1}{\sqrt{n}},\]

then Alice and Bob can distinguish between cases 0 and 1 in (2′).

Suppose that Alice and Bob apply trace estimation to solve the Gap-Hamming problem, using m matvecs in total. The total communication is m\cdot T = \order(m(b+\log n)\sqrt{n}) bits. Chakrabarti and Regev showed that at least cn bits of communication (for some c>0) are needed to solve the Gap-Hamming problem with 2/3 probability. Thus, if m\cdot T < cn, then Alice and Bob fail to solve the Gap-Hamming problem with probability at least 1/3. Therefore,

    \[\text{If } m < \frac{cn}{T} = \Theta\left( \frac{\sqrt{n}}{b+\log n} \right), \quad \text{then } \left| \frac{\hat{\tr} - \tr(A)}{\tr(A)} \right| > \frac{1}{\sqrt{n}} \text{ with probability at least } \frac{1}{3}.\]

The contrapositive of this statement is:

    \[\text{If }\left| \frac{\hat{\tr} - \tr(A)}{\tr(A)} \right| \le \frac{1}{\sqrt{n}}\text{ with probability at least } \frac{2}{3}, \quad \text{then } m \ge \Theta\left( \frac{\sqrt{n}}{b+\log n} \right).\]


Say Alice and Bob run BestTraceAlgorithm with parameter \varepsilon = \tfrac{1}{3\sqrt{n}}. Then, by (3) and Markov’s inequality,

    \[\left| \frac{\hat{\tr} - \tr(A)}{\tr(A)} \right| \le \frac{1}{\sqrt{n}} \quad \text{with probability at least }\frac{2}{3}.\]

Therefore, BestTraceAlgorithm requires at least

    \[m \ge \Theta\left( \frac{\sqrt{n}}{b+\log n} \right) \text{ matvecs}.\]

Using the fact that we’ve set \varepsilon = \tfrac{1}{3\sqrt{n}}, we conclude that any trace estimation algorithm, even BestTraceAlgorithm, requires

    \[m \ge \Theta \left( \frac{1}{\varepsilon (b+\log(1/\varepsilon))} \right) \text{ matvecs}.\]

In particular, no trace estimation algorithm can achieve mean relative error \varepsilon using even \order(1/\varepsilon^{0.999}) matvecs. This proves the MMMW theorem.

Stochastic Trace Estimation

I am delighted to share that my paper with Joel A. Tropp and Robert J. Webber, XTrace: Making the Most of Every Sample in Stochastic Trace Estimation, has recently been released as a preprint on arXiv. In it, we consider the implicit trace estimation problem:

Implicit trace estimation problem: Given access to a square matrix A via the matrix–vector product operation \omega \mapsto A\omega, estimate its trace \tr A = \sum_{i=1}^n A_{ii}.

Algorithms for this task have many uses such as log-determinant computations in machine learning, partition function calculations in statistical physics, and generalized cross validation for smoothing splines. I described another application to counting triangles in a large network in a previous blog post.

Our paper presents new trace estimators XTrace and XNysTrace which are highly efficient, producing accurate trace approximations using a small budget of matrix–vector products. In addition, these algorithms are fast to run and are supported by theoretical results which explain their excellent performance. I really hope that you will check out the paper to learn more about these estimators!

For the rest of this post, I’m going to talk about the most basic stochastic trace estimation algorithm, the Girard–Hutchinson estimator. This seemingly simple algorithm exhibits a number of nuances and forms the backbone for more sophisticated trace estimates such as Hutch++, Nyström++, XTrace, and XNysTrace. Toward the end, this blog post will be fairly mathematical, but I hope that the beginning will be accessible to all.

Girard–Hutchinson Estimator: The Basics

The Girard–Hutchinson estimator for the trace of a square matrix A is

(1)   \[\hat{\tr} = \frac{1}{m} \sum_{i=1}^m \omega_i^* A \omega_i. \]

Here, \omega_1,\ldots,\omega_m are random vectors, usually chosen to be statistically independent, and {}^* denotes the conjugate transpose of a vector or matrix. The Girard–Hutchinson estimator only depends on the matrix A through the matrix–vector products A\omega_1,\ldots,A\omega_m.

Unbiasedness

Provided the random vectors are isotropic

(2)   \[\mathbb{E} [\omega_i\omega_i^*] = I, \]

the Girard–Hutchinson estimator is unbiased:

(3)   \[\mathbb{E} [\hat{\tr}] = \tr A.\]

Let us confirm this claim in some detail. First, we use linearity of expectation to evaluate

(4)   \[\mathbb{E} [\hat{\tr}] = \mathbb{E} \left[ \frac{1}{m} \sum_{i=1}^m \omega_i^*A\omega_i \right] = \frac{1}{m} \sum_{i=1}^m \mathbb{E} \left[ \omega_i^* A \omega_i\right]. \]

Therefore, to prove that \mathbb{E} [\hat{\tr}] = \tr A, it is sufficient to prove that \mathbb{E} \left[\omega_i^*A\omega_i\right] = \tr A for each i.

When working with traces, there are two tricks that solve 90% of derivations. The first trick is that, if we view a number as a 1\times 1 matrix, then a number equals its trace, x = \tr x. The second trick is the cyclic property: For a k\times p matrix B and a p\times k matrix C, we have \tr (BC) = \tr (CB). The cyclic property should be handled with care when one works with a product of three or more matrices. For instance, we have

    \[\tr[BCD] = \tr[(BC)D] = \tr[D(BC)] = \tr[DBC].\]

However,

    \[\tr [BCD] \ne \tr[CBD] \quad \text{in general}.\]

One should think of the matrix product BCD as beads on a closed loop of string. One can move the last bead D to the front of the other two, \tr [BCD] = \tr[DBC], but not interchange two beads, \tr[BCD] \ne \tr[CBD].
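A two-line numerical check of this “beads on a string” rule, using arbitrary random matrices (my own illustration):

    import numpy as np

    rng = np.random.default_rng(6)
    B, C, D = (rng.standard_normal((4, 4)) for _ in range(3))
    print(np.isclose(np.trace(B @ C @ D), np.trace(D @ B @ C)))   # True: moving the last bead to the front
    print(np.isclose(np.trace(B @ C @ D), np.trace(C @ B @ D)))   # False in general: swapping two beads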

With this trick in hand, let’s return to proving that \mathbb{E} \left[\omega_i^*A\omega_i\right] = \tr A for every i. Apply our two tricks:

    \[\mathbb{E} \left[\omega_i^*A\omega_i\right] = \mathbb{E} \tr \left[\omega_i^*A\omega_i\right] = \mathbb{E} \tr \left[A\omega_i\omega_i^*\right].\]

The expectation is a linear operation and the matrix A is non-random, so we can bring the expectation into the trace as

    \[\mathbb{E} \left[\omega_i^*A\omega_i\right] = \mathbb{E} \tr \left[A\omega_i\omega_i^*\right] = \tr(A \mathbb{E}[\omega_i\omega_i^*] ).\]

Invoke the isotropy condition (2) and conclude:

    \[\mathbb{E} \left[\omega_i^*A\omega_i\right] = \tr(A \mathbb{E}[\omega_i\omega_i^*] ) = \tr(A\cdot I) = \tr A.\]

Plugging this into (4) confirms the unbiasedness claim (3).

Variance

Continue to assume that the \omega_i's are isotropic (2) and now assume that \omega_1,\ldots,\omega_m are independent. By independence, the variance can be written as

    \[\Var(\hat{\tr}) = \frac{1}{m^2} \sum_{i=1}^m \Var(\omega_i^*A\omega_i).\]

Assuming that \omega_1,\ldots,\omega_m are identically distributed \omega_1,\ldots,\omega_m \sim \omega, we then get

    \[\Var(\hat{\tr}) = \frac{1}{m} \Var(\omega^*A\omega).\]

The variance decreases like 1/m, which is characteristic of Monte Carlo-type algorithms. Since \hat{\tr} is unbiased (i.e., (3)), this means that the mean-square error decays like 1/m, so the average error (more precisely, the root-mean-square error) decays like

    \[\left| \hat{\tr} - \tr A \right| \lessapprox \frac{\mathrm{const}}{\sqrt{m}}.\]

This type of convergence is very slow. If I want to decrease the error by a factor of 10, I must do 100\times the work!

Variance-reduced trace estimators like Hutch++ and our new trace estimator XTrace improve the rate of convergence substantially. Even in the worst case, Hutch++ and XTrace reduce the variance at a rate of 1/m^2 and the (root-mean-square) error at a rate of 1/m:

    \[\Var(\hat{\tr}_{\text{H++ or X}}) \le \frac{\mathrm{const}}{m^2},\quad \left| \hat{\tr}_{\text{H++ or X}} - \tr A \right| \lessapprox \frac{\mathrm{const}}{m}.\]

For matrices with rapidly decreasing singular values, the variance and error can decrease much faster than this.

Variance Formulas

As the rate of convergence for the Girard–Hutchinson estimator is so slow, it is imperative to pick a distribution on test vectors \omega that makes the variance of the single–sample estimate \omega^*A\omega as low as possible. In this section, we will provide several explicit formulas for the variance of the Girard–Hutchinson estimator. Derivations of these formulas will appear at the end of this post. These variance formulas help illuminate the benefits and drawbacks of different test vector distributions.

To express the formulas, we will need some notation. For a complex number z = a + bi we use \Re(z) = a and \Im(z) = b to denote the real and imaginary parts. The variance of a random complex number z is

    \[\Var(z) := \mathbb{E} |z - \mathbb{E} z|^2 = \Var(\Re z) + \Var(\Im z).\]

The Frobenius norm of a matrix A is

    \[\left\|A\right\|_{\rm F}^2 = \sum_{i,j} |A_{ij}|^2.\]

If A is real symmetric or complex Hermitian with (real) eigenvalues \lambda_1,\ldots,\lambda_n, we have

(5)   \[\left\|A\right\|_{\rm F}^2 = \sum_{i=1}^n \lambda_i^2. \]

A^\top denotes the ordinary transpose of A and A^* denotes the conjugate transpose of A.

Real-Valued Test Vectors

We first focus on real-valued test vectors \omega. Since \omega is real, we can use the ordinary transpose {}^\top rather than the conjugate transpose {}^*. Since \omega^\top A\omega is a number, it is equal to its own transpose:

    \[\omega^\top A \omega = (\omega^\top A \omega)^\top = \omega^\top A^\top \omega.\]

Therefore,

    \[\omega^\top A\omega = \frac{\omega^\top A \omega + \omega^\top A^\top \omega}{2} = \omega^\top \left( \frac{A + A^\top}{2} \right)\omega.\]

The Girard–Hutchinson trace estimator applied to A is the same as the Girard–Hutchinson estimator applied to the symmetric part of A, (A+A^\top)/2.

For the following results, assume A is symmetric, A = A^\top.

  1. Real Gaussian: \omega_1,\ldots,\omega_m are independent standard normal random vectors.

        \[\Var(\omega^\top A\omega) = 2 \left\|A\right\|_{\rm F}^2.\]

  2. Uniform signs (Rademachers): \omega_1,\ldots,\omega_m are independent random vectors with uniform \pm 1 coordinates.

        \[\Var(\omega^\top A \omega) = 2\sum_{i\ne j} |A_{ij}|^2.\]

  3. Real sphere: Assume \omega_1,\ldots,\omega_m are uniformly distributed on the real sphere of radius \sqrt{n}: \omega \sim \text{Uniform} \{x\in \mathbb{R}^n : x^\top x = n\}.

        \[\Var(\omega^\top A\omega) = \frac{2n}{n+2} \left( \left\|A\right\|_{\rm F}^2 - \frac{1}{n} |\tr A|^2 \right).\]

These formulas continue to hold for nonsymmetric A by replacing A by its symmetric part (A+A^\top)/2 on the right-hand sides of these variance formulas.
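All three formulas are easy to check by Monte Carlo. The sketch below (my own; the helper names are arbitrary) compares sample variances of \omega^\top A\omega against the stated expressions for a random real symmetric A.

    import numpy as np

    rng = np.random.default_rng(7)
    n, trials = 30, 50_000
    A = rng.standard_normal((n, n))
    A = (A + A.T) / 2                                     # random real symmetric matrix

    def gauss():
        return rng.standard_normal(n)

    def signs():
        return rng.choice([-1.0, 1.0], size=n)

    def sphere():
        g = rng.standard_normal(n)
        return np.sqrt(n) * g / np.linalg.norm(g)

    def mc_var(draw):
        # Monte Carlo estimate of Var(w^T A w)
        vals = np.empty(trials)
        for k in range(trials):
            w = draw()
            vals[k] = w @ A @ w
        return vals.var()

    fro2, diag2, tr = np.sum(A**2), np.sum(np.diag(A)**2), np.trace(A)
    print("Gaussian:", mc_var(gauss),  " formula:", 2 * fro2)
    print("Signs:   ", mc_var(signs),  " formula:", 2 * (fro2 - diag2))
    print("Sphere:  ", mc_var(sphere), " formula:", (2 * n / (n + 2)) * (fro2 - tr**2 / n))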

Complex-Valued Test Vectors

We now move our focus to complex-valued test vectors \omega. As a rule of thumb, one should typically expect that the variance for complex-valued test vectors applied to a real symmetric matrix A is about half the natural real counterpart; for instance, with complex Gaussians you get about half the variance that you would with real Gaussians.

A square complex matrix has a Cartesian decomposition

    \[A = A^{\rm H} + i A^{\rm SH}\]

where

    \[A^{\rm H} = \frac{A+A^*}{2} ,\quad A^{\rm SH} = \frac{A - A^*}{2i}\]

denote the Hermitian and skew-Hermitian parts of A. Similar to how the imaginary part of a complex number is real, the skew-Hermitian part of a complex matrix is Hermitian (and i A^{\rm SH} is skew-Hermitian). Since A^{\rm H} and A^{\rm SH} are both Hermitian, we have

    \[\Re(\omega^* A\omega) = \omega^* A^{\rm H} \omega, \quad \Im (\omega^* A \omega) = \omega^* A^{\rm SH} \omega.\]

Consequently, the variance of \omega^*A \omega can be broken into Hermitian and skew-Hermitian parts:

    \[\Var(\omega^* A\omega) = \Var(\omega^* A^{\rm H}\omega) + \Var(\omega^* A^{\rm SH}\omega).\]

For this reason, we will state the variance formulas only for Hermitian A, with the formula for general A following from the Cartesian decomposition.

For the following results, assume A is Hermitian, A = A^*.

  1. Complex Gaussian: \omega_1,\ldots,\omega_m are independent standard complex random vectors, i.e., each \omega_i has iid entries distributed as (g_1+ig_2)/\sqrt{2} for g_1,g_2 standard normal random variables.

        \[\Var(\omega^* A\omega) = \left\|A\right\|_{\rm F}^2.\]

  2. Uniform phases (Steinhaus vectors): \omega_1,\ldots,\omega_m are independent random vectors whose entries are uniform on the complex unit circle \{ z \in \complex : |z| = 1 \}.

        \[\Var(\omega^* A \omega) = \sum_{i\ne j} |A_{ij}|^2.\]

  3. Complex sphere: Assume \omega_1,\ldots,\omega_m are uniformly distributed on the complex sphere of radius \sqrt{n}: \omega \sim \text{Uniform} \{x\in \complex^n : x^* x = n\}.

        \[\Var(\omega^* A\omega) = \frac{n}{n+1} \left( \left\|A\right\|_{\rm F}^2 - \frac{1}{n} |\tr A|^2 \right).\]
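A matching Monte Carlo check for the complex distributions (again my own sketch, with a random Hermitian A):

    import numpy as np

    rng = np.random.default_rng(8)
    n, trials = 30, 50_000
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A = (B + B.conj().T) / 2                              # random Hermitian matrix

    def cgauss():
        return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

    def phases():
        return np.exp(2j * np.pi * rng.random(n))

    def csphere():
        g = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        return np.sqrt(n) * g / np.linalg.norm(g)

    def mc_var(draw):
        # Monte Carlo estimate of Var(w^* A w); the quadratic form is real since A is Hermitian
        vals = np.empty(trials)
        for k in range(trials):
            w = draw()
            vals[k] = (w.conj() @ A @ w).real
        return vals.var()

    fro2, diag2, tr2 = np.sum(np.abs(A)**2), np.sum(np.abs(np.diag(A))**2), np.trace(A).real**2
    print("Complex Gaussian:", mc_var(cgauss),  " formula:", fro2)
    print("Uniform phases:  ", mc_var(phases),  " formula:", fro2 - diag2)
    print("Complex sphere:  ", mc_var(csphere), " formula:", (n / (n + 1)) * (fro2 - tr2 / n))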

Optimality Properties

Let us finally address the question of what the best choice of test vectors is for the Girard–Hutchinson estimator. We will state two results with different restrictions on \omega_1,\ldots,\omega_m.

Our first result, due to Hutchinson, is valid for real symmetric matrices with real test vectors.

Optimality (independent test vectors with independent coordinates). If the test vectors \omega_1,\ldots,\omega_m \in \mathbb{R}^n are isotropic (2), independent from each other, and have independent entries, then for any fixed real symmetric matrix A, the minimum variance for \hat{\tr} is obtained when \omega_1,\ldots,\omega_m are populated with random signs (\omega_i)_j \sim \textnormal{Uniform} \{\pm 1\}.

The next optimality results will have real and complex versions. To present the results for \mathbb{R}-valued and \complex-valued test vectors on a unified footing, let \field denote either \mathbb{R} or \complex. We let a \field-Hermitian matrix be either a real symmetric matrix (if \field = \mathbb{R}) or a complex Hermitian matrix (if \field = \complex). Let a \field-unitary matrix be either a real orthogonal matrix (if \field = \mathbb{R}) or a complex unitary matrix (if \field = \complex).

The condition that the vectors \omega_1,\ldots,\omega_m have independent entries is often too restrictive in practice. It rules out, for instance, the case of uniform vectors on the sphere. If we relax this condition, we get a different optimal distribution:

Optimality (independent test vectors). Consider any set \mathscr{A} of \field-Hermitian matrices which is invariant under \field-unitary similarity transformations:

    \[\text{If $A \in \mathscr{A}$ and $U$ is $\field$-unitary, then $U^*AU \in \mathscr{A}$.}\]

Assume that the test vectors \omega_1,\ldots,\omega_m are independent and isotropic (2). The worst-case variance \sup_{A \in \mathscr{A}} \Var(\hat{\tr}) is minimized by choosing \omega_1,\ldots,\omega_m uniformly on the \field-sphere: \omega_1,\ldots,\omega_m \sim \text{Uniform} \{ x \in \field^n : x^*x =n \}.

More simply, if you want your stochastic trace estimator to be effective for a class of inputs \mathscr{A} (closed under \field-unitary similarity transformations) rather than a single input matrix A, then the best distribution is the uniform distribution on the sphere. Examples of classes of matrices \mathscr{A} include:

  • Fixed eigenvalues. For fixed real eigenvalues \lambda_1,\ldots,\lambda_n \in \mathbb{R}, the set of all \field-Hermitian matrices with these eigenvalues.
  • Density matrices. The class of all trace-one psd matrices.
  • Frobenius norm ball. The class of all \field-Hermitian matrices of Frobenius norm at most 1.

Derivation of Formulas

In this section, we provide derivations of the variance formulas. I have chosen to focus on derivations which are shorter but use more advanced techniques rather than derivations which are longer but use fewer tricks.

Real Gaussians

First assume A is real. Since A is real symmetric, A has an eigenvalue decomposition A = Q\Lambda Q^\top, where Q is orthogonal and \Lambda is a diagonal matrix reporting A‘s eigenvalues. Since the real Gaussian distribution is invariant under orthogonal transformations, \omega^\top A\omega = (Q^\top \omega)^\top \Lambda (Q^\top\omega) has the same distribution as \omega^\top \Lambda \omega. Therefore,

    \[\Var(\omega^\top A \omega) = \Var(\omega^\top \Lambda \omega) = \Var \left( \sum_{i=1}^n \lambda_i \omega_i^2 \right) = \sum_{i=1}^n \lambda_i^2 \Var(\omega_i^2) = 2\sum_{i=1}^n \lambda_i^2 = 2\left\|A\right\|_{\rm F}^2.\]

Here, we used that the variance of a squared standard normal random variable is two.

For a non-real matrix A, we can break the matrix A into its entrywise real and imaginary parts A = \mathfrak{R}(A) + i \, \mathfrak{I}(A). Thus,

    \[\Var(\omega^\top A \omega) = \Var(\omega^\top \mathfrak{R}(A) \omega) + \Var(\omega^\top \mathfrak{I}(A) \omega) = 2\left\|\mathfrak{R}(A)\right\|_{\rm F}^2 + 2\left\|\mathfrak{I}(A)\right\|_{\rm F}^2 = 2\left\|A\right\|_{\rm F}^2.\]

Uniform Signs

First, compute

    \[\omega^\top A \omega - \mathbb{E}[\omega^\top A \omega] = \sum_{i,j=1}^n A_{ij} \omega_i\omega_j - \sum_{i=1}^n A_{ii} = \sum_{i\ne j} A_{ij} \omega_i\omega_j + \sum_{i=1}^n A_{ii}(\omega_i^2-1).\]

For a vector \omega of uniform random signs, we have \omega_i^2 = 1 for every i, so the second sum vanishes. Note that we have assumed A symmetric, so the sum over i\ne j can be replaced by two times the sum over i < j:

    \[\omega^\top A \omega - \mathbb{E}[\omega^\top A \omega] = 2\sum_{i< j} A_{ij} \omega_i\omega_j.\]

Note that \{ \omega_i \omega_j : i < j\} are pairwise independent. As a simple exercise, one can verify that the identity

    \[\Var(a_1 X_1+\cdots+a_kX_k) = |a_1|^2 \Var(X_1) + \cdots + |a_k|^2 \Var(X_k)\]

holds for any pairwise independent family of random variables X_1,\ldots,X_k and numbers a_1,\ldots,a_k. Ergo,

    \begin{align*}\Var(\omega^\top A\omega) &= \Var(\omega^\top A \omega - \mathbb{E}[\omega^\top A \omega]) \\&= \Var\left(\sum_{i< j} 2A_{ij} \omega_i\omega_j\right) \\&= \sum_{i<j} 4 |A_{ij}|^2 \Var(\omega_i\omega_j) \\&= \sum_{i<j} 4 |A_{ij}|^2 \\&= 2 \sum_{i\ne j} |A_{ij}|^2.\end{align*}

In the second-to-last line, we use the fact that \omega_i\omega_j is a uniform random sign, which has variance 1. The final line is a consequence of the symmetry of A.

Uniform on the Real Sphere

The simplest proof I know is by the “camel principle”. Here’s the story (a lightly edited quotation from MathOverflow):

A father left 17 camels to his three sons and, according to the will, the eldest son was to be given a half of the camels, the middle son one-third, and the youngest son one-ninth. The sons did not know what to do since 17 is not evenly divisible into either two, three, or nine parts, but a wise man helped the sons: he added his own camel, the oldest son took 18/2=9 camels, the second son took 18/3=6 camels, the third son 18/9=2 camels, and the wise man took his own camel and went away.

We are interested in a vector \omega which is uniform on the sphere of radius \sqrt{n}. Performing averages on the sphere is hard, so we add a camel to the problem by “upgrading” \omega to a spherically symmetric vector g which has a random length. We want to pick a distribution for which the computation \Var(g^\top A g) is easy. Fortunately, we already know such a distribution, the Gaussian distribution, for which we already calculated \Var(g^\top A g) = 2\left\|A\right\|_{\rm F}^2.

The Gaussian vector g and the uniform vector \omega on the sphere are related by

    \[g = \sqrt{\frac{a}{n}} \omega,\]

where a is the squared length of the Gaussian vector g. In particular, a has the distribution of the sum of n squared Gaussian random variables, which is known as a \chi^2 random variable with n degrees of freedom.

Now, we take the camel back. Compute the variance of g^\top A g using the chain rule for variance:

    \[\Var(g^\top A g) = \mathbb{E}[\Var(g^\top A g \mid a)] + \Var(\mathbb{E}[g^\top A g \mid a]).\]

Here, \Var(\cdot \mid a) and \mathbb{E}[ \cdot \mid a] denote the conditional variance and conditional expectation with respect to the random variable a. The quick and dirty ways of working with these are to treat the random variable a “like a constant” with respect to the conditional variance and expectation.

Plugging in the formula g = \sqrt{a/n} \cdot \omega and treating a “like a constant”, we obtain

    \begin{align*}\Var(g^\top A g) &= \mathbb{E}[\Var(a/n \cdot \omega^\top A \omega \mid a)] + \Var(\mathbb{E}[a/n \cdot \omega^\top A \omega \mid a]) \\&=\mathbb{E}[(a/n)^2\Var(\omega^\top A \omega)] + \Var(a/n \cdot \mathbb{E}[\omega^\top A \omega]) \\&= \frac{1}{n^2} \mathbb{E}[a^2] \cdot \Var(\omega^\top A \omega) + \frac{1}{n^2} \Var(a) |\mathbb{E} [\omega^\top A \omega]|^2.\end{align*}

As we mentioned, a is a \chi^2 random variable with n degrees of freedom and \mathbb{E}[a^2] and \Var(a) are known quantities that can be looked up:

    \[\mathbb{E}[a^2] = n(n+2), \quad \Var(a) = 2n.\]

We know \Var(g^\top A g) = 2\left\|A\right\|_{\rm F}^2 and \mathbb{E} [\omega^\top A \omega] = \tr A. Plugging these all in, we get

    \[2\left\|A\right\|_{\rm F}^2 = \frac{n+2}{n} \Var(\omega^\top A\omega) + \frac{2}{n} |\tr A|^2.\]

Rearranging, we obtain

    \[\Var(\omega^\top A\omega) = \frac{2n}{n+2} \left( \left\|A\right\|_{\rm F}^2 - \frac{1}{n}|\tr A|^2\right).\]

Complex Gaussians

The trick is the same as for real Gaussians. By invariance of complex Gaussian random vectors under unitary transformations, we can reduce to the case where A is a diagonal matrix populated with eigenvalues \lambda_1,\ldots,\lambda_n. Then

    \[\Var(\omega^*A \omega) = \Var \left( \sum_{i=1}^n \lambda_i |\omega_i|^2 \right) = \sum_{i=1}^n \Var(|\omega_i|^2) \lambda_i^2 = \sum_{i=1}^n \lambda_i^2 = \left\|A\right\|_{\rm F}^2.\]

Here, we use the fact that 2|\omega_i|^2 is a \chi^2 random variable with two degrees of freedom, which has variance four; hence \Var(|\omega_i|^2) = 1.

Random Phases

The trick is the same as for uniform signs. A short calculation (remembering that A is Hermitian and thus \overline{A_{ij}} = A_{ji}) reveals that

    \[\Var\left( \omega^* A \omega \right) = \Var \left( \sum_{i<j} 2 \Re(A_{ij} \overline{\omega_i} \omega_j) \right).\]

The random variables \{\overline{\omega_i} \omega_j : i < j\} are pairwise independent so we have

    \[\Var\left( \omega^* A \omega \right) = \Var \left( \sum_{i<j} 2 \Re(A_{ij} \overline{\omega_i} \omega_j) \right) = 4\sum_{i<j} \Var \left( \Re(A_{ij} \overline{\omega_i} \omega_j) \right).\]

Since \overline{\omega}_i \omega_j is uniformly distributed on the complex unit circle, we can assume without loss of generality that A_{ij} = |A_{ij}|. Thus, letting \phi be uniform on the complex unit circle,

    \[\Var\left( \omega^* A \omega \right) = 4\sum_{i<j} \Var \left( |A_{ij}|\Re(\phi) \right) = 4\Var\left( \Re(\phi) \right)\sum_{i<j}|A_{ij}|^2.\]

The real and imaginary parts of \phi have the same distribution so

    \[1 = \Var(\phi) = \Var(\Re \phi) + \Var(\Im \phi) = 2 \Var(\Re \phi)\]

so \Var(\Re \phi) = 1/2. Thus

    \[\Var\left( \omega^* A \omega \right) = 2 \sum_{i<j}|A_{ij}|^2 = \sum_{i\ne j} |A_{ij}|^2.\]

Uniform on the Complex Sphere: Derivation 1 by Reduction to Real Case

There are at least three simple ways of deriving this result: the camel trick, reduction to the real case, and Haar integration. Each of these techniques illustrates a trick that is useful in its own right beyond the context of trace estimation. Since we have already seen an example of the camel trick for the real sphere, I will present the other two derivations.

Let us begin with the reduction to the real case. Let \mathfrak{R}(\cdot) and \mathfrak{I}(\cdot) denote the real and imaginary parts of a vector or matrix, taken entrywise. The key insight is that if \omega is a uniform random vector on the complex sphere of radius \sqrt{n}, then

    \[\mathscr{R}(\omega) := \twobyone{\mathfrak{R}(\omega)}{\mathfrak{I}(\omega)}\in\real^{2n} \quad \text{is a uniform random vector on the real sphere of radius $\sqrt{n}$}.\]

We’ve converted the complex vector \omega into a real vector \mathscr{R}(\omega).

Now, we need to convert the complex matrix A into a real matrix \mathscr{R}(A). To do this, recall that one way of representing complex numbers is by 2\times 2 matrices:

    \[a + bi \iff \twobytwo{a}{-b}{b}{a}.\]

Using this correspondence, addition and multiplication of complex numbers can be carried out by addition and multiplication of the corresponding matrices.

To convert complex matrices to real matrices, we use a matrix-version of the same representation:

    \[\mathscr{R}(A) = \twobytwo{\mathfrak{R}(A)}{-\mathfrak{I}(A)}{\mathfrak{I}(A)}{\mathfrak{R}(A)}.\]

One can check that addition and multiplication of complex matrices can be carried out by addition and multiplication of the corresponding “realified” matrices, i.e.,

    \[\mathscr{R}(A + B) = \mathscr{R}(A) + \mathscr{R}(B), \quad \mathscr{R}(A\cdot B) = \mathscr{R}(A) \cdot \mathscr{R}(B)\]

holds for all complex matrices A and B.

We’ve now converted complex matrix A and vector \omega into real matrix \mathscr{R}(A) and vector \mathscr{R}(\omega). Let’s compare \omega^*A\omega to \mathscr{R}(\omega)^\top\mathscr{R}(A)\mathscr{R}(\omega). A short calculation reveals

    \[\omega^*A\omega = \mathscr{R}(\omega)^\top \mathscr{R}(A)\mathscr{R}(\omega) .\]

Since \mathscr{R}(\omega) is a uniform random vector on the sphere of radius \sqrt{n}, \sqrt{2}\cdot \mathscr{R}(\omega) is a uniform random vector on the sphere of radius \sqrt{2n}. Thus, by the variance formula for the real sphere, we get

    \[\Var(\omega^*A\omega) = \Var[(\sqrt{2}\mathscr{R}(\omega))^\top (\mathscr{R}(A)/2)(\sqrt{2}\mathscr{R}(\omega) )] = \frac{4n}{2n+2} \left[ \|\mathscr{R}(A)/2\|_{\rm F}^2 - \frac{1}{8n}(\tr\mathscr{R}(A))^2 \right].\]

A short calculation verifies that \tr \mathscr{R}(A) = 2\tr A and \norm{\mathscr{R}(A)}_{\rm F}^2 = 2\|A\|_{\rm F}^2. Plugging this in, we obtain

    \[\Var(\omega^*A\omega)= \frac{n}{n+1} \left[ \|A\|_{\rm F}^2 - \frac{1}{n}(\tr A)^2  \right].\]
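The realification identities used above, \omega^*A\omega = \mathscr{R}(\omega)^\top\mathscr{R}(A)\mathscr{R}(\omega), \tr\mathscr{R}(A) = 2\tr A, and \norm{\mathscr{R}(A)}_{\rm F}^2 = 2\|A\|_{\rm F}^2, are easy to verify numerically for a Hermitian A (my own sketch):

    import numpy as np

    rng = np.random.default_rng(9)
    n = 6
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A = (B + B.conj().T) / 2                                       # Hermitian matrix
    w = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    w = np.sqrt(n) * w / np.linalg.norm(w)                         # point on the complex sphere

    RA = np.block([[A.real, -A.imag], [A.imag, A.real]])           # realified matrix R(A)
    Rw = np.concatenate([w.real, w.imag])                          # realified vector R(w)

    print(np.isclose((w.conj() @ A @ w).real, Rw @ RA @ Rw))       # quadratic forms agree
    print(np.isclose(np.trace(RA), 2 * np.trace(A).real))          # tr R(A) = 2 tr A
    print(np.isclose(np.sum(RA**2), 2 * np.sum(np.abs(A)**2)))     # ||R(A)||_F^2 = 2 ||A||_F^2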

Uniform on the Complex Sphere: Derivation 2 by Haar Integration

The proof by reduction to the real case requires some cumbersome calculations and requires that we have already computed the variance in the real case by some other means. The method of Haar integration is more slick, but it requires some pretty high-power machinery. Haar integration may be a little bit overkill for this problem, but this technique is worth learning as it can handle some truly nasty expected value computations that appear, for example, in quantum information.

We seek to compute

    \[\mathbb{E} [(\omega^*A \omega)^2].\]

The first trick will be to write this expression using a single matrix trace using the tensor (Kronecker) product \otimes. For those unfamiliar with the tensor product, the main properties we will be using are

(6)   \[(A\otimes B) (C\otimes D) = (AC) \otimes (BD), \quad \tr(A\otimes B) = \tr A \cdot \tr B. \]

We saw in the proof of unbiasedness that

    \[\omega^* A \omega = \tr (\omega^*A\omega) = \tr (A \omega\omega^*).\]

Therefore, by (6),

    \[(\omega^*A\omega)^2 = (\tr [A \omega\omega^*])^2 = \tr [A\omega\omega^* \otimes A\omega\omega^*] = \tr [(A\otimes A) (\omega\omega^* \otimes \omega\omega^*)].\]

Thus, to evaluate \mathbb{E}[(\omega^*A\omega)^2], it will be sufficient to evaluate \mathbb{E}[\omega\omega^* \otimes \omega\omega^*]. Fortunately, there is a useful formula for this expectation provided by a field of mathematics known as representation theory (see Lemma 1 in this paper):

    \[\mathbb{E}[ \omega\omega^* \otimes \omega\omega^*] = \frac{2n}{n+1} \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)}.\]

Here, \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)} is the orthogonal projection onto the space of symmetric two-tensors \operatorname{Sym}^2(\complex^n) = \operatorname{span} \{ v \otimes v : v \in \complex^n \}. Therefore, we have that

    \[\mathbb{E}[(\omega^*A\omega)^2] = \tr [(A\otimes A) \mathbb{E}(\omega\omega^* \otimes \omega\omega^*)] = \frac{2n}{n+1} \tr [(A\otimes A) \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)}].\]

To evaluate the trace on the right-hand side of this equation, there is another formula (see Lemma 6 in this paper):

    \[\tr \left[(A\otimes B) \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)}\right] = \frac{1}{2} \left( \tr(AB) + \tr A \cdot \tr B \right).\]

Therefore, we conclude

    \begin{align*}\Var(\omega^* A \omega) &= \mathbb{E}[(\omega^*A\omega)^2] - (\mathbb{E}[\omega^*A\omega])^2 \\&= \frac{2n}{n+1}\tr [(A\otimes A) \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)}] - (\tr A)^2 \\&= \frac{n}{n+1}\left[ \tr A^2 + (\tr A)^2 \right] - (\tr A)^2 \\&= \frac{n}{n+1}\left[ \left\|A\right\|_{\rm F}^2 - \frac{1}{n} (\tr A)^2 \right].\end{align*}
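Both representation-theoretic formulas can be checked numerically for small n. The sketch below (my own) builds the projection onto \operatorname{Sym}^2(\complex^n) as (I+S)/2, where S is the swap operator on \complex^n\otimes\complex^n, verifies the trace identity exactly, and verifies the moment formula by Monte Carlo.

    import numpy as np

    rng = np.random.default_rng(10)
    n = 4

    def random_hermitian(n):
        B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        return (B + B.conj().T) / 2

    A, B = random_hermitian(n), random_hermitian(n)

    # Swap operator: S maps e_i (x) e_j to e_j (x) e_i; P projects onto Sym^2(C^n)
    S = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            S[i * n + j, j * n + i] = 1.0
    P = (np.eye(n * n) + S) / 2

    # Trace formula: tr[(A (x) B) P] = (tr(AB) + tr A * tr B) / 2, exactly
    lhs = np.trace(np.kron(A, B) @ P)
    rhs = 0.5 * (np.trace(A @ B) + np.trace(A) * np.trace(B))
    print(np.isclose(lhs, rhs))

    # Moment formula: E[ww* (x) ww*] = (2n/(n+1)) P, checked by Monte Carlo
    trials = 20_000
    M = np.zeros((n * n, n * n), dtype=complex)
    for _ in range(trials):
        w = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        w = np.sqrt(n) * w / np.linalg.norm(w)
        ww = np.outer(w, w.conj())
        M += np.kron(ww, ww) / trials
    print(np.linalg.norm(M - (2 * n / (n + 1)) * P) / np.linalg.norm(P))   # small relative error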

Proof of Optimality Properties

In this section, we provide proofs of the two optimality properties.

Optimality: Independent Vectors with Independent Coordinates

Assume A is real and symmetric and suppose that \omega is isotropic (2) with independent coordinates. The isotropy condition

    \[\mathbb{E}[\omega\omega^\top] = I\]

implies that \mathbb{E}[\omega_i\omega_j] = \delta_{ij}, where \delta is the Kronecker symbol. Using this fact, we compute the second moment:

    \begin{align*}\mathbb{E}[ (\omega^*A \omega)^2] &= \mathbb{E}\left[ \left( \sum_{i=1}^n A_{ii} \omega_i^2 +2 \sum_{i<j} A_{ij}\omega_i\omega_j \right)^2\right] \\&= \sum_{i=1}^n A_{ii}^2 \mathbb{E}[\omega_i^4] + \sum_{i<j} (2A_{ii}A_{jj}+4A_{ij}^2) \mathbb{E}[\omega_i^2]\mathbb{E}[\omega_j^2] \\&= \sum_{i=1}^n A_{ii}^2 \mathbb{E}[\omega_i^4] + \sum_{i<j} (2A_{ii}A_{jj}+4A_{ij}^2) .\end{align*}

Thus

    \[\Var(\omega^*A\omega) = \mathbb{E}[ (\omega^*A \omega)^2] - (\mathbb{E}[\omega^* A \omega])^2 = \sum_{i=1}^n A_{ii}^2 (\mathbb{E}[|\omega_i|^4]-1) + 4\sum_{i<j} A_{ij}^2.\]

The variance is minimized by choosing \omega with \mathbb{E} \omega_i^4 as small as possible. Since \mathbb{E} \omega_i^2 = 1, the smallest possible value for \mathbb{E} \omega_i^4 is \mathbb{E} \omega_i^4 = 1, which is obtained by populating \omega with random signs.
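The displayed variance formula can be sanity-checked for any isotropic distribution with independent coordinates. The sketch below (my own) uses coordinates uniform on [-\sqrt{3},\sqrt{3}], for which \mathbb{E}[\omega_i^2] = 1 and \mathbb{E}[\omega_i^4] = 9/5.

    import numpy as np

    rng = np.random.default_rng(11)
    n, trials = 20, 200_000
    A = rng.standard_normal((n, n))
    A = (A + A.T) / 2

    # Isotropic test vectors with independent coordinates uniform on [-sqrt(3), sqrt(3)]
    W = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(trials, n))
    vals = np.einsum('ij,jk,ik->i', W, A, W)

    fourth = 9 / 5                                            # E[w_i^4] for this distribution
    predicted = np.sum(np.diag(A)**2) * (fourth - 1) + 4 * np.sum(np.triu(A, 1)**2)
    print(vals.var(), predicted)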

Optimality: Independent Vectors

This result appears to have first been proven by Richard Kueng in unpublished work. We use an argument suggested to me by Robert J. Webber.

Assume \mathscr{A} is a class of \field-Hermitian matrices closed under \field-unitary similarity transformations and that \omega is an isotropic random vector (2). Decompose the test vector as

    \[\omega = a \cdot s \quad \text{for} \quad a \in [0,+\infty), \: s \in\{x\in \field^n : x^*x = n \}.\]

First, we shall show that the variance is reduced by replacing s with a vector t drawn uniformly from the sphere

(7)   \[\sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega}) \le \sup_{A\in\mathscr{A}} \Var(\omega^*A\omega), \]

where

(8)   \[\tilde{\omega} = a\cdot t \quad \text{and}\quad t\sim \text{Uniform} \{ x \in \field^n :x^*x = n \} \quad \text{is independent of $a$}. \]

Note that such a t can be generated as t = Qs for a uniformly random \field-unitary matrix Q. Therefore, we have

    \begin{align*}\sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega})&= \sup_{A\in\mathscr{A}} \left[\mathbb{E}[(\tilde{\omega}^*A\tilde{\omega})^2] - (\tr A)^2\right]\\&= \sup_{A\in\mathscr{A}} \left[\mathbb{E}[(a^2 \cdot s^*(Q^*AQ)s)^2] - (\tr (Q^*AQ))^2\right].\end{align*}

Now apply Jensen’s inequality only over the randomness in Q to obtain

    \begin{align*}\sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega})&= \sup_{A\in\mathscr{A}} \left[\mathbb{E}[(a^2 \cdot s^*(Q^*AQ)s)^2] - (\tr (Q^*AQ))^2\right] \\&\le \mathbb{E}_Q \sup_{A\in\mathscr{A}} \left[\mathbb{E}_{a,s}[(a^2 \cdot s^*(Q^*AQ)s)^2] - (\tr (Q^*AQ))^2\right].\end{align*}

Finally, note that since \mathscr{A} is closed under \field-unitary similarity transformations, the supremum over matrices Q^*AQ for A \in \mathscr{A} is the same as the supremum over A \in \mathscr{A}, so we obtain

    \begin{align*}\sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega})&\le \mathbb{E}_Q \sup_{A\in\mathscr{A}} \left[\mathbb{E}_{a,s}[(a^2 \cdot s^*(Q^*AQ)s)^2] - (\tr (Q^*AQ))^2\right] \\&= \mathbb{E}_Q \sup_{A\in\mathscr{A}} \left[\mathbb{E}_{a,s}[(a^2 \cdot s^*As)^2] - (\tr A)^2\right] \\&= \sup_{A\in\mathscr{A}} \Var(\omega^*A\omega).\end{align*}

We have successfully proven (7). This argument is a specialized version of a far more general result which appears as Proposition 4.1 in this paper.

Next, we shall prove

(9)   \[\sup_{A\in\mathscr{A}} \Var(t^*At) \le \sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega}), \]

where t is still defined as in (8). Indeed, using the chain rule for variance, we obtain

    \begin{align*}\Var(\tilde{\omega}^*A\tilde{\omega})&= \Var(a^2\cdot t^*At) \\&= \mathbb{E}[\Var(a^2\cdot t^* A t \mid a)] + \Var(\mathbb{E}[a^2\cdot t^* A t \mid a]) \\&= \mathbb{E}[a^4]\Var(t^* A t )+ (\tr A)^2\Var(a^2) \\&\ge \mathbb{E}[a^4]\Var(t^* A t ).\end{align*}

Here, we have used that t is uniform on the sphere and thus \mathbb{E}[t^*At] = \tr A. By definition, a is the length of \omega divided by \sqrt{n}. Therefore,

    \[\mathbb{E}[a^2] = \frac{1}{n}\mathbb{E}[\omega^*\omega] = \frac{1}{n} \mathbb{E}[\tr (\omega\omega^*)] = \frac{1}{n} \tr (\mathbb{E}[\omega\omega^*]) = \frac{\tr I}{n} = 1.\]

Therefore, by Jensen’s inequality,

    \[\mathbb{E}[a^4] = \mathbb{E}[(a^2)^2] \ge (\mathbb{E}[a^2])^2 = 1.\]

Thus

    \[\Var(\tilde{\omega}^*A\tilde{\omega}) \ge \mathbb{E}[a^4]\Var(t^* A t ) \ge \Var(t^*At) \quad \text{for every }A,\]

which proves (9).