Stochastic Trace Estimation

I am delighted to share that Joel A. Tropp, Robert J. Webber, and I have released our paper XTrace: Making the Most of Every Sample in Stochastic Trace Estimation as a preprint on arXiv. In it, we consider the implicit trace estimation problem:

Implicit trace estimation problem: Given access to a square matrix A via the matrix–vector product operation \omega \mapsto A\omega, estimate its trace \tr A = \sum_{i=1}^n A_{ii}.

Algorithms for this task have many uses such as log-determinant computations in machine learning, partition function calculations in statistical physics, and generalized cross validation for smoothing splines. I described another application to counting triangles in a large network in a previous blog post.

Our paper presents new trace estimators XTrace and XNysTrace which are highly efficient, producing accurate trace approximations using a small budget of matrix–vector products. In addition, these algorithms are fast to run and are supported by theoretical results which explain their excellent performance. I really hope that you will check out the paper to learn more about these estimators!

For the rest of this post, I’m going to talk about the most basic stochastic trace estimation algorithm, the Girard–Hutchinson estimator. This seemingly simple algorithm exhibits a number of nuances and forms the backbone for more sophisticated trace estimators such as Hutch++, Nyström++, XTrace, and XNysTrace. Toward the end, this blog post will be fairly mathematical, but I hope that the beginning will be fairly accessible to all.

Girard–Hutchinson Estimator: The Basics

The Girard–Hutchinson estimator for the trace of a square matrix A is

(1)   \[\hat{\tr} = \frac{1}{m} \sum_{i=1}^m \omega_i^* A \omega_i. \]

Here, \omega_1,\ldots,\omega_m are random vectors, usually chosen to be statistically independent, and {}^* denotes the conjugate transpose of a vector or matrix. The Girard–Hutchinson estimator only depends on the matrix A through the matrix–vector products A\omega_1,\ldots,A\omega_m.
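
To make this concrete, here is a minimal Python/NumPy sketch of the Girard–Hutchinson estimator (my own illustrative code; the helper name girard_hutchinson and the example matrix are not from the paper). It accesses A only through a user-supplied matrix–vector product, exactly as in the implicit trace estimation problem above.

    import numpy as np

    def girard_hutchinson(matvec, n, m, rng=None):
        # Estimate tr(A) from m matrix-vector products with A.
        # matvec implements w -> A @ w, the only access to A we assume.
        rng = np.random.default_rng(rng)
        total = 0.0
        for _ in range(m):
            w = rng.standard_normal(n)   # isotropic (real Gaussian) test vector
            total += w @ matvec(w)       # single-sample estimate w^T A w
        return total / m

    # Illustrative usage on a random psd matrix
    rng = np.random.default_rng(0)
    B = rng.standard_normal((100, 100))
    A = B @ B.T
    print(girard_hutchinson(lambda w: A @ w, n=100, m=50), np.trace(A))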

Unbiasedness

Provided the random vectors are isotropic

(2)   \[\mathbb{E} [\omega_i\omega_i^*] = I, \]

the Girard–Hutchinson estimator is unbiased:

(3)   \[\mathbb{E} [\hat{\tr}] = \tr A.\]

Let us confirm this claim in some detail. First, we use linearity of expectation to evaluate

(4)   \[\mathbb{E} [\hat{\tr}] = \mathbb{E} \left[ \frac{1}{m} \sum_{i=1}^m \omega_i^*A\omega_i \right] = \frac{1}{m} \sum_{i=1}^m \mathbb{E} \left[ \omega_i^* A \omega_i\right]. \]

Therefore, to prove that \mathbb{E} [\hat{\tr}] = \tr A, it is sufficient to prove that \mathbb{E} \left[\omega_i^*A\omega_i\right] = \tr A for each i.

When working with traces, there are two tricks that solve 90% of derivations. The first trick is that, if we view a number as a 1\times 1 matrix, then a number equals its trace, x = \tr x. The second trick is the cyclic property: For a k\times p matrix B and a p\times k matrix C, we have \tr (BC) = \tr (CB). The cyclic property should be handled with care when one works with a product of three or more matrices. For instance, we have

    \[\tr[BCD] = \tr[(BC)D] = \tr[D(BC)] = \tr[DBC].\]

However,

    \[\tr [BCD] \ne \tr[CBD] \quad \text{in general}.\]

One should think of the matrix product BCD as beads on a closed loop of string. One can move the last bead D to the front of the other two, \tr [BCD] = \tr[DBC], but not interchange two beads, \tr[BCD] \ne \tr[CBD].

With these tricks in hand, let’s return to proving that \mathbb{E} \left[\omega_i^*A\omega_i\right] = \tr A for every i. Apply our two tricks:

    \[\mathbb{E} \left[\omega_i^*A\omega_i\right] = \mathbb{E} \tr \left[\omega_i^*A\omega_i\right] = \mathbb{E} \tr \left[A\omega_i\omega_i^*\right].\]

The expectation is a linear operation and the matrix A is non-random, so we can bring the expectation into the trace as

    \[\mathbb{E} \left[\omega_i^*A\omega_i\right] = \mathbb{E} \tr \left[A\omega_i\omega_i^*\right] = \tr(A \mathbb{E}[\omega_i\omega_i^*] ).\]

Invoke the isotropy condition (2) and conclude:

    \[\mathbb{E} \left[\omega_i^*A\omega_i\right] = \tr(A \mathbb{E}[\omega_i\omega_i^*] ) = \tr(A\cdot I) = \tr A.\]

Plugging this into (4) confirms the unbiasedness claim (3).

Variance

Continue to assume that the \omega_i’s are isotropic (2) and now assume that \omega_1,\ldots,\omega_m are independent. By independence, the variance can be written as

    \[\Var(\hat{\tr}) = \frac{1}{m^2} \sum_{i=1}^m \Var(\omega_i^*A\omega_i).\]

Assuming that \omega_1,\ldots,\omega_m are identically distributed \omega_1,\ldots,\omega_m \sim \omega, we then get

    \[\Var(\hat{\tr}) = \frac{1}{m} \Var(\omega^*A\omega).\]

The variance decreases like 1/m, which is characteristic of Monte Carlo-type algorithms. Since \hat{\tr} is unbiased (see (3)), this means that the mean-square error decays like 1/m, so the average error (more precisely, the root-mean-square error) decays like

    \[\left| \hat{\tr} - \tr A \right| \lessapprox \frac{\mathrm{const}}{\sqrt{m}}.\]

This type of convergence is very slow. If I want to decrease the error by a factor of 10, I must do 100\times the work!

Variance-reduced trace estimators like Hutch++ and our new trace estimator XTrace improve the rate of convergence substantially. Even in the worst case, Hutch++ and XTrace reduce the variance at a rate 1/m^2 and the (root-mean-square) error at a rate 1/m:

    \[\Var(\hat{\tr}_{\text{H++ or X}}) \le \frac{\mathrm{const}}{m^2},\quad \left| \hat{\tr}_{\text{H++ or X}} - \tr A \right| \lessapprox \frac{\mathrm{const}}{m}.\]

For matrices with rapidly decreasing singular values, the variance and error can decrease much faster than this.

Variance Formulas

As the rate of convergence for the Girard–Hutchinson estimator is so slow, it is imperative to pick a distribution on test vectors \omega that makes the variance of the single–sample estimate \omega^*A\omega as low as possible. In this section, we will provide several explicit formulas for the variance of the Girard–Hutchinson estimator. Derivations of these formulas will appear at the end of this post. These variance formulas help illuminate the benefits and drawbacks of different test vector distributions.

To express the formulas, we will need some notation. For a complex number z = a + bi we use \Re(z) = a and \Im(z) = b to denote the real and imaginary parts. The variance of a random complex number z is

    \[\Var(z) := \mathbb{E} |z - \mathbb{E} z|^2 = \Var(\Re z) + \Var(\Im z).\]

The Frobenius norm of a matrix A is

    \[\left\|A\right\|_{\rm F}^2 = \sum_{i,j} |A_{ij}|^2.\]

If A is real symmetric or complex Hermitian with (real) eigenvalues \lambda_1,\ldots,\lambda_n, we have

(5)   \[\left\|A\right\|_{\rm F}^2 = \sum_{i=1}^n \lambda_i^2. \]

A^\top denotes the ordinary transpose of A and A^* denotes the conjugate transpose of A.

Real-Valued Test Vectors

We first focus on real-valued test vectors \omega. Since \omega is real, we can use the ordinary transpose {}^\top rather than the conjugate transpose {}^*. Since \omega^\top A\omega is a number, it is equal to its own transpose:

    \[\omega^\top A \omega = (\omega^\top A \omega)^\top = \omega^\top A^\top \omega.\]

Therefore,

    \[\omega^\top A\omega = \frac{\omega^\top A \omega + \omega^\top A^\top \omega}{2} = \omega^\top \left( \frac{A + A^\top}{2} \right)\omega.\]

The Girard–Hutchinson trace estimator applied to A is the same as the Girard–Hutchinson estimator applied to the symmetric part of A, (A+A^\top)/2.

For the following results, assume A is symmetric, A = A^\top.

  1. Real Gaussian: \omega_1,\ldots,\omega_m are independent standard normal random vectors.

        \[\Var(\omega^\top A\omega) = 2 \left\|A\right\|_{\rm F}^2.\]

  2. Uniform signs (Rademachers): \omega_1,\ldots,\omega_m are independent random vectors with uniform \pm 1 coordinates.

        \[\Var(\omega^\top A \omega) = 2\sum_{i\ne j} |A_{ij}|^2.\]

  3. Real sphere: Assume \omega_1,\ldots,\omega_m are uniformly distributed on the real sphere of radius \sqrt{n}: \omega \sim \text{Uniform} \{x\in \mathbb{R}^n : x^\top x = n\}.

        \[\Var(\omega^\top A\omega) = \frac{2n}{n+2} \left( \left\|A\right\|_{\rm F}^2 - \frac{1}{n} |\tr A|^2 \right).\]

These formulas continue to hold for nonsymmetric A by replacing A by its symmetric part (A+A^\top)/2 on the right-hand sides of these variance formulas.
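
These formulas are easy to sanity-check by simulation. The following Python/NumPy sketch (my own illustrative code, with arbitrary example parameters) compares Monte Carlo estimates of \Var(\omega^\top A\omega) against the three formulas above for a random real symmetric matrix.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10
    B = rng.standard_normal((n, n))
    A = (B + B.T) / 2            # random real symmetric test matrix

    def sample_var(draw, trials=100_000):
        vals = np.empty(trials)
        for t in range(trials):
            w = draw()
            vals[t] = w @ A @ w
        return vals.var()

    gaussian = lambda: rng.standard_normal(n)
    rademacher = lambda: rng.choice([-1.0, 1.0], size=n)
    def sphere():
        g = rng.standard_normal(n)
        return np.sqrt(n) * g / np.linalg.norm(g)

    fro2, tr = np.linalg.norm(A, 'fro')**2, np.trace(A)
    print(sample_var(gaussian),   2 * fro2)
    print(sample_var(rademacher), 2 * (fro2 - np.sum(np.diag(A)**2)))
    print(sample_var(sphere),     2*n/(n+2) * (fro2 - tr**2 / n))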

Complex-Valued Test Vectors

We now move our focus to complex-valued test vectors \omega. As a rule of thumb, the variance for complex-valued test vectors applied to a real symmetric matrix A is typically about half that of the natural real counterpart; for instance, complex Gaussian test vectors give about half the variance of real Gaussian test vectors.

A square complex matrix has a Cartesian decomposition

    \[A = A^{\rm H} + i A^{\rm SH}\]

where

    \[A^{\rm H} = \frac{A+A^*}{2} ,\quad A^{\rm SH} = \frac{A - A^*}{2i}\]

denote the Hermitian and skew-Hermitian parts of A. Similar to how the imaginary part of a complex number is real, the skew-Hermitian part of a complex matrix is Hermitian (and i A^{\rm SH} is skew-Hermitian). Since A^{\rm H} and A^{\rm SH} are both Hermitian, we have

    \[\Re(\omega^* A\omega) = \omega^* A^{\rm H} \omega, \quad \Im (\omega^* A \omega) = \omega^* A^{\rm SH} \omega.\]

Consequently, the variance of \omega^*A \omega can be broken into Hermitian and skew-Hermitian parts:

    \[\Var(\omega^* A\omega) = \Var(\omega^* A^{\rm H}\omega) + \Var(\omega^* A^{\rm SH}\omega).\]

For this reason, we will state the variance formulas only for Hermitian A, with the formula for general A following from the Cartesian decomposition.

For the following results, assume A is Hermitian, A = A^*.

  1. Complex Gaussian: \omega_1,\ldots,\omega_m are independent standard complex Gaussian random vectors, i.e., each \omega_i has iid entries distributed as (g_1+ig_2)/\sqrt{2} for g_1,g_2 independent standard normal random variables.

        \[\Var(\omega^* A\omega) = \left\|A\right\|_{\rm F}^2.\]

  2. Uniform phases (Steinhauses): \omega_1,\ldots,\omega_m are independent random vectors whose entries are uniform on the complex unit circle \{ z \in \complex : |z| = 1 \}.

        \[\Var(\omega^* A \omega) = \sum_{i\ne j} |A_{ij}|^2.\]

  3. Complex sphere: Assume \omega_1,\ldots,\omega_m are uniformly distributed on the complex sphere of radius \sqrt{n}: \omega \sim \text{Uniform} \{x\in \complex^n : x^* x = n\}.

        \[\Var(\omega^* A\omega) = \frac{n}{n+1} \left( \left\|A\right\|_{\rm F}^2 - \frac{1}{n} |\tr A|^2 \right).\]
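
As in the real case, these formulas can be checked by simulation. The sketch below (again my own illustrative code) shows one way to generate each complex test-vector distribution and compares empirical variances to the formulas above for a random Hermitian matrix.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 10
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A = (B + B.conj().T) / 2     # random Hermitian test matrix

    def complex_gaussian():
        return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

    def steinhaus():             # uniform phases
        return np.exp(2j * np.pi * rng.random(n))

    def complex_sphere():
        g = complex_gaussian()
        return np.sqrt(n) * g / np.linalg.norm(g)

    def sample_var(draw, trials=100_000):
        vals = np.empty(trials)
        for t in range(trials):
            w = draw()
            vals[t] = (w.conj() @ A @ w).real   # real because A is Hermitian
        return vals.var()

    fro2, tr = np.linalg.norm(A, 'fro')**2, np.trace(A).real
    print(sample_var(complex_gaussian), fro2)
    print(sample_var(steinhaus),        fro2 - np.sum(np.abs(np.diag(A))**2))
    print(sample_var(complex_sphere),   n/(n+1) * (fro2 - tr**2 / n))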

Optimality Properties

Let us finally address the question of what the best choice of test vectors is for the Girard–Hutchinson estimator. We will state two results with different restrictions on \omega_1,\ldots,\omega_m.

Our first result, due to Hutchinson, is valid for real symmetric matrices with real test vectors.

Optimality (independent test vectors with independent coordinates). If the test vectors \omega_1,\ldots,\omega_m \in \mathbb{R}^n are isotropic (2), independent from each other, and have independent entries, then for any fixed real symmetric matrix A, the minimum variance for \hat{\tr} is obtained when \omega_1,\ldots,\omega_m are populated with random signs (\omega_i)_j \sim \textnormal{Uniform} \{\pm 1\}.

The next optimality results will have real and complex versions. To present the results for \mathbb{R}-valued and \complex-valued test vectors on a unified footing, let \field denote either \mathbb{R} or \complex. We let a \field-Hermitian matrix be either a real symmetric matrix (if \field = \mathbb{R}) or a complex Hermitian matrix (if \field = \complex). Let a \field-unitary matrix be either a real orthogonal matrix (if \field = \mathbb{R}) or a complex unitary matrix (if \field = \complex).

The condition that the vectors \omega_1,\ldots,\omega_m have independent entries is often too restrictive in practice. It rules out, for instance, the case of uniform vectors on the sphere. If we relax this condition, we get a different optimal distribution:

Optimality (independent test vectors). Consider any set \mathscr{A} of \field-Hermitian matrices which is invariant under \field-unitary similarity transformations:

    \[\text{If $A \in \mathscr{A}$ and $U$ is $\field$-unitary, then $U^*AU \in \mathscr{A}$.}\]

Assume that the test vectors \omega_1,\ldots,\omega_m are independent and isotropic (2). The worst-case variance \sup_{A \in \mathscr{A}} \Var(\hat{\tr}) is minimized by choosing \omega_1,\ldots,\omega_m uniformly on the \field-sphere: \omega_1,\ldots,\omega_m \sim \text{Uniform} \{ x \in \field^n : x^*x =n \}.

More simply, if you want your stochastic trace estimator to be effective for a class of inputs \mathscr{A} (closed under \field-unitary similarity transformations) rather than a single input matrix A, then the best choice is test vectors drawn uniformly from the sphere. Examples of classes of matrices \mathscr{A} include:

  • Fixed eigenvalues. For fixed real eigenvalues \lambda_1,\ldots,\lambda_n \in \mathbb{R}, the set of all \field-Hermitian matrices with these eigenvalues.
  • Density matrices. The class of all trace-one psd matrices.
  • Frobenius norm ball. The class of all \field-Hermitian matrices of Frobenius norm at most 1.

Derivation of Formulas

In this section, we provide derivations of the variance formulas. I have chosen to focus on derivations which are shorter but use more advanced techniques rather than derivations which are longer but use fewer tricks.

Real Gaussians

First assume A is real. Since A is real symmetric, A has an eigenvalue decomposition A = Q\Lambda Q^\top, where Q is orthogonal and \Lambda is a diagonal matrix listing A’s eigenvalues. Since the real Gaussian distribution is invariant under orthogonal transformations, \omega^\top A\omega = (Q^\top \omega)^\top \Lambda (Q^\top\omega) has the same distribution as \omega^\top \Lambda \omega. Therefore,

    \[\Var(\omega^\top A \omega) = \Var(\omega^\top \Lambda \omega) = \Var \left( \sum_{i=1}^n \lambda_i \omega_i^2 \right) = \sum_{i=1}^n \lambda_i^2 \Var(\omega_i^2) = 2\sum_{i=1}^n \lambda_i^2 = 2\left\|A\right\|_{\rm F}^2.\]

Here, we used that the variance of a squared standard normal random variable is two.

For a non-real matrix A, we can break the matrix A into its entrywise real and imaginary parts A = \mathfrak{R}(A) + i \, \mathfrak{I}(A). Thus,

    \[\Var(\omega^\top A \omega) = \Var(\omega^\top \mathfrak{R}(A) \omega) + \Var(\omega^\top \mathfrak{I}(A) \omega) = 2\left\|\mathfrak{R}(A)\right\|_{\rm F}^2 + 2\left\|\mathfrak{I}(A)\right\|_{\rm F}^2 = 2\left\|A\right\|_{\rm F}^2.\]

Uniform Signs

First, compute

    \[\omega^\top A \omega - \mathbb{E}[\omega^\top A \omega] = \sum_{i,j=1}^n A_{ij} \omega_i\omega_j - \sum_{i=1}^n A_{ii} = \sum_{i\ne j} A_{ij} \omega_i\omega_j + \sum_{i=1}^n A_{ii}(\omega_i^2-1).\]

For a vector \omega of uniform random signs, we have \omega_i^2 = 1 for every i, so the second sum vanishes. Note that we have assumed A symmetric, so the sum over i\ne j can be replaced by two times the sum over i < j:

    \[\omega^\top A \omega - \mathbb{E}[\omega^\top A \omega] = 2\sum_{i< j} A_{ij} \omega_i\omega_j.\]

Note that \{ \omega_i \omega_j : i < j\} are pairwise independent. As a simple exercise, one can verify that the identity

    \[\Var(a_1 X_1+\cdots+a_kX_k) = |a_1|^2 \Var(X_1) + \cdots + |a_k|^2 \Var(X_k)\]

holds for any pairwise independent family of random variables X_1,\ldots,X_k and numbers a_1,\ldots,a_k. Ergo,

    \begin{align*}\Var(\omega^\top A\omega) &= \Var(\omega^\top A \omega - \mathbb{E}[\omega^\top A \omega]) \\&= \Var\left(\sum_{i< j} 2A_{ij} \omega_i\omega_j\right) \\&= \sum_{i<j} 4 |A_{ij}|^2 \Var(\omega_i\omega_j) \\&= \sum_{i<j} 4 |A_{ij}|^2 \\&= 2 \sum_{i\ne j} |A_{ij}|^2.\end{align*}

In the second-to-last line, we use the fact that \omega_i\omega_j is a uniform random sign, which has variance 1. The final line is a consequence of the symmetry of A.

Uniform on the Real Sphere

The simplest proof I know is by the “camel principle”. Here’s the story (a lightly edited quotation from MathOverflow):

A father left 17 camels to his three sons and, according to the will, the eldest son was to be given half of the camels, the middle son one-third, and the youngest son one-ninth. The sons did not know what to do since 17 is not evenly divisible into either two, three, or nine parts, but a wise man helped the sons: he added his own camel, the oldest son took 18/2=9 camels, the second son took 18/3=6 camels, the third son took 18/9=2 camels, and the wise man took his own camel back and went away.

We are interested in a vector \omega which is uniform on the sphere of radius \sqrt{n}. Performing averages on the sphere is hard, so we add a camel to the problem by “upgrading” \omega to a spherically symmetric vector g which has a random length. We want to pick a distribution for which the computation \Var(g^\top A g) is easy. Fortunately, we already know such a distribution, the Gaussian distribution, for which we already calculated \Var(g^\top A g) = 2\left\|A\right\|_{\rm F}^2.

The Gaussian vector g and the uniform vector \omega on the sphere are related by

    \[g = \sqrt{\frac{a}{n}} \omega,\]

where a is the squared length of the Gaussian vector g. In particular, a has the distribution of the sum of n squared Gaussian random variables, which is known as a \chi^2 random variable with n degrees of freedom.

Now, we take the camel back. Compute the variance of g^\top A g using the chain rule for variance:

    \[\Var(g^\top A g) = \mathbb{E}[\Var(g^\top A g \mid a)] + \Var(\mathbb{E}[g^\top A g \mid a]).\]

Here, \Var(\cdot \mid a) and \mathbb{E}[ \cdot \mid a] denote the conditional variance and conditional expectation with respect to the random variable a. The quick and dirty way of working with these is to treat the random variable a “like a constant” with respect to the conditional variance and expectation.

Plugging in the formula g = \sqrt{a/n} \cdot \omega and treating a “like a constant”, we obtain

    \begin{align*}\Var(g^\top A g) &= \mathbb{E}[\Var(a/n \cdot \omega^\top A \omega \mid a)] + \Var(\mathbb{E}[a/n \cdot \omega^\top A \omega \mid a]) \\&=\mathbb{E}[(a/n)^2\Var(\omega^\top A \omega)] + \Var(a/n \cdot \mathbb{E}[\omega^\top A \omega]) \\&= \frac{1}{n^2} \mathbb{E}[a^2] \cdot \Var(\omega^\top A \omega) + \frac{1}{n^2} \Var(a) |\mathbb{E} [\omega^\top A \omega]|^2.\end{align*}

As we mentioned, a is a \chi^2 random variable with n degrees of freedom and \mathbb{E}[a^2] and \Var(a) are known quantities that can be looked up:

    \[\mathbb{E}[a^2] = n(n+2), \quad \Var(a) = 2n.\]

We know \Var(g^\top A g) = 2\left\|A\right\|_{\rm F}^2 and \mathbb{E} [\omega^\top A \omega] = \tr A. Plugging these all in, we get

    \[2\left\|A\right\|_{\rm F}^2 = \frac{n+2}{n} \Var(\omega^\top A\omega) + \frac{2}{n} |\tr A|^2.\]

Rearranging, we obtain

    \[\Var(\omega^\top A\omega) = \frac{2n}{n+2} \left( \left\|A\right\|_{\rm F}^2 - \frac{1}{n}|\tr A|^2\right).\]

Complex Gaussians

The trick is the same as for real Gaussians. By invariance of complex Gaussian random vectors under unitary transformations, we can reduce to the case where A is a diagonal matrix populated with eigenvalues \lambda_1,\ldots,\lambda_n. Then

    \[\Var(\omega^*A \omega) = \Var \left( \sum_{i=1}^n \lambda_i |\omega_i|^2 \right) = \sum_{i=1}^n \Var(|\omega_i|^2) \lambda_i^2 = \sum_{i=1}^n \lambda_i^2 = \left\|A\right\|_{\rm F}^2.\]

Here, we use the fact that 2|\omega_i|^2 is a \chi^2 random variable with two degrees of freedom, which has variance four.

Random Phases

The trick is the same as for uniform signs. A short calculation (remembering that A is Hermitian and thus \overline{A_{ij}} = A_{ji}) reveals that

    \[\Var\left( \omega^* A \omega \right) = \Var \left( \sum_{i<j} 2 \Re(A_{ij} \overline{\omega_i} \omega_j) \right).\]

The random variables \{\overline{\omega_i} \omega_j : i < j\} are pairwise independent so we have

    \[\Var\left( \omega^* A \omega \right) = \Var \left( \sum_{i<j} 2 \Re(A_{ij} \overline{\omega_i} \omega_j) \right) = 4\sum_{i<j} \Var \left( \Re(A_{ij} \overline{\omega_i} \omega_j) \right).\]

Since \overline{\omega}_i \omega_j is uniformly distributed on the complex unit circle, we can assume without loss of generality that A_{ij} = |A_{ij}|. Thus, letting \phi be uniform on the complex unit circle,

    \[\Var\left( \omega^* A \omega \right) = 4\sum_{i<j} \Var \left( |A_{ij}|\Re(\phi) \right) = 4\Var\left( \Re(\phi) \right)\sum_{i<j}|A_{ij}|^2.\]

The real and imaginary parts of \phi have the same distribution so

    \[1 = \Var(\phi) = \Var(\Re \phi) + \Var(\Im \phi) = 2 \Var(\Re \phi)\]

so \Var(\Re \phi) = 1/2. Thus

    \[\Var\left( \omega^* A \omega \right) = 2 \sum_{i<j}|A_{ij}|^2 = \sum_{i\ne j} |A_{ij}|^2.\]

Uniform on the Complex Sphere: Derivation 1 by Reduction to Real Case

There are at least three simple ways of deriving this result: the camel trick, reduction to the real case, and Haar integration. Each of these techniques illustrates a trick that is useful in its own right beyond the context of trace estimation. Since we have already seen an example of the camel trick for the real sphere, I will present the other two derivations.

Let us begin with the reduction to the real case. Let \mathfrak{R}(\cdot) and \mathfrak{I}(\cdot) denote the real and imaginary parts of a vector or matrix, taken entrywise. The key insight is that if \omega is a uniform random vector on the complex sphere of radius \sqrt{n}, then

    \[\mathscr{R}(\omega) := \twobyone{\mathfrak{R}(\omega)}{\mathfrak{I}(\omega)}\in\real^{2n} \quad \text{is a uniform random vector on the real sphere of radius $\sqrt{n}$}.\]

We’ve converted the complex vector \omega into a real vector \mathscr{R}(\omega).

Now, we need to convert the complex matrix A into a real matrix \mathscr{R}(A). To do this, recall that one way of representing complex numbers is by 2\times 2 matrices:

    \[a + bi \iff \twobytwo{a}{-b}{b}{a}.\]

Using this correspondence, addition and multiplication of complex numbers can be carried out by addition and multiplication of the corresponding matrices.

To convert complex matrices to real matrices, we use a matrix-version of the same representation:

    \[\mathscr{R}(A) = \twobytwo{\mathfrak{R}(A)}{-\mathfrak{I}(A)}{\mathfrak{I}(A)}{\mathfrak{R}(A)}.\]

One can check that addition and multiplication of complex matrices can be carried out by addition and multiplication of the corresponding “realified” matrices, i.e.,

    \[\mathscr{R}(A + B) = \mathscr{R}(A) + \mathscr{R}(B), \quad \mathscr{R}(A\cdot B) = \mathscr{R}(A) \cdot \mathscr{R}(B)\]

holds for all complex matrices A and B.

We’ve now converted complex matrix A and vector \omega into real matrix \mathscr{R}(A) and vector \mathscr{R}(\omega). Let’s compare \omega^*A\omega to \mathscr{R}(\omega)^\top\mathscr{R}(A)\mathscr{R}(\omega). A short calculation reveals

    \[\omega^*A\omega = \mathscr{R}(\omega)^\top \mathscr{R}(A)\mathscr{R}(\omega) .\]

Since \mathscr{R}(\omega) is a uniform random vector on the sphere of radius \sqrt{n}, \sqrt{2}\cdot \mathscr{R}(\omega) is a uniform random vector on the sphere of radius \sqrt{2n}. Thus, by the variance formula for the real sphere, we get

    \[\Var(\omega^*A\omega) = \Var[(\sqrt{2}\mathscr{R}(\omega))^\top (\mathscr{R}(A)/2)(\sqrt{2}\mathscr{R}(\omega) )] = \frac{4n}{2n+2} \left[ \|\mathscr{R}(A)/2\|_{\rm F}^2 - \frac{1}{8n}(\tr\mathscr{R}(A))^2 \right].\]

A short calculation verifies that \tr \mathscr{R}(A) = 2\tr A and \|\mathscr{R}(A)\|_{\rm F}^2 = 2\|A\|_{\rm F}^2. Plugging this in, we obtain

    \[\Var(\omega^*A\omega)= \frac{n}{n+1} \left[ \|A\|_{\rm F}^2 - \frac{1}{n}(\tr A)^2  \right].\]

Uniform on the Complex Sphere: Derivation 2 by Haar Integration

The proof by reduction to the real case involves some cumbersome calculations and requires that we have already computed the variance in the real case by some other means. The method of Haar integration is slicker, but it requires some fairly high-powered machinery. Haar integration may be a little bit overkill for this problem, but this technique is worth learning as it can handle some truly nasty expected value computations that appear, for example, in quantum information.

We seek to compute

    \[\mathbb{E} [(\omega^*A \omega)^2].\]

The first trick will be to write this expression using a single matrix trace using the tensor (Kronecker) product \otimes. For those unfamiliar with the tensor product, the main properties we will be using are

(6)   \[(A\otimes B) (C\otimes D) = (AB) \otimes (CD), \quad \tr(A\otimes B) = \tr A \cdot \tr B. \]

We saw in the proof of unbiasedness that

    \[\omega^* A \omega = \tr (\omega^*A\omega) = \tr (A \omega\omega^*).\]

Therefore, by (6),

    \[(\omega^*A\omega)^2 = (\tr [A \omega\omega^*])^2 = \tr [A\omega\omega^* \otimes A\omega\omega^*] = \tr [(A\otimes A) (\omega\omega^* \otimes \omega\omega^*)].\]

Thus, to evaluate \mathbb{E}[(\omega^*A\omega)^2], it will be sufficient to evaluate \mathbb{E}[\omega\omega^* \otimes \omega\omega^*]. Fortunately, there is a useful formula for this expectation provided by a field of mathematics known as representation theory (see Lemma 1 in this paper):

    \[\mathbb{E}[ \omega\omega^* \otimes \omega\omega^*] = \frac{2n}{n+1} \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)}.\]

Here, \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)} is the orthogonal projection onto the space of symmetric two-tensors \operatorname{Sym}^2(\complex^n) = \operatorname{span} \{ v \otimes v : v \in \complex^n \}. Therefore, we have that

    \[\mathbb{E}[(\omega^*A\omega)^2] = \tr [(A\otimes A) \mathbb{E}(\omega\omega^* \otimes \omega\omega^*)] = \frac{2n}{n+1} \tr [(A\otimes A) \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)}].\]

To evaluate the trace on the right-hand side of this equation, there is another formula (see Lemma 6 in this paper):

    \[\tr \left[(A\otimes B) \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)}\right] = \frac{1}{2} \left( \tr(AB) + \tr A \cdot \tr B \right).\]

Therefore, we conclude

    \begin{align*}\Var(\omega^* A \omega) &= \mathbb{E}[(\omega^*A\omega)^2] - (\mathbb{E}[\omega^*A\omega])^2 \\&= \frac{2n}{n+1}\tr [(A\otimes A) \operatorname{Proj}_{\operatorname{Sym}^2(\complex^n)}] - (\tr A)^2 \\&= \frac{n}{n+1}\left[ \tr A^2 + (\tr A)^2 \right] - (\tr A)^2 \\&= \frac{n}{n+1}\left[ \left\|A\right\|_{\rm F}^2 - \frac{1}{n} (\tr A)^2 \right].\end{align*}
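
If the representation-theoretic formulas feel mysterious, they can at least be checked numerically. The following sketch (my own illustrative code) verifies the trace identity used in the last step, using the standard fact that the projection onto the symmetric two-tensors can be written as (I + \mathrm{SWAP})/2, where \mathrm{SWAP} exchanges the two tensor factors.

    import numpy as np

    n = 4
    rng = np.random.default_rng(0)
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A = (G + G.conj().T) / 2                      # random Hermitian test matrix

    # SWAP operator on C^n (x) C^n: SWAP (x tensor y) = y tensor x
    SWAP = np.zeros((n*n, n*n), dtype=complex)
    for i in range(n):
        for j in range(n):
            SWAP[i*n + j, j*n + i] = 1.0
    proj_sym2 = (np.eye(n*n) + SWAP) / 2          # projection onto symmetric two-tensors

    lhs = np.trace(np.kron(A, A) @ proj_sym2)
    rhs = 0.5 * (np.trace(A @ A) + np.trace(A)**2)
    print(np.allclose(lhs, rhs))                  # expect True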

Proof of Optimality Properties

In this section, we provide proofs of the two optimality properties.

Optimality: Independent Vectors with Independent Coordinates

Assume A is real and symmetric and suppose that \omega is isotropic (2) with independent coordinates. The isotropy condition

    \[\mathbb{E}[\omega\omega^\top] = I\]

implies that \mathbb{E}[\omega_i\omega_j] = \delta_{ij}, where \delta is the Kronecker symbol. Using this fact, we compute the second moment:

    \begin{align*}\mathbb{E}[ (\omega^*A \omega)^2] &= \mathbb{E}\left[ \left( \sum_{i=1}^n A_{ii} \omega_i^2 +2 \sum_{i<j} A_{ij}\omega_i\omega_j \right)^2\right] \\&= \sum_{i=1}^n A_{ii}^2 \mathbb{E}[\omega_i^4] + \sum_{i<j} (2A_{ii}A_{jj}+4A_{ij}^2) \mathbb{E}[\omega_i^2]\mathbb{E}[\omega_j^2] \\&= \sum_{i=1}^n A_{ii}^2 \mathbb{E}[\omega_i^4] + \sum_{i<j} (2A_{ii}A_{jj}+4A_{ij}^2) .\end{align*}

Thus

    \[\Var(\omega^*A\omega) = \mathbb{E}[ (\omega^*A \omega)^2] - (\mathbb{E}[\omega^* A \omega])^2 = \sum_{i=1}^n A_{ii}^2 (\mathbb{E}[|\omega_i|^4]-1) + 4\sum_{i<j} A_{ij}^2.\]

The variance is minimized by choosing \omega with \mathbb{E} \omega_i^4 as small as possible. Since \mathbb{E} \omega_i^2 = 1, the smallest possible value for \mathbb{E} \omega_i^4 is \mathbb{E} \omega_i^4 = 1, which is obtained by populating \omega with random signs.

Optimality: Independent Vectors

This result appears to have first been proven by Richard Kueng in unpublished work. We use an argument suggested to me by Robert J. Webber.

Assume \mathscr{A} is a class of \field-Hermitian matrices closed under \field-unitary similarity transformations and that \omega is an isotropic random vector (2). Decompose the test vector as

    \[\omega = a \cdot s \quad \text{for} \quad a \in [0,+\infty), \: s \in\{x\in \field^n : x^*x = n \}.\]

First, we shall show that the variance is reduced by replacing s with a vector t drawn uniformly from the sphere

(7)   \[\sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega}) \le \sup_{A\in\mathscr{A}} \Var(\omega^*A\omega), \]

where

(8)   \[\tilde{\omega} = a\cdot t \quad \text{and}\quad t\sim \text{Uniform} \{ x \in \field^n :x^*x = n \} \quad \text{is independent of $a$}. \]

Note that such a t can be generated as t = Qs for a uniformly random \field-unitary matrix Q. Therefore, we have

    \begin{align*}\sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega})&= \sup_{A\in\mathscr{A}} \left[\mathbb{E}[(\tilde{\omega}^*A\tilde{\omega})^2] - (\tr A)^2\right]\\&= \sup_{A\in\mathscr{A}} \left[\mathbb{E}[(a^2 \cdot s^*(Q^*AQ)s)^2] - (\tr (Q^*AQ))^2\right].\end{align*}

Now apply Jensen’s inequality only over the randomness in Q to obtain

    \begin{align*}\sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega})&= \sup_{A\in\mathscr{A}} \left[\mathbb{E}[(a^2 \cdot s^*(Q^*AQ)s)^2] - (\tr (Q^*AQ))^2\right] \\&\le \mathbb{E}_Q \sup_{A\in\mathscr{A}} \left[\mathbb{E}_{a,s}[(a^2 \cdot s^*(Q^*AQ)s)^2] - (\tr (Q^*AQ))^2\right].\end{align*}

Finally, note that since \mathscr{A} is closed under \field-unitary similarity transformations, the supremum over Q^*AQ for A \in \mathscr{A} is the same as the supremum over A \in \mathscr{A}, so we obtain

    \begin{align*}\sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega})&\le \mathbb{E}_Q \sup_{A\in\mathscr{A}} \left[\mathbb{E}_{a,s}[(a^2 \cdot s^*(Q^*AQ)s)^2] - (\tr (Q^*AQ))^2\right] \\&= \mathbb{E}_Q \sup_{A\in\mathscr{A}} \left[\mathbb{E}_{a,s}[(a^2 \cdot s^*As)^2] - (\tr A)^2\right] \\&= \sup_{A\in\mathscr{A}} \Var(\omega^*A\omega).\end{align*}

We have successfully proven (7). This argument is a specialized version of a far more general result which appears as Proposition 4.1 in this paper.

Next, we shall prove

(9)   \[\sup_{A\in\mathscr{A}} \Var(t^*At) \le \sup_{A\in\mathscr{A}} \Var(\tilde{\omega}^*A\tilde{\omega}), \]

where t is still defined as in (8). Indeed, using the chain rule for variance, we obtain

    \begin{align*}\Var(\tilde{\omega}^*A\tilde{\omega})&= \Var(a^2\cdot t^*At) \\&= \mathbb{E}[\Var(a^2\cdot t^* A t \mid a)] + \Var(\mathbb{E}[a^2\cdot t^* A t \mid a]) \\&= \mathbb{E}[a^4]\Var(t^* A t )+ (\tr A)^2\Var(a^2) \\&\ge \mathbb{E}[a^4]\Var(t^* A t ).\end{align*}

Here, we have used that t is uniform on the sphere and thus \mathbb{E}[t^*At] = \tr A. By definition, a is the length of \omega divided by \sqrt{n}. Therefore,

    \[\mathbb{E}[a^2] = \frac{1}{n}\mathbb{E}[\omega^*\omega] = \frac{1}{n} \mathbb{E}[\tr (\omega\omega^*)] = \frac{1}{n} \tr (\mathbb{E}[\omega\omega^*]) = \frac{\tr I}{n} = 1.\]

Therefore, by Jensen’s inequality,

    \[\mathbb{E}[a^4] = \mathbb{E}[(a^2)^2] \ge (\mathbb{E}[a^2])^2 = 1.\]

Thus

    \[\Var(\tilde{\omega}^*A\tilde{\omega}) \ge \mathbb{E}[a^4]\Var(t^* A t ) \ge \Var(t^*At) \quad \text{for every }A,\]

which proves (9).

Low-Rank Approximation Toolbox: Nyström, Cholesky, and Schur

In the last post, we looked at the Nyström approximation to an N\times N positive semidefinite (psd) matrix A. A special case was the column Nyström approximation, defined (using MATLAB index notation to indicate submatrices of A) to be

(Nys)   \[\hat{A} = A(:,S) \, A(S,S)^{-1} \, A(S,:), \]

where S = \{s_1,\ldots,s_k\} \subseteq \{1,2,\ldots,N\} identifies a subset of k columns of A. Provided k\ll N, this allowed us to approximate all N^2 entries of the matrix A using only the kN entries in columns s_1,\ldots,s_k of A, a huge savings of computational effort!

With the column Nyström approximation presented just as such, many questions remain:

  • Why this formula?
  • Where did it come from?
  • How do we pick the columns s_1,\ldots,s_k?
  • What is the residual A - \hat{A} of the approximation?

In this post, we will answer all of these questions by drawing a connection between low-rank approximation by Nyström approximation and solving linear systems of equations by Gaussian elimination. The connection between these two seemingly unrelated areas of matrix computations will pay dividends, leading to effective algorithms to compute Nyström approximations by the (partial) Cholesky factorization of a positive (semi)definite matrix and an elegant description of the residual of the Nyström approximation as the Schur complement.

Cholesky: Solving Linear Systems

Suppose we want to solve the system of linear equations Ax = b, where A is a real N\times N invertible matrix and b is a vector of length N. The standard way of doing this in modern practice (at least for non-huge matrices A) is by means of Gaussian elimination/LU factorization. We factor the matrix A as a product A = LU of a lower triangular matrix L and an upper triangular matrix U. (To make this accurate, we usually have to reorder the rows of the matrix A as well; thus, we actually compute a factorization PA = LU, where P is a permutation matrix and L and U are triangular.) The system Ax = b is solved by first solving Ly = b for y and then Ux = y for x; the triangularity of L and U makes solving the associated systems of linear equations easy.

For a real symmetric positive definite matrix A, a simplification is possible. In this case, one can compute an LU factorization where the matrices L and U are transposes of each other, U = L^\top. This factorization A = LL^\top is known as a Cholesky factorization of the matrix A.

The Cholesky factorization can be easily generalized to allow the matrix A to be complex-valued. For a complex-valued positive definite matrix A, its Cholesky decomposition takes the form A = LL^*, where L is again a lower triangular matrix. All that has changed is that the transpose {}^\top has been replaced by the conjugate transpose {}^*. We shall work with the more general complex case going forward, though the reader is free to imagine all matrices as real and interpret the operation {}^* as ordinary transposition if they so choose.

Schur: Computing the Cholesky Factorization

Here’s one way of computing the Cholesky factorization using recursion. Write the matrix A in block form as

    \[A = \twobytwo{A_{11}}{A_{12}}{A_{21}}{A_{22}}. \]

Our first step will be to “block Cholesky factorize” the matrix A, factoring it as a product of matrices which are only block triangular. Then, we’ll “upgrade” this block factorization into a full Cholesky factorization.

The core idea of Gaussian elimination is to combine rows of a matrix to introduce zero entries. For our case, observe that multiplying the first block row of A by A_{21}A_{11}^{-1} and subtracting this from the second block row introduces a matrix of zeros into the bottom left block of A. (The matrix A_{11} is a principal submatrix of A and is therefore guaranteed to be positive definite and thus invertible. To see directly that A_{11} is positive definite, observe that since A is positive definite, x^* A_{11}x = \twobyone{x}{0}^* A\twobyone{x}{0} > 0 for every nonzero vector x.) In matrix language,

    \[\twobytwo{I}{0}{-A_{21}A_{11}^{-1}}{I}\twobytwo{A_{11}}{A_{12}}{A_{21}}{A_{22}} = \twobytwo{A_{11}}{A_{12}}{0}{A_{22} - A_{21}A_{11}^{-1}A_{12}}.\]

Isolating A on the left-hand side of this equation by multiplying by

    \[\twobytwo{I}{0}{-A_{21}A_{11}^{-1}}{I}^{-1} = \twobytwo{I}{0}{A_{21}A_{11}^{-1}}{I}\]

yields the block triangular factorization

    \[A = \twobytwo{A_{11}}{A_{12}}{A_{21}}{A_{22}} = \twobytwo{I}{0}{A_{21}A_{11}^{-1}}{I} \twobytwo{A_{11}}{A_{12}}{0}{A_{22} - A_{21}A_{11}^{-1}A_{12}}.\]

We’ve factored A into block triangular pieces, but these pieces are not (conjugate) transposes of each other. Thus, to make this equation more symmetrical, we can further decompose

(1)   \[A = \twobytwo{A_{11}}{A_{12}}{A_{21}}{A_{22}} = \twobytwo{I}{0}{A_{21}A_{11}^{-1}}{I} \twobytwo{A_{11}}{0}{0}{A_{22} - A_{21}A_{11}^{-1}A_{12}} \twobytwo{I}{0}{A_{21}A_{11}^{-1}}{I}^*. \]

This is a block version of the Cholesky decomposition of the matrix A taking the form A = LDL^*, where L is a block lower triangular matrix and D is a block diagonal matrix.

We’ve met the second of our main characters, the Schur complement

(Sch)   \[S = A_{22} - A_{21}A_{11}^{-1}A_{12}. \]

This seemingly innocuous combination of matrices has a tendency to show up in surprising places when one works with matrices. (See my post on the Schur complement for more examples.) Its appearance in any one place is unremarkable, but the sheer ubiquity of A_{22} - A_{21}A_{11}^{-1}A_{12} in matrix theory makes it deserving of its special name, the Schur complement. To us for now, the Schur complement is just the matrix appearing in the bottom right corner of our block Cholesky factorization.

The Schur complement enjoys the following property, which is a consequence of equation (1) together with the conjugation rule for positive (semi)definiteness that I discussed in this previous post:

Positivity of the Schur complement: If A=\twobytwo{A_{11}}{A_{12}}{A_{21}}{A_{22}} is positive (semi)definite, then the Schur complement S = A_{22} - A_{21}A_{11}^{-1}A_{12} is positive (semi)definite.

Since A is positive definite, we conclude that both A_{11} and the Schur complement S are positive definite.

With the positive definiteness of the Schur complement in hand, we now return to our Cholesky factorization algorithm. Continue by recursively computing Cholesky factorizations of the diagonal blocks (as always with recursion, one needs to specify the base case; for us, the base case is just that the Cholesky decomposition of a 1\times 1 matrix A is A = A^{1/2} \cdot A^{1/2}):

    \[A_{11} = L_{11}^{\vphantom{*}}L_{11}^*, \quad S = L_{22}^{\vphantom{*}}L_{22}^*.\]

Inserting these into the block LDL^* factorization (1) and simplifying gives a Cholesky factorization, as desired:

    \[A = \twobytwo{L_{11}}{0}{A_{21}^{\vphantom{*}}(L_{11}^{*})^{-1}}{L_{22}}\twobytwo{L_{11}}{0}{A_{21}^{\vphantom{*}}(L_{11}^{*})^{-1}}{L_{22}}^* =: LL^*.\]

Voilà, we have obtained a Cholesky factorization of a positive definite matrix A!

By unwinding the recursions (and always choosing the top left block A_{11} to be of size 1\times 1), our recursive Cholesky algorithm becomes the following iterative algorithm: Initialize L to be the zero matrix. For j = 1,2,3,\ldots,N, perform the following steps:

  1. Update L. Set the jth column of L:

        \[L(j:N,j) \leftarrow A(j:N,j)/\sqrt{a_{jj}}.\]

  2. Update A. Update the bottom right portion of A to be the Schur complement:

        \[A(j+1:N,j+1:N)\leftarrow A(j+1:N,j+1:N) - \frac{A(j+1:N,j)A(j,j+1:N)}{a_{jj}}.\]

This iterative algorithm is how Cholesky factorization is typically presented in textbooks.
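
For concreteness, here is a minimal Python/NumPy transcription of this iterative algorithm (my own illustrative sketch; it assumes A is Hermitian positive definite and does no pivoting or error checking).

    import numpy as np

    def cholesky_iterative(A):
        # Returns lower triangular L with A = L @ L.conj().T.
        A = np.array(A, dtype=complex)   # work on a copy; overwritten by Schur complements
        N = A.shape[0]
        L = np.zeros_like(A)
        for j in range(N):
            L[j:, j] = A[j:, j] / np.sqrt(A[j, j].real)                   # update L
            A[j+1:, j+1:] -= np.outer(A[j+1:, j], A[j, j+1:]) / A[j, j]   # Schur complement
        return L

    # Usage sketch
    rng = np.random.default_rng(0)
    B = rng.standard_normal((5, 5)); A = B @ B.T + 5 * np.eye(5)
    L = cholesky_iterative(A)
    print(np.allclose(L @ L.conj().T, A))   # expect True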

Nyström: Using Cholesky Factorization for Low-Rank Approximation

Our motivating interest in studying the Cholesky factorization was the solution of linear systems of equations Ax = b for a positive definite matrix A. We can also use the Cholesky factorization for a very different task, low-rank approximation.

Let’s first look at things through the lens of the recursive form of the Cholesky factorization. The first step of the factorization was to form the block Cholesky factorization

    \[A = \twobytwo{I}{0}{A_{21}A_{11}^{-1}}{I} \twobytwo{A_{11}}{0}{0}{A_{22} - A_{21}A_{11}^{-1}A_{12}} \twobytwo{I}{0}{A_{21}A_{11}^{-1}}{I}^*.\]

Suppose that we choose the top left A_{11} block to be of size k\times k, where k is much smaller than N. The most expensive part of the Cholesky factorization will be the recursive factorization of the Schur complement A_{22} - A_{21}A_{11}^{-1} A_{12}, which is a large matrix of size (N-k)\times (N-k).

To reduce computational cost, we ask the provocative question: What if we simply didn’t factorize the Schur complement? Observe that we can write the block Cholesky factorization as a sum of two terms

(2)   \[A = \twobyone{I}{A_{21}A_{11}^{-1}} A_{11} \twobyone{I}{A_{21}A_{11}^{-1}}^* + \twobytwo{0}{0}{0}{A_{22}-A_{21}A_{11}^{-1}A_{12}}. \]

We can use the first term of this sum as a rank-k approximation to the matrix A. The low-rank approximation, which we can write out more conveniently as

    \[\hat{A} = \twobyone{A_{11}}{A_{21}} A_{11}^{-1} \onebytwo{A_{11}}{A_{12}} = \twobytwo{A_{11}}{A_{12}}{A_{21}}{A_{21}A_{11}^{-1}A_{12}},\]

is the column Nyström approximation (Nys) to A associated with the index set S = \{1,2,3,\ldots,k\} and is the final of our three titular characters. The residual of the Nyström approximation is the second term in (2), which is none other than the Schur complement (Sch), padded by rows and columns of zeros:

    \[A - \hat{A} = \twobytwo{0}{0}{0}{A_{22}-A_{21}A_{11}^{-1}A_{12}}.\]

Observe that the approximation \hat{A} is obtained from the process of terminating a Cholesky factorization midway through algorithm execution, so we say that the Nyström approximation results from a partial Cholesky factorization of the matrix A.

Summing things up:

If we perform a partial Cholesky factorization on a positive (semi)definite matrix, we obtain a low-rank approximation known as the column Nyström approximation. The residual of this approximation is the Schur complement, padded by rows and columns of zeros.

The idea of obtaining a low-rank approximation from a partial matrix factorization is very common in matrix computations. Indeed, the optimal low-rank approximation to a real symmetric matrix is obtained by truncating its eigenvalue decomposition and the optimal low-rank approximation to a general matrix is obtained by truncating its singular value decomposition. While the column Nyström approximation is not the optimal rank-k approximation to A (though it does satisfy a weaker notion of optimality, as discussed in this previous post), it has a killer feature not possessed by the optimal approximation:

The column Nyström approximation is formed from only k columns from the matrix A. A column Nyström approximation approximates an N\times N matrix by only reading a fraction of its entries!

Unfortunately, there’s no free lunch here. The column Nyström approximation is only a good low-rank approximation if the Schur complement has small entries. In general, this need not be the case. Fortunately, we can improve our situation by means of pivoting.

Our iterative Cholesky algorithm first performs elimination using the entry in position (1,1) followed by position (2,2) and so on. There’s no need to insist on this exact ordering of elimination steps. Indeed, at each step of the Cholesky algorithm, we can choose whichever diagonal position (j,j) we want to perform elimination with. The entry we choose to perform elimination with is called a pivot.

Obtaining good column Nyström approximations requires identifying a good choice of pivots to reduce the size of the entries of the Schur complement at each step of the algorithm. With general pivot selection, an iterative algorithm for computing a column Nyström approximation by partial Cholesky factorization proceeds as follows: Initialize an N\times k matrix F to store the column Nyström approximation \hat{A} = FF^*, in factored form. For j = 1,2,\ldots,k, perform the following steps:

  1. Select pivot. Choose a pivot s_j.
  2. Update the approximation. F(:,j) \leftarrow A(:,s_j) / \sqrt{a_{s_js_j}}.
  3. Update the residual. A \leftarrow A - \frac{A(:,s_j)A(s_j,:)}{a_{s_js_j}}.

This procedure results in the Nyström approximation (Nys) with column set S = \{s_1,\ldots,s_k\}:

    \[\hat{A} = FF^* = A(:,S) \, A(S,S)^{-1} \, A(S,:).\]

The pivoted Cholesky steps 1–3 require updating the entire matrix A at every step. With a little more cleverness, we can optimize this procedure to only update the entries of the matrix A we need to form the approximation \hat{A} = FF^*. See Algorithm 2 in this preprint by my coauthors and me for details.

How should we choose the pivots? Two simple strategies immediately suggest themselves:

  • Uniformly random. At each step j, select s_j uniformly at random from among the unselected pivot indices.
  • Greedy. (The greedy pivot selection is sometimes known as diagonal pivoting or complete pivoting in the numerical analysis literature.) At each step j, select s_j to be the index of the largest diagonal entry of the current residual A:

        \[s_j = \argmax_{1\le k\le N} a_{kk}.\]

The greedy strategy often (but not always) works well, and the uniformly random approach can work surprisingly well if the matrix A is “incoherent”, with all rows and columns of the matrix possessing “similar importance”.
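
As a concrete reference point, here is an unoptimized Python/NumPy transcription of steps 1–3 above with the uniform and greedy pivot rules (my own illustrative sketch; unlike the optimized algorithm mentioned earlier, it updates the entire residual at every step).

    import numpy as np

    def partial_cholesky_nystrom(A, k, pivot_rule="greedy", rng=None):
        # Returns F (N x k) with A_hat = F @ F.T and the selected pivot set S.
        rng = np.random.default_rng(rng)
        A = np.array(A, dtype=float)          # residual, overwritten in place
        N = A.shape[0]
        F = np.zeros((N, k))
        S = []
        for j in range(k):
            if pivot_rule == "greedy":
                s = int(np.argmax(np.diag(A)))          # largest residual diagonal entry
            else:                                       # "uniform"
                s = int(rng.choice([i for i in range(N) if i not in S]))
            S.append(s)
            F[:, j] = A[:, s] / np.sqrt(A[s, s])        # update the approximation
            A -= np.outer(A[:, s], A[s, :]) / A[s, s]   # update the residual
        return F, S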

Despite often working fairly well, both the uniform and greedy schemes can fail significantly, producing very low-quality approximations. My research (joint with Yifan Chen, Joel A. Tropp, and Robert J. Webber) has investigated a third strategy striking a balance between these two approaches:

  • Diagonally weighted random. At each step j, select s_j at random with probability weights proportional to the current diagonal of the residual matrix:

        \[\mathbb{P} \{ s_j = k \} = \frac{a_{kk}}{\operatorname{tr} A}.\]

Our paper provides theoretical analysis and empirical evidence showing that this diagonally weighted random pivot selection (which we call randomly pivoted Cholesky, aka RPCholesky) performs well at approximating all positive semidefinite matrices A, even those for which uniform random and greedy pivot selection fail. The success of this approach can be seen in the examples in Figure 1 of the paper, which show that RPCholesky can produce much smaller errors than the greedy and uniform methods.
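
In code, the only change to the sketch above is the pivot-selection line: sample the next pivot with probability proportional to the current residual diagonal. A minimal, unoptimized version of that rule (my own sketch; see Algorithm 2 of the paper for an efficient implementation) is:

    import numpy as np

    def rpcholesky_pivot(A_residual, rng):
        # Sample a pivot index with probability proportional to the residual diagonal.
        d = np.clip(np.diag(A_residual).real, 0.0, None)   # clip tiny negatives from roundoff
        return int(rng.choice(len(d), p=d / d.sum()))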

Conclusions

In this post, we’ve seen that a column Nyström approximation can be obtained from a partial Cholesky decomposition. The residual of the approximation is the Schur complement. I hope you agree that this is a very nice connection between these three ideas. But beyond its mathematical beauty, why do we care about this connection? Here are my reasons for caring:

  • Analysis. Cholesky factorization and the Schur complement are very well-studied in matrix theory. We can use known facts about Cholesky factorization and Schur complements to prove things about the Nyström approximation, as we did when we invoked the positivity of the Schur complement.
  • Algorithms. Cholesky factorization-based algorithms like randomly pivoted Cholesky are effective in practice at producing high-quality column Nyström approximations.

On a broader level, our tale of Nyström, Cholesky, and Schur demonstrates that there are rich connections between low-rank approximation and (partial versions of) classical matrix factorizations for full-rank matrices like LU (with partial pivoting), Cholesky, QR, eigendecomposition, and SVD. These connections can be vital to analyzing low-rank approximation algorithms and developing improvements.

Low-Rank Approximation Toolbox: Nyström Approximation

Welcome to a new series for this blog, Low-Rank Approximation Toolbox. As I discussed in a previous post, many matrices we encounter in applications are well-approximated by a matrix with a small rank. Efficiently computing low-rank approximations has been a major area of research, with applications in everything from classical problems in computational physics and signal processing to trendy topics like data science. In this series, I want to explore some broadly useful algorithms and theoretical techniques in the field of low-rank approximation.

I want to begin this series by talking about one of the fundamental types of low-rank approximation, the Nyström approximation of an N\times N (real symmetric or complex Hermitian) positive semidefinite (psd) matrix A. Given an arbitrary N\times k “test matrix” \Omega, the Nyström approximation is defined to be

(1)   \[A\langle \Omega\rangle := A\Omega \, (\Omega^*A\Omega)^{-1} \, \Omega^*A. \]

This formula is sensible whenever \Omega^*A\Omega is invertible; if \Omega^*A\Omega is not invertible, then the inverse {}^{-1} should be replaced by the Moore–Penrose pseudoinverse {}^\dagger. For simplicity, I will assume that \Omega^* A \Omega is invertible in this post, though everything we discuss will continue to work if this assumption is dropped. I use {}^* to denote the conjugate transpose of a matrix, which agrees with the ordinary transpose {}^\top for real matrices. I will use the word self-adjoint to refer to a matrix which satisfies A=A^*.

The Nyström approximation (1) answers the question

What is the “best” rank-k approximation to the psd matrix A provided only with the matrix–matrix product A\Omega, where \Omega is a known N\times k matrix (k\ll N)?

Indeed, if we let Y = A\Omega, we observe that the Nyström approximation can be written entirely using Y and \Omega:

    \[A\langle \Omega\rangle = Y \, (\Omega^* Y)^{-1}\, Y^*.\]

This is the central advantage of the Nyström approximation: to compute it, the only access to the matrix A I need is the ability to multiply the matrices A and \Omega. In particular, I only need a single pass over the entries of A to compute the Nyström approximation. This allows the Nyström approximation to be used in settings where other low-rank approximations wouldn’t work, such as when A is streamed to me as a sum of matrices that must be processed as they arrive and then discarded.
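
Here is a minimal Python/NumPy sketch of this single-pass computation (my own illustrative code, not the numerically careful implementation one would use in practice): given Y = A\Omega and \Omega, it returns a factor F with A\langle \Omega\rangle = FF^*.

    import numpy as np

    def nystrom(Y, Omega):
        # A<Omega> = Y (Omega^* Y)^{-1} Y^*, returned in factored form F F^*.
        core = Omega.conj().T @ Y                 # Omega^* A Omega  (k x k, psd)
        core = (core + core.conj().T) / 2         # symmetrize against roundoff
        evals, evecs = np.linalg.eigh(core)
        keep = evals > 1e-12 * evals.max()
        F = Y @ (evecs[:, keep] / np.sqrt(evals[keep]))
        return F                                  # A_hat = F @ F.conj().T

    # Usage sketch: a single pass over A via the product Y = A @ Omega
    rng = np.random.default_rng(0)
    N, k = 500, 20
    B = rng.standard_normal((N, N)); A = B @ B.T   # illustrative psd matrix
    Omega = rng.standard_normal((N, k))
    F = nystrom(A @ Omega, Omega)
    print(np.linalg.norm(A - F @ F.T, 'fro') / np.linalg.norm(A, 'fro'))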

Choosing the Test Matrix

Every choice of N\times k test matrix \Omega defines a rank-k Nyström approximation A\langle \Omega\rangle by (1). (Strictly speaking, A\langle \Omega\rangle only has rank at most k; for this post, we will use rank-k to mean “rank at most k”.) Unfortunately, the Nyström approximation won’t be a good low-rank approximation for every choice of \Omega. For an example of what can go wrong, if we pick \Omega to have columns selected from the eigenvectors of A with small eigenvalues, the approximation A\langle \Omega\rangle will be quite poor.

The very best choice of \Omega would be the k eigenvectors associated with the largest eigenvalues. Unfortunately, computing the eigenvectors to high accuracy is computationally costly. Fortunately, we can get decent low-rank approximations out of much simpler \Omega’s:

  1. Random: Perhaps surprisingly, we get a fairly good low-rank approximation out of just choosing \Omega to be a random matrix, say, populated with statistically independent standard normal random entries. Intuitively, a random matrix is likely to have columns with meaningful overlap with the large-eigenvalue eigenvectors of A (and indeed with any k fixed orthonormal vectors). One can also pick more exotic kinds of random matrices, which can have computational benefits.
  2. Random then improve: The more similar the test matrix \Omega is to the large-eigenvalue eigenvectors of A, the better the low-rank approximation will be. Therefore, it makes sense to use the power method (usually called subspace iteration in this context) to improve a random initial test matrix \Omega_{\rm init} to be closer to the eigenvectors of A. (Even better than subspace iteration is block Krylov iteration; see section 11.6 of the following survey for details.)
  3. Column selection: If \Omega consists of columns i_1,i_2,\ldots,i_k of the identity matrix, then A\Omega just consists of columns i_1,\ldots,i_k of A: In MATLAB notation,

        \[A(:,\{i_1,\ldots,i_k\}) = A\Omega \quad \text{for}\quad \Omega = I(:,\{i_1,i_2,\ldots,i_k\}).\]

    This is highly appealing as it allows us to approximate the matrix A by only reading a small fraction of its entries (provided k\ll N)! Producing a good low-rank approximation requires selecting the right column indices i_1,\ldots,i_k (usually under the constraint of reading a small number of entries from A). In my research with Yifan Chen, Joel A. Tropp, and Robert J. Webber, I’ve argued that the most well-rounded algorithm for this task is a randomly pivoted partial Cholesky decomposition.
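
For reference, here is how the three kinds of test matrices might be constructed in a few lines of Python/NumPy (an illustrative sketch; the function names and the number of subspace iteration steps q are my own choices, not prescriptions from any particular paper).

    import numpy as np

    def gaussian_test_matrix(N, k, rng):
        # Option 1: independent standard normal entries
        return rng.standard_normal((N, k))

    def improved_test_matrix(A, k, q, rng):
        # Option 2: a few steps of subspace iteration, orthonormalizing between
        # steps for numerical stability, to push Omega toward the top eigenvectors
        Omega = rng.standard_normal((A.shape[0], k))
        for _ in range(q):
            Omega, _ = np.linalg.qr(A @ Omega)
        return Omega

    def column_selection_test_matrix(N, cols):
        # Option 3: columns of the identity, so A @ Omega just reads columns of A
        return np.eye(N)[:, cols]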

The Projection Formula

Now that we’ve discussed the choice of test matrix, we shall explore the quality of the Nyström approximation as measured by the size of the residual A - A\langle \Omega\rangle. As a first step, we shall show that the residual is psd. This means that A\langle \Omega\rangle is an underapproximation to A.

The positive semidefiniteness of the residual follows from the following projection formula for the Nyström approximation:

    \[A\langle \Omega \rangle = A^{1/2} P_{A^{1/2}\Omega} A^{1/2}.\]

Here, P_{A^{1/2}\Omega} denotes the orthogonal projection onto the column space of the matrix A^{1/2}\Omega. To deduce the projection formula, we break down A as A = A^{1/2}\cdot A^{1/2} in (1):

    \[A\langle \Omega\rangle = A^{1/2} \left( A^{1/2}\Omega \left[ (A^{1/2}\Omega)^* A^{1/2}\Omega \right]^{-1} (A^{1/2}\Omega)^* \right) A^{1/2}.\]

The fact that the parenthesized quantity is P_{A^{1/2}\Omega} can be verified in a variety of ways, such as by QR factorization: let A^{1/2} \Omega = QR, where Q has orthonormal columns and R is square and upper triangular. The orthogonal projection is P_{A^{1/2}\Omega} = QQ^*, and the parenthesized expression is (QR)(R^*Q^*QR)^{-1}R^*Q^* = QRR^{-1}R^{-*}R^*Q^* = QQ^* = P_{A^{1/2}\Omega}.

With the projection formula in hand, we easily obtain the following expression for the residual:

    \[A - A\langle \Omega\rangle = A^{1/2} (I - P_{A^{1/2}\Omega}) A^{1/2}.\]

To show that this residual is psd, we make use of the conjugation rule.

Conjugation rule: For a matrix B and a self-adjoint matrix H, if H is psd then B^*HB is psd. If B is invertible, then the converse holds: if B^*HB is psd, then H is psd.

The matrix I - P_{A^{1/2}\Omega} is an orthogonal projection and therefore psd. Thus, by the conjugation rule, the residual of the Nyström approximation is psd:

    \[A - A\langle \Omega\rangle = \left(A^{1/2}\right)^* (I-P_{A^{1/2}\Omega})A^{1/2} \quad \text{is psd}.\]
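
This positivity claim is also easy to check numerically. The small sketch below (my own illustrative code) builds a Nyström approximation from a Gaussian test matrix and confirms that the smallest eigenvalue of the residual is nonnegative up to roundoff.

    import numpy as np

    rng = np.random.default_rng(3)
    N, k = 200, 10
    B = rng.standard_normal((N, N)); A = B @ B.T          # illustrative psd matrix
    Omega = rng.standard_normal((N, k))
    Y = A @ Omega
    A_hat = Y @ np.linalg.solve(Omega.T @ Y, Y.T)          # A<Omega> = Y (Omega^T Y)^{-1} Y^T
    print(np.linalg.eigvalsh(A - A_hat).min())             # nonnegative up to roundoff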

Optimality of the Nyström Approximation

There’s a question we’ve been putting off that can’t be deferred any longer:

Is the Nyström approximation actually a good low-rank approximation?

As we discussed earlier, the answer to this question depends on the test matrix \Omega. Different choices for \Omega give different approximation errors. See the following papers for Nyström approximation error bounds with different choices of \Omega. While the Nyström approximation can be better or worse depending on the choice of \Omega, what is true about the Nyström approximation for every choice of \Omega is the following:

The Nyström approximation A\langle \Omega\rangle is the best possible rank-k approximation to A given the information A\Omega.

In precise terms, I mean the following:

Theorem: Out of all self-adjoint matrices \hat{A} spanned by the columns of A\Omega with a psd residual A - \hat{A}, the Nyström approximation has the smallest error as measured by either the spectral or Frobenius norm (or indeed any unitarily invariant norm, see below).

Let’s break this statement down a bit. This result states that the Nyström approximation is the best approximation \hat{A} to A under three conditions:

  1. \hat{A} is self-adjoint.
  2. \hat{A} is spanned by the columns of A\Omega.

I find these first two requirements to be natural. Since A is self-adjoint, it makes sense to require our approximation \hat{A} to be as well. The stipulation that \hat{A} is spanned by the columns of A\Omega seems like a very natural requirement given that we want to think of approximations which only use the information A\Omega. Additionally, requirement 2 ensures that \hat{A} has rank at most k, so we are really only considering low-rank approximations to A.

The last requirement is less natural:

  3. The residual A - \hat{A} is psd.

This is not an obvious requirement to impose on our approximation. Indeed, it was a nontrivial calculation using the projection formula to show that the Nyström approximation itself satisfies this requirement! Nevertheless, this third stipulation is required to make the theorem true. The Nyström approximation (1) is the best “underapproximation” to the matrix A in the span of A\Omega.

Intermezzo: Unitarily Invariant Norms and the Psd Order

To prove our theorem about the optimality of the Nyström approximation, we shall need two ideas from matrix theory: unitarily invariant norms and the psd order. We shall briefly describe each in turn.

A norm \left\|\cdot\right\|_{\rm UI} defined on the set of N\times N matrices is said to be unitarily invariant if the norm of a matrix does not change upon left- or right-multiplication by a unitary matrix:

    \[\left\|UBV\right\|_{\rm UI} = \left\|B\right\|_{\rm UI} \quad \text{for all unitary matrices $U$ and $V$.}\]

Recall that a unitary matrix U (called a real orthogonal matrix if U is real-valued) is one that obeys U^*U = UU^* = I. Unitary matrices preserve the Euclidean lengths of vectors, which makes the class of unitarily invariant norms highly natural. Important examples include the spectral, Frobenius, and nuclear matrix norms.

The unitarily invariant norm of a matrix B depends entirely on its singular values \sigma(B). For instance, the spectral, Frobenius, and nuclear norms take the forms

    \begin{align*}\left\|B\right\|_{\rm op} &= \sigma_1(B),& &\text{(spectral)} \\\left\|B\right\|_{\rm F} &= \sqrt{\sum_{j=1}^N (\sigma_j(B))^2},& &\text{(Frobenius)} \\\left\|B\right\|_{*} &=\sum_{j=1}^N \sigma_j(B).& &\text{(nuclear)}\end{align*}

In addition to being entirely determined by the singular values, unitarily invariant norms are non-decreasing functions of the singular values: If the jth singular value of B is larger than the jth singular value of C for 1\le j\le N, then \left\|B\right\|_{\rm UI}\ge \left\|C\right\|_{\rm UI} for every unitarily invariant norm \left\|\cdot\right\|_{\rm UI}. For more on unitarily invariant norms, see this short and information-packed blog post from Nick Higham.
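
As a quick numerical illustration (a sketch in Python with NumPy; the random matrix and the random orthogonal factors are placeholders), the spectral, Frobenius, and nuclear norms can all be computed from the singular values, and multiplying by orthogonal matrices on either side leaves them unchanged:

    import numpy as np

    rng = np.random.default_rng(1)
    B = rng.standard_normal((6, 6))
    U, _ = np.linalg.qr(rng.standard_normal((6, 6)))  # random orthogonal U
    V, _ = np.linalg.qr(rng.standard_normal((6, 6)))  # random orthogonal V

    sigma = np.linalg.svd(B, compute_uv=False)        # singular values of B

    spectral = sigma[0]                               # largest singular value
    frobenius = np.sqrt(np.sum(sigma**2))             # root-sum-of-squares
    nuclear = np.sum(sigma)                           # sum of singular values

    # Unitary invariance: U B V has the same singular values, hence the same norms
    sigma_UBV = np.linalg.svd(U @ B @ V, compute_uv=False)
    print(np.allclose(sigma, sigma_UBV))                              # True
    print(np.isclose(frobenius, np.linalg.norm(U @ B @ V, 'fro')))    # True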

Our second ingredient is the psd order (also known as the Loewner order). A self-adjoint matrix A is larger than a self-adjoint matrix H according to the psd order, written A\succeq H, if the difference A-H is psd. As a consequence, A\succeq 0 if and only if A is psd, where 0 here denotes the zero matrix of the same size as A. Using the psd order, the positive semidefiniteness of the Nyström residual can be written as A - A\langle \Omega\rangle \succeq 0.

If A and H are both psd matrices and A is larger than H in the psd order, A\succeq H\succeq 0, it seems natural to expect that A is larger than H in norm. Indeed, this intuitive statement is true, at least when one restricts oneself to unitarily invariant norms.

Psd order and norms. If A\succeq H\succeq 0, then \left\|A\right\|_{\rm UI} \ge \left\|H\right\|_{\rm UI} for every unitarily invariant norm \left\|\cdot\right\|_{\rm UI}.

This fact is a consequence of the following observations:

  • If A\succeq H, then the eigenvalues of A are larger than those of H in the sense that the jth largest eigenvalue of A is larger than the jth largest eigenvalue of H.
  • The singular values of a psd matrix are its eigenvalues.
  • Unitarily invariant norms are non-decreasing functions of the singular values.

Optimality of the Nyström Approximation: Proof

In this section, we’ll prove our theorem showing the Nyström approximation is the best low-rank approximation satisfying properties 1, 2, and 3. To this end, let \hat{A} be any matrix satisfying properties 1, 2, and 3. Because of properties 1 (self-adjointness) and 2 (spanned by columns of A\Omega), \hat{A} can be written in the form

    \[\hat{A} = A\Omega \, T \, (A\Omega)^* = A \Omega \, T \, \Omega^*A,\]

where T is a self-adjoint matrix. To make this more similar to the projection formula, we can factor A^{1/2} on both sides to obtain

    \[\hat{A} = A^{1/2} (A^{1/2}\Omega\, T\, \Omega^*A^{1/2}) A^{1/2}.\]

To bring this even closer to the projection formula, we can reparametrize by introducing a matrix Q with orthonormal columns whose column space equals that of A^{1/2}\Omega. Under this parametrization, \hat{A} takes the form

    \[\hat{A} = A^{1/2} \,QMQ^*\, A^{1/2} \quad \text{where} \quad M\text{ is self-adjoint}.\]

The residual of this approximation is

(2)   \[A - \hat{A} = A^{1/2} (I - QMQ^*)A^{1/2}. \]

We now make use of the conjugation rule again. To simplify things, we make the assumption that A is invertible. (As an exercise, see if you can adapt this argument to the case when this assumption doesn’t hold!) Since A - \hat{A}\succeq 0 is psd (property 3), the conjugation rule tells us that

    \[I - QMQ^*\succeq 0.\]

What does this observation tell us about M? We can apply the conjugation rule again to conclude

    \[Q^*(I - QMQ^*)Q = Q^*Q - (Q^*Q)M(Q^*Q) = I-M \succeq 0.\]

(Notice that Q^*Q = I since Q has orthonormal columns.)

We are now in a position to show that A - \hat{A}\succeq A - A\langle \Omega\rangle. Indeed,

    \begin{align*}A - \hat{A} - (A-A\langle \Omega\rangle) &= A\langle\Omega\rangle - \hat{A} \\&= A^{1/2}\underbrace{QQ^*}_{=P_{A^{1/2}\Omega}}A^{1/2} - A^{1/2}QMQ^*A^{1/2} \\&=A^{1/2}Q(I-M)Q^*A^{1/2}\\&\succeq 0.\end{align*}

The second line is the projection formula together with the observation that P_{A^{1/2}\Omega} = QQ^* and the last line is the conjugation rule combined with the fact that I-M is psd. Thus, having shown

    \[A - \hat{A} \succeq A - A\langle\Omega\rangle \succeq 0,\]

we conclude

    \[\|A - \hat{A}\|_{\rm UI} \ge \left\|A - A\langle \Omega\rangle\right\|_{\rm UI} \quad \text{for every unitarily invariant norm $\left\|\cdot\right\|_{\rm UI}$}.\]

Note to Self: Hanson–Wright Inequality

This post is part of a new series for this blog, Note to Self, where I collect together some notes about an idea related to my research. This content may be much more technical than most of the content on this blog and of narrower interest. My hope in sharing this is that someone will find this interesting and useful for their own work.


This post is about a fundamental tool of high-dimensional probability, the Hanson–Wright inequality. The Hanson–Wright inequality is a concentration inequality for quadratic forms of random vectors—that is, expressions of the form x^\top A x where x is a random vector. Many statements of this inequality in the literature have an unspecified constant c > 0; our goal in this post will be to derive a fairly general version of the inequality with only explicit constants.

The core object of the Hanson–Wright inequality is a subgaussian random variable. A random variable Y is subgaussian if the probability it exceeds a threshold t in magnitude decays as

(1)   \[\mathbb{P}\{|Y|\ge t\} \le \mathrm{e}^{-t^2/a} \quad \text{for some $a>0$ and for all sufficiently large $t$.} \]

The name subgaussian is appropriate as the tail probabilities of Gaussian random variables exhibit the same square-exponential decrease \mathrm{e}^{-t^2/a}.

A (non-obvious) fact is that if Y is subgaussian in the sense (1) and centered (\mathbb{E} Y = 0), then Y‘s cumulant generating function (cgf)

    \[\xi_Y(t) := \log \mathbb{E} \exp(tY)\]

is subquadratic: There is a constant c > 0 (independent of Y and a) for which

(2)   \[\xi_Y(t) \le ca t^2 \quad \text{for all $t\in\mathbb{R}$}. \]

Moreover,1See Proposition 2.5.2 of Vershynin’s High-Dimensional Probability. a subquadratic cgf (2) also implies the subgaussian tail property (1), with a different parameter a > 0.

Since properties (1) and (2) are equivalent (up to a change in the parameter a), we are free to fix a version of property (2) as our definition for a (centered) subgaussian random variable.

Definition (subgaussian random variable): A centered random variable X is said to be v-subgaussian or subgaussian with variance proxy v if its cgf is subquadratic:

(3)   \[\xi_{X}(t) \le\frac{1}{2} vt^2 \quad \text{for all $t\in\mathbb{R}$.} \]

For instance, a mean-zero Gaussian random variable X with variance \sigma^2 has cgf

(4)   \[ \xi_X(t) = \frac{1}{2} \sigma^2 t^2,  \]

and is thus subgaussian with variance proxy v = \sigma^2 equal to its variance.
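
As another concrete example, a random \pm 1 sign (a Rademacher random variable) has cgf \log\cosh(t), which lies below \tfrac{1}{2}t^2, so it is subgaussian with variance proxy v = 1. Here is a quick numerical sanity check of this inequality (a sketch in Python with NumPy; the grid of t values is arbitrary):

    import numpy as np

    # The cgf of a Rademacher (+/-1) random variable is log cosh(t);
    # check numerically that it sits below t^2 / 2, i.e., v = 1 in definition (3)
    t = np.linspace(-10, 10, 2001)
    cgf = np.log(np.cosh(t))
    print(np.all(cgf <= 0.5 * t**2))  # True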

Here is a statement of the Hanson–Wright inequality as it typically appears with unspecified constants (see Theorem 6.2.1 of Vershynin’s High-Dimensional Probability):

Theorem (Hanson–Wright): Let x be a random vector with independent centered v-subgaussian entries and let A be a square matrix. Then

    \[\mathbb{P}\left\{\left|x^\top Ax - \mathbb{E} \left[x^\top A x\right]\right|\ge t \right\} \le 2\exp\left(- \frac{c\cdot t^2}{v^2\left\|A\right\|_{\rm F}^2 + v\left\|A\right\|t} \right),\]

where c>0 is a constant (not depending on v, x, t, or A).2Here, \|\cdot\|_{\rm F} and \|\cdot\| denote the Frobenius and spectral norms.

This is exactly the same type of concentration as provided by Bernstein’s inequality (which I discussed in my post on concentration inequalities). In particular, for small deviations t, the tail probabilities are subgaussian with variance proxy \approx v^2\left\|A\right\|_{\rm F}^2:

    \[\mathbb{P}\left\{\left|x^\top Ax - \mathbb{E}\left[x^\top Ax\right]\right|\ge t \right\} \stackrel{\text{small $t$}}{\lessapprox} 2\exp\left(- \frac{c\cdot t^2}{v^2\left\|A\right\|_{\rm F}^2} \right)\]

For large deviations t, this switches to subexponential tail probabilities with decay rate \approx v\|A\|:

    \[\mathbb{P}\left\{\left|x^\top Ax - \mathbb{E}\left[x^\top Ax\right]\right|\ge t \right\} \stackrel{\text{large $t$}}{\lessapprox} 2\exp\left(- \frac{c\cdot t}{v\|A\|} \right).\]

Mediating these two parameter regimes are the size of the matrix A, as measured by its Frobenius and spectral norms, and the degree of subgaussianity of x, measured by the variance proxy v.

Diagonal-Free Hanson–Wright

Now we come to a first version of the Hanson–Wright inequality with explicit constants, first for a matrix which is diagonal-free—that is, having all zeros on the diagonal. I obtained this version of the inequality myself, though I am very sure that this version of the inequality or an improvement thereof appears somewhere in the literature.

Theorem (Hanson–Wright, explicit constants, diagonal-free): Let x be a random vector with independent centered v-subgaussian entries and let A be a diagonal-free square matrix. Then we have the cgf bound

    \[\xi_{x^\top Ax}(t) \le \frac{16v^2\left\|A\right\|_{\rm F}^2\, t^2}{2(1-4v\left\|A\right\|t)}.\]

As a consequence, we have the concentration bound

    \[\mathbb{P} \{ x^\top A x \ge t \} \le \exp\left( -\frac{t^2/2}{16v^2 \left\|A\right\|_{\rm F}^2+4v\left\|A\right\|t} \right).\]

Similarly, we have the lower tail

    \[\mathbb{P} \{ x^\top A x \le -t \} \le \exp\left( -\frac{t^2/2}{16v^2 \left\|A\right\|_{\rm F}^2+4v\left\|A\right\|t} \right)\]

and the two-sided bound

    \[\mathbb{P} \{ |x^\top A x| \ge t \} \le 2\exp\left( -\frac{t^2/2}{16v^2 \left\|A\right\|_{\rm F}^2+4v\left\|A\right\|t} \right).\]

Let us begin proving this result. Our proof will follow the same steps as Vershynin’s proof in High-Dimensional Probability (which in turn is adapted from an article by Rudelson and Vershynin), but taking care to get explicit constants. Unfortunately, proving all of the relevant tools from first principles would easily triple the length of this post, so I make frequent use of results from the literature.

We begin with the decoupling bound (Theorem 6.1.1 in Vershynin’s High-Dimensional Probability), which allows us to replace one x with an independent copy \tilde{x} at the cost of a factor of four:

(5)   \[\xi_{x^\top Ax}(t) \le \xi_{\tilde{x}^\top Ax}(4t). \]

We seek to compare the bilinear form \tilde{x}^\top Ax to the Gaussian bilinear form \tilde{g}^\top Ag where \tilde{g} and g are independent standard Gaussian vectors. We begin with the following cgf bound for the Gaussian quadratic form g^\top Ag:

    \[\xi_{g^\top Ag}(t) \le \frac{\left\|A\right\|_{\rm F}^2 \, t^2}{1-2\|A\|\, t}.\]

This equation is the result of Example 2.12 in Boucheron, Lugosi, and Massart’s Concentration Inequalities. By applying this result to the Hermitian dilation of A in A‘s place, one obtains a similar result for the decoupled bilinear form \tilde{g}^\top Ag:

(6)   \[\xi_{\tilde{g}^\top Ag}(t) \le \frac{\left\|A\right\|_{\rm F}^2 \, t^2}{2(1-\|A\|\, t)}. \]

We now seek to compare \xi_{\tilde{x}^\top Ax}(t) to \xi_{\tilde{g}^\top Ag}(t). To do this, we first evaluate the cgf of \tilde{x}^\top Ax only over the randomness in \tilde{x}. Since we’re only taking an expectation over the random variable \tilde{x}, we can apply the subquadratic cgf condition (3) to obtain

(7)   \[\log \mathbb{E}_{\tilde{x}} \exp(t \, \tilde{x}^\top Ax) = \sum_{i=1}^n \log \mathbb{E}_{\tilde{x}} \exp(t \,\tilde{x}_i (Ax)_i) \le  \frac{1}{2} v \left(\sum_{i=1}^n (Ax)_i^2\right)t^2 \le \frac{1}{2} v\left\|Ax\right\|^2 \, t^2. \]

Now we perform a similar computation for the quantity \tilde{g}^\top Ax in which \tilde{x} has been replaced by the Gaussian vector \tilde{g}:

    \[\log \mathbb{E}_{\tilde{g}} \exp((\sqrt{v} t) \, \tilde{g}^\top Ax) = \frac{1}{2} v \left\|Ax\right\|^2 \, t^2.\]

We stress that this is an equality since the cgf of a Gaussian random variable is given by (4). Thus we can substitute the left-hand side of the above display into the right-hand side of (7), yielding

(8)   \[\log \mathbb{E}_{\tilde{x}} \exp(t \, \tilde{x}^\top Ax) \le \log \mathbb{E}_{\tilde{g}} \exp((\sqrt{v} t) \, \tilde{g}^\top Ax). \]

We now perform this same trick again using the randomness in x:

(9)   \[\log \mathbb{E}_{\tilde{g},x} \exp((\sqrt{v} t) \, \tilde{g}^\top Ax) \le \log \mathbb{E}_{\tilde{g}} \exp \left(\frac{1}{2} v^2 \left\|A^\top \tilde{g}\right\|^2t^2\right) = \log \mathbb{E}_{\tilde{g},g} \exp(v t \, \tilde{g}^\top Ag). \]

Packaging up (8) and (9) gives

(10)   \[\xi_{\tilde{x}^\top Ax}(t)\le \xi_{\tilde{g}^\top Ag}(vt). \]

Combining all these results (5), (6), and (10), we obtain

    \[\xi_{x^\top Ax}(t) \le \xi_{\tilde{x}^\top Ax}(4t) \le \xi_{\tilde{g}^\top Ag}(4vt) \le \frac{16v^2\left\|A\right\|_{\rm F}^2\, t^2}{2(1-4v\left\|A\right\|t)}.\]

This cgf implies the desired probability bound on the upper tail as a consequence of the following fact (see Boucheron, Lugosi, and Massart’s Concentration Inequalities page 29 and Exercise 2.8):

Fact (Bernstein concentration from Bernstein cgf bound): Suppose that a random variable X satisfies the cgf bound \xi_X(t) \le \tfrac{vt^2}{2(1-ct)} for 0 < t < 1/c. Then

    \[\mathbb{P} \left\{ X\ge t \right\} \le \exp\left( -\frac{t^2/2}{v+ct} \right).\]
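
For the curious reader, here is a sketch of the standard Chernoff-type argument behind this fact. For any 0 < \lambda < 1/c, Markov’s inequality applied to \exp(\lambda X) together with the cgf bound gives

    \[\mathbb{P}\{X \ge t\} \le \exp\left(-\lambda t + \xi_X(\lambda)\right) \le \exp\left(-\lambda t + \frac{v\lambda^2}{2(1-c\lambda)}\right).\]

Choosing \lambda = t/(v+ct), for which 1-c\lambda = v/(v+ct), the exponent becomes -t^2/(v+ct) + t^2/(2(v+ct)) = -t^2/(2(v+ct)), which is the stated bound.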

To get the bound on the lower tail, apply the result for the upper tail to the matrix -A to obtain

    \[\mathbb{P} \{ x^\top A x \le -t \} = \mathbb{P} \{ x^\top (-A) x \ge t \} \le \exp\left( -\frac{t^2/2}{16v^2 \left\|A\right\|_{\rm F}^2+4v\left\|A\right\|t} \right).\]

Finally, to obtain the two-sided bound, use a union bound over the upper and lower tails:

    \[\mathbb{P} \{ |x^\top A x| \ge t \} \le \mathbb{P} \{ x^\top A x \ge t \} + \mathbb{P} \{ x^\top A x \le -t \} \le 2\exp\left( -\frac{t^2/2}{16v^2 \left\|A\right\|_{\rm F}^2+4v\left\|A\right\|t} \right).\]
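
As a quick sanity check on the theorem (not a substitute for the proof), one can compare the two-sided bound against an empirical tail probability for a random diagonal-free matrix and Rademacher entries, which are 1-subgaussian. A minimal sketch in Python with NumPy follows; the matrix size, number of trials, and threshold are arbitrary, and because the explicit constants are conservative the bound is far from tight in this experiment.

    import numpy as np

    rng = np.random.default_rng(2)
    n, trials, v = 30, 100_000, 1.0     # Rademacher entries are 1-subgaussian

    A = rng.standard_normal((n, n))
    np.fill_diagonal(A, 0.0)            # diagonal-free matrix

    normF = np.linalg.norm(A, 'fro')
    norm2 = np.linalg.norm(A, 2)

    x = rng.choice([-1.0, 1.0], size=(trials, n))
    quad = np.einsum('ti,ij,tj->t', x, A, x)       # x^T A x for each trial

    t = 3 * normF
    empirical = np.mean(np.abs(quad) >= t)
    bound = 2 * np.exp(-(t**2 / 2) / (16 * v**2 * normF**2 + 4 * v * norm2 * t))
    print(empirical, bound)             # the empirical tail probability sits below the bound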

General Hanson–Wright

Now, here’s a more general result (with worse constants) which permits the matrix A to possess a diagonal.

Theorem (Hanson–Wright, explicit constants): Let x be a random vector with independent centered v-subgaussian entries and let A be an arbitrary square matrix. Then we have the cgf bound

    \[\xi_{x^\top Ax-\mathbb{E} [x^\top A x]}(t) \le \frac{40v^2\left\|A\right\|_{\rm F}^2\, t^2}{2(1-8v\left\|A\right\|t)}.\]

As a consequence, we have the concentration bound

    \[\mathbb{P} \{ x^\top A x-\mathbb{E} [x^\top A x] \ge t \} \le \exp\left( -\frac{t^2/2}{40v^2 \left\|A\right\|_{\rm F}^2+8v\left\|A\right\|t} \right).\]

Left tail and two-sided bounds versions of this bound also hold:

    \[\mathbb{P} \{ x^\top A x-\mathbb{E} [x^\top A x] \le -t \} \le \exp\left( -\frac{t^2/2}{40v^2 \left\|A\right\|_{\rm F}^2+8v\left\|A\right\|t} \right)\]

and

    \[\mathbb{P} \{ |x^\top A x-\mathbb{E} [x^\top A x]| \ge t \} \le 2\exp\left( -\frac{t^2/2}{40v^2 \left\|A\right\|_{\rm F}^2+8v\left\|A\right\|t} \right).\]

Decompose the matrix A = D+F into its diagonal and off-diagonal portions. For any two random variables X and Y (possibly highly dependent), we can bound the cgf of their sum using the following “union bound”:

(11)   \begin{align*} \xi_{X+Y}(t) &= \log \mathbb{E} \left[\exp(tX)\exp(tY)\right] \\&\le \log \left(\left[\mathbb{E} \exp(2tX)\right]^{1/2}\left[\mathbb{E}\exp(2tY)\right]^{1/2}\right) \\&=\frac{1}{2} \xi_X(2t) + \frac{1}{2}\xi_Y(2t). \end{align*}

The two equality statements are the definition of the cumulant generating function and the inequality is Cauchy–Schwarz.

Using the “union bound”, it is sufficient to obtain bounds for the cgfs of the diagonal and off-diagonal parts x^\top D x - \mathbb{E}[x^\top Ax] and x^\top F x. We begin with the diagonal part. We compute

(12)   \begin{align*}\xi_{x^\top D x - \mathbb{E}[x^\top Ax]}(t) &= \log \mathbb{E} \exp\left(t \sum_{i=1}^n A_{ii}(x_i^2 - \mathbb{E}[x_i^2]) \right) \\ &= \sum_{i=1}^n  \log \mathbb{E} \exp\left((t A_{ii})\cdot(x_i^2 - \mathbb{E}[x_i^2]) \right). \end{align*}

For the cgf of x_i^2 - \mathbb{E}[x_i^2], we use the following bound, taken from Appendix B of the following paper:

    \[\log \mathbb{E} \exp\left(t(x_i^2 - \mathbb{E}[x_i^2]) \right) \le \frac{8v^2t^2}{1-2v|t|}.\]

Substituting this result into (12) gives

(13)   \[\xi_{x^\top D x - \mathbb{E}[x^\top Ax]}(t) \le \sum_{i=1}^n \frac{8v^2|A_{ii}|^2t^2}{1-2v|A_{ii}|t} \le \frac{8v^2\|A\|_{\rm F}^2t^2}{1-2v\|A\|t}\quad \text{for $t>0$}. \]

For the second inequality, we used the facts that \max_i |A_{ii}| \le \|A\| and \sum_i |A_{ii}|^2 \le \|A\|_{\rm F}^2.

We now look at the off-diagonal part x^\top F x. We use a version of the decoupling bound (5) where we compare x^\top F x to \tilde{x}^\top A x, where we’ve both replaced one copy of x with an independent copy and reinstated the diagonal of A (see Remark 6.1.3 in Vershynin’s High-Dimensional Probability):

    \[\xi_{x^\top F x}(t) \le \xi_{\tilde{x}^\top Ax}(4t).\]

We can now just repeat the rest of the argument for the diagonal-free Hanson–Wright inequality, yielding the same conclusion

(14)   \[ \xi_{x^\top Fx}(t) \le \frac{16v^2\left\|A\right\|_{\rm F}^2\, t^2}{2(1-4v\left\|A\right\|t)}.  \]

Combining (11), (13), and (14), we obtain

    \begin{align*}\xi_{x^\top Ax-\mathbb{E} [x^\top A x]}(t) &\le \frac{1}{2} \xi_{x^\top D x - \mathbb{E}[x^\top Ax]}(2t) + \frac{1}{2} \xi_{x^\top Fx}(2t) \\&\le \frac{8v^2\|A\|_{\rm F}^2t^2}{2(1-4v\|A\|t)} + \frac{32v^2\left\|A\right\|_{\rm F}^2\, t^2}{2(1-8v\left\|A\right\|t)} \\&\le \frac{40v^2\left\|A\right\|_{\rm F}^2\, t^2}{2(1-8v\left\|A\right\|t)}.\end{align*}

As above, this cgf bound implies the desired probability bound.

Chebyshev Polynomials

This post is co-written by my brother, Aidan Epperly, for the second Summer of Math Exposition (SoME2).


Let’s start with a classical problem: connect-the-dots. As we know from geometry, any two points in the plane are connected by one and only one straight line:

But what if we have more than two points? How should we connect them? One natural way is with a parabola. Any three points (with distinct x coordinates) are connected by one and only one parabola ax^2+bx+c:

And we can keep extending this. Any n+1 points1The degree of the polynomial is one less than the number of points because a degree-n polynomial is described by n+1 coefficients. For instance, a degree-two parabola ax^2+bx+c has three coefficients a, b, and c. (with distinct x coordinates) are connected by a unique degree-n polynomial a_n x^n + a_{n-1}x^{n-1} + \cdots + a_1 x+a_0:

This game of connect-the-dots with polynomials is known more formally as polynomial interpolation. We can use polynomial interpolation to approximate functions. For instance, we can approximate the function \sin(x) on the interval [-\pi,\pi] to visually near-perfect accuracy by connecting the dots between seven points (-\pi,\sin(-\pi)), (-\tfrac{2}{3}\pi,\sin(-\tfrac{2}{3}\pi)),\ldots ,(\pi,\sin(\pi)):

But something very peculiar happens when we try and apply this trick to the specially chosen function R(x) = 1/(1+25x^2) on the interval [-1,1]:

Unlike \sin(x), the polynomial interpolant for R(x) is terrible! What’s going on? Why doesn’t polynomial interpolation work here? Can we fix it? The answer to the last question is yes and the solution is Chebyshev polynomials.

Reverse-Engineering Chebyshev

The failure of polynomial interpolation for R(x) is known as Runge’s phenomenon after Carl Runge who discovered this curious behavior in 1901. The function R(x) is called the Runge function. Our goal is to find a fix for polynomial interpolation which crushes the Runge phenomenon, allowing us to reliably approximate every sensible2A famous theorem of Faber states that there does not exist any set of points through which the polynomial interpolants converge for every continuous function. This is not as much of a problem as it may seem. As the famous Weierstrass function shows, arbitrary continuous functions can be very weird. If we restrict ourselves to nicer functions, such as Lipschitz continuous functions, there does exist a set of points through which the polynomial interpolant always converges to the underlying function. Thus, in this sense, it is possible to crush the Runge phenomenon. function with polynomial interpolation.

Carl Runge

Let’s put on our thinking caps and see if we can discover the fix for ourselves. In order to discover a fix, we must first identify the problem. Observe that the polynomial interpolant is fine near the center of the interval; it only fails near the boundary.

This leads us to a guess for what the problem might be; maybe we need more interpolation points near the boundaries of the interval. Indeed, tipping our hand a little bit, this turns out to be the case. For instance, connecting the dots for the following set of “mystery points” clustered at the endpoints works just fine:

Let’s experiment a little and see if we can discover a nice set of interpolation points, which we will call x_0,x_1,\ldots,x_n, like this for ourselves. We’ll assume the interpolation points are given by a function x_j = g(j/n) so we can form the polynomial interpolant for any desired polynomial degree n.3Technically, we should insist on the function g(\cdot) being injective so that the points g(0),g(1/n),\ldots,g(1) are guaranteed to be distinct. For instance, if we pick g(t) = 2t^2-1, the points look like this:

Equally spaced points j/n (shown on vertical axis) give rise to non-equally spaced points g(j/n) (shown on horizontal axis)

How should we pick the function g(\cdot)? First observe that, even for the Runge function, equally spaced interpolation points are fine near the center of the interval. We thus have at least two conditions for our desired n+1 interpolation points:

  1. The interior points should maintain their spacing of roughly 2/n.
  2. The points must cluster near both boundaries.

As a first attempt let’s divide the interval into thirds and halve the spacing of points except in the middle third. This leads to the function

    \begin{equation*}g(x) = \begin{cases} -1+x, & 0\le x < \tfrac{1}{3}, \\-\tfrac{2}{3} + 4\left( x-\tfrac{1}{3}\right), & \tfrac{1}{3} \le x < \tfrac{2}{3}, \\\tfrac{2}{3} + \left( x-\tfrac{2}{3}\right), & \tfrac{2}{3} \le x \le 1.\end{cases}\end{equation*}

These interpolation points initially seem promising, even successfully approximating the Runge function itself.

Unfortunately, this set of points fails when we consider other functions. For instance, if we use the Runge-like function S(x) = 1/(1+900x^2), we see that these interpolation points now lead to a failure to approximate the function at the middle of the interval, even if we use a lot of interpolation points!

Maybe the reason this set of interpolation points didn’t work is that the points are too close at the endpoints. Or maybe we should have divided the interval as quarter–half–quarter rather than thirds. There are lots of variations of this strategy for choosing points to explore and all of them eventually lead to failure on some Runge-flavored example. We need a fundamentally different strategy than making the points a times closer within distance b of the endpoints.

Let’s try a different approach. The closeness of the points at the endpoints is determined by the slope of the function g at 0 and 1. The smaller that |g'(0)| and |g'(1)| are, the more clustered the points will be. For instance,

    \begin{equation*}g'(0) = g'(1) = 2 \quad \text{for equally spaced points}.\end{equation*}

When we halved the distance between points, we instead had

    \begin{equation*}g'(0) = g'(1) = 1 \quad \text{when points at ends were twice as close together}.\end{equation*}

So if we want the points to be much more clustered together, it is natural to require

    \begin{equation*}g'(0) = g'(1) = 0. \quad \text{(new requirement)}\end{equation*}

It also makes sense for the function g(\cdot) to cluster points equally near both endpoints, since we see no reason to preference one end over the other. Collecting together all the properties we want the function g(\cdot) to have, we get the following list:

  1. g spans the whole range [-1,1],
  2. g'(0) = g'(1) = 0, and
  3. g is symmetric about 1/2, g(1/2+x) = -g(1/2-x).

Mentally scrolling through our Rolodex of friendly functions, a natural one that might come to mind meeting these three criteria is the cosine function, specifically g(t) = \cos(\pi t). This function yields points which are more clustered at the endpoints:

The points

    \begin{equation*}x_j = \cos\left(\frac{j\pi}{n}\right)\end{equation*}

we guessed our way into are known as the Chebyshev points.4Some authors refer to these as the “Chebyshev points of the second kind” or use other names. We follow the convention in Approximation Theory and Approximation Practice (Chapter 1) and refer to these points simply as the Chebyshev points. The Chebyshev points prove themselves perfectly fine for the Runge function:

As we saw earlier, success on the Runge function alone is not enough to declare victory for the polynomial interpolation problem. However, in this case, there are no other bad examples left to find. For any nice function with no jumps, polynomial interpolation through the Chebyshev points works excellently.5Specifically, for a function f(\cdot) which is not too rough (i.e., Lipschitz continuous), the degree-n polynomial interpolant of f(\cdot) through the Chebyshev points converges uniformly to f(\cdot) as n\to\infty.
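
To see the difference for yourself, here is a minimal numerical sketch (in Python with NumPy; the degrees and the fine evaluation grid are arbitrary choices) comparing polynomial interpolation of the Runge function through equally spaced points and through the Chebyshev points. The interpolants are computed in the Chebyshev basis for numerical stability, anticipating the discussion below.

    import numpy as np

    runge = lambda x: 1 / (1 + 25 * x**2)
    xx = np.linspace(-1, 1, 2001)                    # fine grid for measuring the error

    for n in [10, 20, 40]:
        equi = np.linspace(-1, 1, n + 1)             # equally spaced points
        cheb = np.cos(np.arange(n + 1) * np.pi / n)  # Chebyshev points x_j = cos(j pi / n)
        for name, pts in [("equispaced", equi), ("Chebyshev ", cheb)]:
            # A degree-n fit through n+1 distinct points is interpolation
            p = np.polynomial.chebyshev.Chebyshev.fit(pts, runge(pts), n)
            err = np.max(np.abs(p(xx) - runge(xx)))
            print(f"n = {n:2d}, {name}: max error = {err:.1e}")
    # The equispaced error grows with n (Runge phenomenon); the Chebyshev error shrinks.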

Why the Chebyshev Points?

We’ve guessed our way into a solution to the polynomial interpolation problem, but we still really don’t know what’s going on here. Why are the Chebyshev points much better at polynomial interpolation than equally spaced ones?

Now that we know that the Chebyshev points are a right answer to the interpolation problem,6Indeed, there are other sets of interpolation points through which polynomial interpolation also works well, such as the Legendre points. let’s try and reverse engineer a principled reason for why we would expect them to be effective for this problem. To do this, we ask:

What is special about the cosine function?

From high school trigonometry, we know that \cos \theta gives the x coordinate of a point \theta radians along the unit circle. This means that the Chebyshev points are the x coordinates of equally spaced points on the unit circle (specifically the top half of the unit circle 0\le \theta\le \pi).

Chebyshev points are the x coordinates of equally spaced points on the unit circle.

This raises the question:

What does the interpolating polynomial p(x) look like as a function of the angle \theta?

To convert between x and \theta we simply plug in x = \cos \theta to p(x):

    \begin{equation*}p^\circ(\theta) = p(\cos \theta) = a_n \cos^n \theta + a_{n-1} \cos^{n-1} \theta + \cdots + a_0.\end{equation*}

This new function depending on \theta, which we can call p^\circ(\theta), is a polynomial in the variable \cos \theta. Powers of cosines are not something we encounter every day, so it makes sense to try and simplify things using some trig identities. Here are the first couple powers of cosines:

    \begin{gather*}\cos^2 \theta = \frac{1}{2} + \frac{1}{2} \cos (2\theta), \\\cos^3 \theta = \frac{3}{4}\cos \theta + \frac{1}{4} \cos (3\theta), \\\cos^4 \theta = \frac{3}{8}+ \frac{1}{2} \cos (2\theta) + \frac{1}{8} \cos (4\theta),\\\vdots\end{gather*}

A pattern has appeared! The powers \cos^k \theta always take the form7As a fun exercise, you might want to try and prove this using mathematical induction.

    \begin{equation*}\cos^k \theta = \textnormal{(some number)} \cdot \cos(k\theta) + \textnormal{(some number)} \cdot \cos((k-2)\theta) + \cdots .\end{equation*}

The significance of this finding is that, by plugging in each of these formulas for \cos^k \theta, we see that our polynomial p(x) in the variable x has morphed into a Fourier cosine series in the variable \theta:

    \begin{equation*}p^\circ(\theta) = b_n \cos(n\theta) + b_{n-1} \cos((n-1)\theta) + \cdots + b_1 \cos \theta + b_0.\end{equation*}

For anyone unfamiliar with Fourier series, we highly encourage the 3Blue1Brown video on the subject, which explains why Fourier series are both mathematically beautiful and practically useful. The basic idea is that almost any function can be expressed as a combination of waves (that is, sines and cosines) with different frequencies.8More precisely, we might call these angular frequencies. In our case, this formula tells us that p^\circ(\theta) is equal to b_0 units of frequency 0, plus b_1 units of frequency 1, all the way up to b_n units of frequency n. Different types of Fourier series are appropriate in different contexts. Since our Fourier series only possesses cosines, we call it a Fourier cosine series.

We’ve discovered something incredibly cool:

Polynomial interpolation through the Chebyshev points is equivalent to finding a Fourier cosine series for equally spaced angles \theta.

We’ve arrived at an answer to why the Chebyshev points work well for polynomial interpolation.

Polynomial interpolation through the Chebyshev points is effective because Fourier cosine series interpolation through equally spaced angles \theta is effective.

Of course, this explanation just raises the further question: Why do Fourier cosine series give effective interpolants through equally spaced angles \theta? This question has a natural answer as well, involving the convergence theory and aliasing formula (see Section 3 of this paper) for Fourier series. We’ll leave the details to the interested reader for investigation. The success of Fourier cosine series in interpolating equally spaced data is a fundamental observation that underlies the field of digital signal processing. Interpolation through the Chebyshev points effectively hijacks this useful fact and applies it to the seemingly unrelated problem of polynomial interpolation.

Another question this explanation raises is the precise meaning of “effective”. Just how good are polynomial interpolants through the Chebyshev points at approximating functions? As is discussed at length in another post on this blog, the degree to which a function can be effectively approximated is tied to how smooth or rough it is. Chebyshev interpolants approximate nice analytic functions like \sin(x) or 1/(1+25x^2) with exponentially small errors in the number of interpolation points used. By contrast, functions with kinks like |x| are approximated with errors which decay much more slowly. See theorems 2 and 3 on this webpage for more details.

Chebyshev Polynomials

We’ve now discovered a set of points, the Chebyshev points, through which polynomial interpolation works well. But how should we actually compute the interpolating polynomial

    \begin{equation*}p(x) = a_n x^n + a_{n-1}x^{n-1} + \cdots + a_0?\end{equation*}

Again, it will be helpful to draw on the connection to Fourier series. Computations with Fourier series are highly accurate and can be made lightning fast using the fast Fourier transform algorithm. By comparison, directly computing with a polynomial p(x) through its coefficients a_n,a_{n-1},\ldots,a_0 is a computational nightmare.

In the variable \theta, the interpolant takes the form

    \begin{equation*}p^\circ(\theta) = b_n \cos(n\theta) + b_{n-1} \cos((n-1)\theta) + \cdots + b_1 \cos \theta + b_0.\end{equation*}

To convert back to x = \cos \theta, we use the inverse function9One always has to be careful when going from x = \cos \theta to \theta = \arccos x since multiple \theta values get mapped to a single x value by the cosine function. Fortunately, we’re working with variables 0\le \theta\le \pi and -1\le x\le 1, between which the cosine function is one-to-one with the inverse function being given by the arccosine. \theta = \arccos x to obtain:

    \begin{equation*}p(x) = b_n \cos(n\arccos(x)) + \cdots + b_1 \cos(\arccos x) + b_0\end{equation*}

This is a striking formula. Given all of the trigonometric functions, it’s not even obvious that p(x) is a polynomial (it is)!

Despite its seeming peculiarity, this is a very powerful way of representing the polynomial p(x). Rather than expressing p(x) using monomials 1,x,x^2,\ldots, we’ve instead written p(x) as a combination of more exotic polynomials

    \begin{equation*}T_k(x) = \cos(k \arccos x) \quad \text{for $k=0,1,2,\ldots n$}.\end{equation*}

The polynomials T_0(x),T_1(x),T_2(x),\ldots are known as the Chebyshev polynomials,10More precisely, the polynomials T_k(x) are known as the Chebyshev polynomials of the first kind. named after Pafnuty Chebyshev who studied the polynomials intensely.11The letter “T” is used for Chebyshev polynomials since the Russian name “Chebyshev” is often alternately transliterated to English as “Tchebychev”.

Pafnuty Chebyshev

Writing out the first few Chebyshev polynomials shows they are indeed polynomials:

    \begin{gather*}T_0(x) = 1, \\T_1(x) = x, \\T_2(x) = 2x^2 - 1, \\ T_3(x) = 4x^3 - 3x, \\\vdots \end{gather*}

The first four Chebyshev polynomials

To confirm that this pattern does continue, we can use trig identities to derive12Specifically, the recurrence is a consequence of applying the sum-to-product identity to \cos((k+1)\theta) + \cos((k-1)\theta) for \theta = \arccos x. the following recurrence relation for the Chebyshev polynomials:

    \begin{equation*}T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x).\end{equation*}

Since T_0(x) = 1 and T_1(x) = x are both polynomials, every Chebyshev polynomial is as well.
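
Here is a small numerical check (a sketch in Python with NumPy; the helper name chebyshev_T and the test grid are our own) that the polynomials generated by this recurrence agree with the trigonometric formula T_k(x) = \cos(k \arccos x):

    import numpy as np

    def chebyshev_T(k, x):
        """Evaluate T_k(x) using the recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)."""
        T_prev, T_curr = np.ones_like(x), x
        if k == 0:
            return T_prev
        for _ in range(k - 1):
            T_prev, T_curr = T_curr, 2 * x * T_curr - T_prev
        return T_curr

    x = np.linspace(-1, 1, 1001)
    for k in range(8):
        assert np.allclose(chebyshev_T(k, x), np.cos(k * np.arccos(x)))
    print("recurrence matches cos(k arccos x) for k = 0, ..., 7")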

We’ve arrived at the following amazing conclusion:

Under the change of variables x = \cos \theta, the Fourier cosine series

    \[p^\circ(\theta) = b_n\cos(n\theta) + \cdots + b_1\cos\theta + b_0\]

becomes the combination of Chebyshev polynomials

    \[p(x) = b_nT_n(x) + \cdots + b_1 T_1(x) + b_0.\]

This simple and powerful observation allows us to apply the incredible speed and accuracy of Fourier series to polynomial interpolation.

Beyond being a neat idea with some nice mathematics, this connection between Fourier series and Chebyshev polynomials is a powerful tool for solving computational problems. Once we’ve accurately approximated a function by a polynomial interpolant, many quantities of interest (derivatives, integrals, zeros) become easy to compute—after all, we just have to compute them for a polynomial! We can also use Chebyshev polynomials to solve differential equations with much faster rates of convergence than other methods. Because of the connection to Fourier series, all of these computations can be done to high accuracy and blazingly fast via the fast Fourier transform, as is done in the software package Chebfun.
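
To make this concrete, here is a minimal sketch (in Python with NumPy) of converting samples at the Chebyshev points into the coefficients b_0,\ldots,b_n of the interpolant p(x) = b_nT_n(x) + \cdots + b_1T_1(x) + b_0 using a single FFT, modeled on the standard values-to-coefficients transform used by packages like Chebfun. The helper name cheb_interp_coeffs is ours, and the Runge function and degree are just for illustration.

    import numpy as np

    def cheb_interp_coeffs(f, n):
        """Chebyshev coefficients b_0, ..., b_n of the degree-n interpolant of f
        through the Chebyshev points x_j = cos(j pi / n), computed with one FFT."""
        x = np.cos(np.arange(n + 1) * np.pi / n)
        v = f(x)
        V = np.concatenate([v, v[n - 1:0:-1]])       # even (mirror) extension, length 2n
        a = np.real(np.fft.fft(V)) / n
        b = a[:n + 1].copy()
        b[0] /= 2
        b[n] /= 2
        return b

    runge = lambda x: 1 / (1 + 25 * x**2)
    n = 40
    b = cheb_interp_coeffs(runge, n)
    p = np.polynomial.chebyshev.Chebyshev(b)         # p(x) = sum_k b_k T_k(x)
    x = np.cos(np.arange(n + 1) * np.pi / n)
    print(np.max(np.abs(p(x) - runge(x))))           # reproduces f at the points, up to roundoff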

The Chebyshev polynomials have an array of amazing properties and they appear all over mathematics and its applications in other fields. Indeed, we have only scratched the surface of the surface. Many questions remain:

  • What is the connection between the Chebyshev points and the Chebyshev polynomials?
  • The cosine functions 1,\cos \theta,\cos(2\theta),\ldots are orthogonal to each other; are the Chebyshev polynomials orthogonal too?
  • Are the Chebyshev points the best points for polynomial interpolation? What does “best” even mean in this context?
  • Every “nice” even periodic function has an infinite Fourier cosine series which converges to it. Is there a Chebyshev analog? Is there a relation between the infinite Chebyshev series and the (finite) interpolating polynomial through the Chebyshev points?

All of these questions have beautiful and fairly simple answers. The book Approximation Theory and Approximation Practice is a wonderfully written book that answers all of these questions in its first six chapters, which are freely available on the author’s website. We recommend the book highly to the curious reader.

TL;DR: To get an accurate polynomial approximation, interpolate through the Chebyshev points.
To compute the resulting polynomial, change variables to \theta = \arccos x, compute the Fourier cosine series interpolant, and obtain your polynomial interpolant as a combination of Chebyshev polynomials.

Big Ideas in Applied Math: Concentration Inequalities

This post is about randomized algorithms for problems in computational science and a powerful set of tools, known as concentration inequalities, which can be used to analyze why they work. I’ve discussed why randomization can help in solving computational problems in a previous post; this post continues this discussion by presenting an example of a computational problem where, somewhat surprisingly, a randomized algorithm proves effective. We shall then use concentration inequalities to analyze why this method works.

Triangle Counting

Let’s begin our discussion of concentration inequalities by means of an extended example. Consider the following question: How many triangles are there in the Facebook network? That is, how many trios of people are there who are all mutual friends? While seemingly silly at first sight, this is actually a natural and meaningful question about the structure of the Facebook social network and is related to similar questions such as “How likely are two friends of a person to also be friends with each other?”

If there are n people on the Facebook graph, then the natural algorithm of iterating over all {n \choose 3} \approx n^3/6 triplets and checking whether they form a triangle is far too computationally costly for the billions of Facebook accounts. Somehow, we want to go much faster than this, and to achieve this speed we would be willing to settle for an estimate of the triangle count up to some error.

There are many approaches to this problem, but let’s describe a particularly surprising algorithm. Let A be an n\times n matrix where the ijth entry of A is 1 if users i and j are friends and 0 otherwise1All of the diagonal entries of A are set to zero.; this matrix is called the adjacency matrix of the Facebook graph. A fact from graph theory is that the ijth entry of the cube A^3 of the matrix A counts the number of paths from user i to user j of length three.2By a path of length three, we mean a sequence of users i,k,\ell,j where i and k, k and \ell, and \ell and j are all friends. In particular, the iith entry of A^3 denotes the number of paths from i to itself of length 3, which is twice the number of triangles incident on i. (The paths i\to j \to k \to i and i\to k \to j\to i are both counted as paths of length 3 for a triangle consisting of i, j, and k.) Therefore, the trace of A^3, equal to the sum of its diagonal entries, is six times the number of triangles: The iith entry of A^3 is twice the number of triangles incident on i and each triangle (i,j,k) is counted thrice in the iith, jjth, and kkth entries of A^3. In summary, we have

    \begin{equation*} \mbox{\# triangles} = \frac{1}{6} \operatorname{tr} A^3. \end{equation*}

Therefore, the triangle counting problem is equivalent to computing the trace of A^3. Unfortunately, the problem of computing A^3 is, in general, very computationally costly. Therefore, we seek ways of estimating the trace of a matrix without forming it.
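
As a small-scale sanity check of this formula (a sketch in Python with NumPy; the graph size and edge probability are arbitrary), one can compare \operatorname{tr}(A^3)/6 against brute-force enumeration of triangles on a random graph:

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(3)
    n = 60

    # Random undirected graph: symmetric 0/1 adjacency matrix with zero diagonal
    upper = np.triu(rng.random((n, n)) < 0.1, k=1).astype(float)
    A = upper + upper.T

    count_trace = round(np.trace(A @ A @ A) / 6)     # trace formula
    count_brute = sum(A[i, j] * A[j, k] * A[i, k] > 0
                      for i, j, k in combinations(range(n), 3))
    print(count_trace, count_brute)                  # the two counts agree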

Randomized Trace Estimation

Motivated by the triangle counting problem from the previous section, we consider the problem of estimating the trace of a matrix M. We assume that we only have access to the matrix M through matrix–vector products; that is, we can efficiently compute Mx for a vector x. For instance, in the previous example, the Facebook graph has many fewer friend relations (edges) m than the maximum possible amount of {n\choose 2} \approx n^2/2. Therefore, the matrix A is sparse; in particular, matrix–vector multiplications with A can be computed in around m operations. To compute matrix–vector products Mx with M = A^3, we simply compute matrix–vector products with A three times, x \mapsto Ax \mapsto A(Ax) \mapsto A(A(Ax)) = A^3x.

Here’s a very nifty idea to estimate the trace of M using only matrix–vector products, originally due to Didier A. Girard and Michael F. Hutchinson. Choose x to be a random vector whose entries are independent \pm 1-values, where each value +1 and -1 occurs with equal 1/2 probability. Then consider the expression x^\top M x = \sum_{i,j=1}^n M_{ij} x_i x_j. Since the entries x_i and x_j are independent for i\ne j, the expectation of x_ix_j is 0 for i\ne j and 1 for i = j. Consequently, by linearity of expectation, the expected value of x^\top M x is

    \begin{equation*} \mathbb{E} \, x^\top M x = \sum_{i,j=1}^n M_{ij} \mathbb{E} [x_ix_j] = \sum_{i = 1}^n M_{ii} = \operatorname{tr}(M). \end{equation*}

The average value of x^\top M x is equal to the trace of M! In the language of statistics, we might say that x^\top M x is an unbiased estimator for \operatorname{tr}(M). Thus, the efficiently computable quantity x^\top M x can serve as a (crude) estimate for \operatorname{tr}(M).

While the expectation of x^\top Mx equals \operatorname{tr}(M), any random realization of x^\top M x can deviate from \operatorname{tr}(M) by a non-negligible amount. Thus, to reduce the variability of the estimator x^\top M x, it is appropriate to take an average of multiple copies of this random estimate. Specifically, we draw k random vectors with independent random \pm 1 entries x_1,\ldots,x_k and compute the averaged trace estimator

(1)   \begin{equation*} T_k := \frac{1}{k} \sum_{j=1}^k x_j^\top M x_j^{\vphantom{\top}}. \end{equation*}

The k-sample trace estimator T_k remains an unbiased estimator for \operatorname{tr}(M), \mathbb{E}\, T_k = \operatorname{tr}(M), but with reduced variability. Quantitatively, the variance of T_k is k times smaller than the single-sample estimator x^\top M x:

(2)   \begin{equation*} \operatorname{Var}(T_k) = \frac{1}{k} \operatorname{Var}(x^\top M x). \end{equation*}

The Girard–Hutchinson trace estimator gives a natural way of estimating the trace of the matrix M, a task which might otherwise be hard without randomness.3To illustrate what randomness is buying us here, it might be instructive to think about how one might try to estimate the trace of M via matrix–vector products without the help of randomness. For the trace estimator to be a useful tool, an important question remains: How many samples k are needed to compute \operatorname{tr}(M) to a given accuracy? Concentration inequalities answer questions of this nature.
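
Here is a minimal sketch (in Python with NumPy; the helper name girard_hutchinson, the random graph, and the number of samples are our choices for illustration) of the averaged estimator (1) applied to the triangle counting problem, accessing M = A^3 only through matrix–vector products with A:

    import numpy as np

    def girard_hutchinson(matvec, n, k, rng):
        """k-sample Girard-Hutchinson trace estimate (1) with random +/-1 test vectors."""
        samples = []
        for _ in range(k):
            x = rng.choice([-1.0, 1.0], size=n)
            samples.append(x @ matvec(x))
        return np.mean(samples)

    rng = np.random.default_rng(4)
    n = 300
    upper = np.triu(rng.random((n, n)) < 0.05, k=1).astype(float)
    A = upper + upper.T                              # adjacency matrix of a random graph

    matvec = lambda x: A @ (A @ (A @ x))             # x -> A^3 x via three products with A
    estimate = girard_hutchinson(matvec, n, k=100, rng=rng) / 6
    exact = np.trace(np.linalg.matrix_power(A, 3)) / 6
    print(estimate, exact)                           # the estimate lands near the true count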

Concentration Inequalities

A concentration inequality provides a bound on the probability a random quantity is significantly larger or smaller than its typical value. Concentration inequalities are useful because they allow us to prove statements like “With at least 99% probability, the randomized trace estimator with 100 samples produces an approximation of the trace which is accurate up to error no larger than 0.001.” In other words, concentration inequalities can provide quantitative estimates of the likely size of the error when a randomized algorithm is executed.

In this section, we shall introduce a handful of useful concentration inequalities, which we will apply to the randomized trace estimator in the next section. We’ll then discuss how these and other concentration inequalities can be derived in the following section.

Markov’s Inequality

Markov’s inequality is the most fundamental concentration inequality. When used directly, it is a blunt instrument, requiring little insight to use and producing a crude but sometimes useful estimate. However, as we shall see later, all of the sophisticated concentration inequalities that will follow in this post can be derived from a careful use of Markov’s inequality.

The wide utility of Markov’s inequality is a consequence of the minimal assumptions needed for its use. Let X be any nonnegative random variable. Markov’s inequality states that the probability that X exceeds a level t > 0 is bounded by the expected value of X over t. In equations, we have

(3)   \begin{equation*} \mathbb{P} \left\{ X \ge t \right\} \le \frac{\mathbb{E} \, X}{t}. \end{equation*}

We stress the fact that we make no assumptions on how the random quantity X is generated other than that X is nonnegative.

As a short example of Markov’s inequality, suppose we have a randomized algorithm which takes one second on average to run. Markov’s inequality then shows that the probability the algorithm takes more than 100 seconds to run is at most 1/100 = 1\%. This small example shows both the power and the limitation of Markov’s inequality. On the negative side, our analysis suggests that we might have to wait as much as 100 times the average runtime for the algorithm to complete running with 99% probability; this huge multiple of 100 seems quite pessimistic. On the other hand, we needed no information whatsoever about how the algorithm works to do this analysis. In general, Markov’s inequality cannot be improved without more assumptions on the random variable X.4For instance, imagine an algorithm which 99% of the time completes instantly and 1% of the time takes 100 seconds. This algorithm does have an average runtime of 1 second, but the conclusion of Markov’s inequality that the runtime of the algorithm can be as much as 100 times the average runtime with 1% probability is true.

Chebyshev’s Inequality and Averages

The variance of a random variable describes the expected size of a random variable’s deviation from its expected value. As such, we would expect that the variance should provide a bound on the probability a random variable is far from its expectation. This intuition indeed is correct and is manifested by Chebyshev’s inequality. Let X be a random variable (with finite expected value) and t > 0. Chebyshev’s inequality states that the probability that X deviates from its expected value by more than t is at most \operatorname{Var}(X)/t^2:

(4)   \begin{equation*} \mathbb{P} \left\{ \left| X - \mathbb{E} \, X \right| \ge t  \right\} \le \frac{\operatorname{Var}(X)}{t^2}. \end{equation*}

Chebyshev’s inequality is frequently applied to sums or averages of independent random quantities. Suppose X_1,\ldots,X_n are independent and identically distributed random variables with mean \mu and variance \sigma^2 and let \overline{X} denote the average

    \begin{equation*} \overline{X} = \frac{X_1 + \cdots + X_n}{n}. \end{equation*}

Since the random variables X_1,\ldots,X_n are independent,5In fact, this calculation works if X_1,\ldots,X_n are only pairwise independent or even pairwise uncorrelated. For algorithmic applications, this means that X_1,\ldots,X_n don’t have to be fully independent of each other; we just need any pair of them to be uncorrelated. This allows many randomized algorithms to be “derandomized“, reducing the amount of “true” randomness needed to execute an algorithm. the properties of variance entail that

    \begin{equation*} \operatorname{Var}(\overline{X}) = \operatorname{Var}\left( \frac{1}{n} X_1 + \cdots + \frac{1}{n} X_n \right) = \frac{1}{n^2} \operatorname{Var}(X_1) + \cdots + \frac{1}{n^2} \operatorname{Var}(X_n) = \frac{\sigma^2}{n}, \end{equation*}

where we use the fact that \operatorname{Var}(X_1) = \cdots = \operatorname{Var}(X_n) = \sigma^2. Therefore, by Chebyshev’s inequality,

(5)   \begin{equation*} \mathbb{P} \left\{ \left| \overline{X} - \mu \right| \ge t \right\} \le \frac{\sigma^2}{nt^2}. \end{equation*}

Suppose we want to estimate the mean \mu by \overline{X} up to error \epsilon and are willing to tolerate a failure probability of \delta. Then setting the right-hand side of (5) to \delta, Chebyshev’s inequality suggests that we need at most

(6)   \begin{equation*} n = \sigma^2\cdot \frac{1/\delta}{\epsilon^2} \end{equation*}

samples to achieve this goal.

Exponential Concentration: Hoeffding and Bernstein

How happy should we be with the result (6) of applying Chebyshev’s inequality to the average \overline{X}? The central limit theorem suggests that \overline{X} should be approximately normally distributed with mean \mu and variance \sigma^2/n. Normal random variables have an exponentially small probability of being more than a few standard deviations above their mean, so it is natural to expect this should be true of \overline{X} as well. Specifically, we expect a bound roughly like

(7)   \begin{equation*} \mathbb{P} \left\{ \left| \overline{X} - \mu \right| \ge t \right\} \stackrel{?}{\lessapprox} \exp\left(-\frac{nt^2}{2\sigma^2}\right). \end{equation*}

Unfortunately, we don’t have a general result quite this nice without additional assumptions, but there are a diverse array of exponential concentration inequalities available which are quite useful in analyzing sums (or averages) of independent random variables that appear in applications.

Hoeffding’s inequality is one such bound. Let X_1,\ldots,X_n be independent (but not necessarily identically distributed) random variables and consider the average \overline{X} = (X_1 + \cdots + X_n)/n. Hoeffding’s inequality makes the assumption that the summands are bounded, say within an interval [a,b].6There are also more general versions of Hoeffding’s inequality where the bound on each random variable is different. Hoeffding’s inequality then states that

(8)   \begin{equation*} \mathbb{P}\left\{ \left|\overline{X} - \mathbb{E} \, \overline{X}\right| \ge t \right\} \le 2 \exp\left( -\frac{2nt^2}{(b-a)^2} \right). \end{equation*}

Hoeffding’s inequality is quite similar to the ideal concentration result (7) except with the variance \sigma^2 = n\operatorname{Var}(\overline{X}) replaced by the potentially much larger quantity7Note that \sigma^2 is always smaller than or equal to (b-a)^2/4. (b-a)^2/4.

Bernstein’s inequality fixes this deficit in Hoeffding’s inequality at a small cost. Now, instead of assuming X_1,\ldots,X_n are bounded within the interval [a,b], we make the alternate boundedness assumption |X_i - \mathbb{E}\, X_i| \le B for every 1\le i\le n. We continue to denote \sigma^2 = n\operatorname{Var}(\overline{X}) so that if X_1,\ldots,X_n are identically distributed, \sigma^2 denotes the variance of each of X_1,\ldots,X_n. Bernstein’s inequality states that

(9)   \begin{equation*} \mathbb{P}\left\{ \left|\overline{X} - \mathbb{E} \, \overline{X}\right| \ge t \right\} \le 2 \exp\left( -\frac{nt^2/2}{\sigma^2 + Bt/3} \right). \end{equation*}

For small values of t, Bernstein’s inequality yields exactly the kind of concentration that we would hope for from our central limit theorem heuristic (7). However, for large values of t, we have

    \begin{equation*} \mathbb{P}\left\{ \left|\overline{X} - \mathbb{E} \, \overline{X}\right| \ge t \right\} \stackrel{\mbox{large $t$}}{\lessapprox} 2 \exp\left( -\frac{3nt}{2B} \right), \end{equation*}

which is exponentially small in t rather than t^2. We conclude that Bernstein’s inequality provides sharper bounds than Hoeffding’s inequality for smaller values of t but weaker bounds for larger values of t.

Chebyshev vs. Hoeffding vs. Bernstein

Let’s return to the situation where we seek to estimate the mean \mu of independent and identically distributed random variables X_1,\ldots,X_n each with variance \sigma^2 by using the averaged value \overline{X} = (X_1 + \cdots + X_n)/n. Our goal is to bound how many samples n we need to estimate \mu up to error \epsilon, | \overline{X} - \mu | \le \epsilon, except with failure probability at most \delta. Using Chebyshev’s inequality, we showed that (see (6))

    \begin{equation*} n \ge \sigma^2\cdot \frac{1/\delta}{\epsilon^2} \mbox{ suffices}. \end{equation*}

Now, let’s try using Hoeffding’s inequality. Suppose that X_1,\ldots,X_n are bounded in the interval [a,b]. Then Hoeffding’s inequality (8) shows that

    \begin{equation*} n \ge \frac{(b-a)^2}{4}\cdot \frac{2\log(2/\delta)}{\epsilon^2} \mbox{ suffices}. \end{equation*}

Bernstein’s inequality states that if X_i lies in the interval [\mu-B,\mu+B] for every 1\le i \le n, then

(10)   \begin{equation*} n \ge \sigma^2 \cdot \frac{2\log(2/\delta)}{\epsilon^2} + B\cdot \frac{2/3\cdot\log(2/\delta)}{\epsilon} \mbox{ suffices}. \end{equation*}

Hoeffding’s and Bernstein’s inequalities show that we need a number of samples n roughly proportional to \tfrac{\log(1/\delta)}{\epsilon^2} rather than proportional to \tfrac{1/\delta}{\epsilon^2}. The fact that we need proportional to 1/\epsilon^2 samples to achieve error \epsilon is a consequence of the central limit theorem and is something we would not be able to improve with any concentration inequality. What exponential concentration inequalities allow us to do is to improve the dependence on the failure probability from proportional to 1/\delta to \log(1/\delta), which is a huge improvement.

Hoeffding’s and Bernstein’s inequalities both have a small drawback. For Hoeffding’s inequality, the constant of proportionality is (b-a)^2/4 rather than the true variance \sigma^2 of the summands. Bernstein’s inequality gives us the “correct” constant of proportionality \sigma^2 but adds a second term proportional to \tfrac{\log(1/\delta)}{\epsilon}; for small values of \epsilon, this term is dominated by the term proportional to \tfrac{\log(1/\delta)}{\epsilon^2} but the second term can be relevant for larger values of \epsilon.
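
To get a feel for these sample counts, here is a tiny numerical comparison (a sketch in Python; the values of \epsilon, \delta, and the variance and boundedness parameters are purely illustrative) of the prescriptions from Chebyshev’s, Hoeffding’s, and Bernstein’s inequalities:

    import numpy as np

    # Estimate a mean to error eps with failure probability delta (illustrative parameters)
    eps, delta = 0.01, 1e-6
    sigma2 = 1.0                  # variance of each sample
    a, b = -10.0, 10.0            # boundedness interval for Hoeffding
    B = 10.0                      # |X_i - mu| <= B for Bernstein

    n_chebyshev = sigma2 * (1 / delta) / eps**2
    n_hoeffding = (b - a)**2 / 4 * 2 * np.log(2 / delta) / eps**2
    n_bernstein = sigma2 * 2 * np.log(2 / delta) / eps**2 + B * (2 / 3) * np.log(2 / delta) / eps
    print(f"Chebyshev: {n_chebyshev:.2e}, Hoeffding: {n_hoeffding:.2e}, Bernstein: {n_bernstein:.2e}")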

There is a panoply of additional concentration inequalities beyond the few we’ve mentioned. We give a selected overview in the following optional section.

Other Concentration Inequalities
There are a handful more exponential concentration inequalities for sums of independent random variables such as Chernoff’s inequality (very useful for sums of bounded, positive random variables) and Bennett’s inequality. There are also generalizations of Hoeffding’s, Chernoff’s, and Bernstein’s inequalities for unbounded random variables with subgaussian and subexponential tail decay; these results are documented in Chapter 2 of Roman Vershynin’s excellent book High-Dimensional Probability.

One can also generalize concentration inequalities to so-called martingale sequences, which can be very useful for analyzing adaptive algorithms. These inequalities can often have the advantage of bounding the probability that a martingale sequence ever deviates by some amount from its initial value; these results are called maximal inequalities. Maximal analogs of Markov’s and Chebyshev’s inequalities are given by Ville’s inequality and Doob’s inequality. Exponential concentration inequalities include the Hoeffding–Azuma inequality and Freedman’s inequality.

Finally, we note that there are many concentration inequalities for functions of independent random variables other than sums, usually under the assumption that the function is Lipschitz continuous. There are exponential concentration inequalities for functions with “bounded differences”, functions of Gaussian random variables, and convex functions of bounded random variables. References for these results include Chapters 3 and 4 of the lecture notes Probability in High Dimension by Ramon van Handel and the comprehensive monograph Concentration Inequalities by Stéphane Boucheron, Gábor Lugosi, and Pascal Massart.

Analysis of Randomized Trace Estimator

Let us apply some of the concentration inequalities we introduced in last section to analyze the randomized trace estimator. Our goal is not to provide the best possible analysis of the trace estimator,8More precise estimation for trace estimation applied to positive semidefinite matrices was developed by Gratton and Titley-Peloquin; see Theorem 4.5 of the following survey. but to demonstrate how the general concentration inequalities we’ve developed can be useful “out of the box” in analyzing algorithms.

In order to apply Chebyshev’s and Bernstein’s inequalities, we shall need to compute or bound the variance of the single-sample trace estimator x^\top Mx, where x is a random vector of independent \pm 1-values. This is a straightforward task using properties of the variance:

    \begin{equation*} \operatorname{Var}(x^\top M x) = \operatorname{Var}\left( 2\sum_{i< j} M_{ij} x_i x_j \right) = 4\sum_{i<j, \: k<\ell} M_{ij} M_{k\ell} \operatorname{Cov}(x_ix_j,x_kx_\ell) = 4\sum_{i < j} M_{ij}^2 \le 2 \|M\|_{\rm F}^2. \end{equation*}

Here, \operatorname{Cov} is the covariance and \|\cdot\|_{\rm F} is the matrix Frobenius norm. Chebyshev’s inequality (5) then gives

    \begin{equation*} \mathbb{P} \left\{ \left|T_k - \operatorname{tr}(M)\right| \ge t \right\} \le \frac{2\|M\|_{\rm F}^2}{kt^2}. \end{equation*}
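
To make this concrete, here is a minimal numpy sketch of mine (not code from the original post) of the estimator T_k with independent \pm 1 test vectors; it compares the empirical variance of a single sample x^\top M x against the bound 2\|M\|_{\rm F}^2. The test matrix and sample sizes are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
M = rng.standard_normal((n, n))
M = (M + M.T) / 2                      # symmetric test matrix

def trace_estimate(M, k, rng):
    """Randomized trace estimator T_k with k independent +/-1 test vectors."""
    X = rng.choice([-1.0, 1.0], size=(M.shape[0], k))
    return np.mean(np.sum(X * (M @ X), axis=0))   # average of x_i^T M x_i

# Empirical variance of a single sample x^T M x vs. the bound 2 ||M||_F^2
samples = np.array([trace_estimate(M, 1, rng) for _ in range(10_000)])
print("empirical variance :", samples.var())
print("bound 2*||M||_F^2  :", 2 * np.linalg.norm(M, "fro") ** 2)
print("estimate (k = 100) :", trace_estimate(M, 100, rng), " true trace:", np.trace(M))
```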

Let’s now try applying an exponential concentration inequality. We shall use Bernstein’s inequality, for which we need to bound |x^\top M x - \operatorname{tr}(M)|. By the Courant–Fischer minimax principle, we know that x^\top M x is between \lambda_{\rm min}(M) \cdot \|x\|^2 and \lambda_{\rm max} (M)\cdot\|x\|^2 where \lambda_{\rm min}(M) and \lambda_{\rm max}(M) are the smallest and largest eigenvalues of M and \|x\| is the Euclidean norm of the vector x. Since all the entries of x have absolute value 1, we have \|x\| = \sqrt{n} so x^\top M x is between n\lambda_{\rm min}(M) and n\lambda_{\rm max}(M). Since the trace equals the sum of the eigenvalues of M, \operatorname{tr}(M) is also between n\lambda_{\rm min}(M) and n\lambda_{\rm max}(M). Therefore,

    \begin{equation*} \left| x^\top M x - \operatorname{tr}(M) \right| \le n \left( \lambda_{\rm max}(M) - \lambda_{\rm min}(M)\right) \le 2 n \|M\|, \end{equation*}

where \|\cdot\| denotes the matrix spectral norm. Therefore, by Bernstein’s inequality (9), we have

(10)   \begin{equation*} \mathbb{P} \left\{ \left| T_k - \operatorname{tr}(M) \right| \ge t \right\} \le 2\exp\left( -\frac{kt^2}{4\|M\|_{\rm F}^2 + 4/3\cdot tn\|M\|} \right). \end{equation*}

In particular, (10) shows that

    \begin{equation*} k \ge \left( \frac{4\|M\|_{\rm F}^2}{\epsilon^2} + \frac{4n\|M\|}{3\epsilon} \right) \log \left(\frac{2}{\delta} \right) \end{equation*}

samples suffice to estimate \operatorname{tr}(M) to error \epsilon with failure probability at most \delta. Concentration inequalities easily furnish estimates for the number of samples needed for the randomized trace estimator.
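
As an illustration of how one might use this bound in practice, the short sketch below (my own, with an arbitrary positive semidefinite test matrix) simply evaluates the sample-count formula for given \epsilon and \delta.

```python
import numpy as np

def samples_sufficient(M, eps, delta):
    """Evaluate the Bernstein-based sufficient sample count for the trace estimator."""
    n = M.shape[0]
    fro, spec = np.linalg.norm(M, "fro"), np.linalg.norm(M, 2)
    return int(np.ceil((4 * fro**2 / eps**2 + 4 * n * spec / (3 * eps)) * np.log(2 / delta)))

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 100))
M = A @ A.T                                     # positive semidefinite test matrix
print(samples_sufficient(M, eps=0.1 * np.trace(M), delta=0.01))
```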

We have now accomplished our main goal of using concentration inequalities to analyze the randomized trace estimator, which in turn can be used to solve the triangle counting problem. We leave some additional comments on trace estimation and triangle counting in the following bonus section.

More on Trace Estimation and Triangle Counting
To really complete the analysis of the trace estimator in an application (e.g., triangle counting), we would need to obtain bounds on \|M\|_{\rm F} and \|M\|. Since we often don’t know good bounds for \|M\|_{\rm F} and \|M \|, one should really use the trace estimator together with an a posteriori error estimate, which provides a confidence interval for the trace rather than a point estimate; see sections 4.5 and 4.6 in this survey for details.

One can improve on the Girard–Hutchinson trace estimator by using a variance reduction technique. One such variance reduction technique was recently proposed under the name Hutch++, extending ideas by Arjun Singh Gambhir, Andreas Stathopoulos, and Kostas Orginos and Lin Lin. In effect, these techniques reduce the number of samples k needed to estimate the trace of a positive semidefinite matrix A to relative error \epsilon from proportional to 1/\epsilon^2 down to proportional to 1/\epsilon.

Several algorithms have been proposed for triangle counting, many of them randomized. This survey gives a comparison of different methods for the triangle counting problem, and also describes more motivation and applications for the problem.

Deriving Concentration Inequalities

Having introduced concentration inequalities and applied them to the randomized trace estimator, we now turn to the question of how to derive concentration inequalities. Learning how to derive concentration inequalities is more than a matter of mathematical completeness since one can often obtain better results by “hand-crafting” a concentration inequality for a particular application rather than applying a known concentration inequality. (Though standard concentration inequalities like Hoeffding’s and Bernstein’s often give perfectly adequate answers with much less work.)

Markov’s Inequality

At the most fundamental level, concentration inequalities require us to bound a probability by an expectation. In achieving this goal, we shall make a simple observation: The probability that X is larger than or equal to t is the expectation of a random variable \mathbf{1}_{[t,\infty)}(X).9More generally, the probability of an event can be written as an expectation of the indicator random variable of that event. Here, \mathbf{1}_{[t,\infty)}(\cdot) is an indicator function which outputs one if its input is larger than or equal to t and zero otherwise.

As promised, the probability that X is larger than or equal to t is the expectation of \mathbf{1}_{[t,\infty)}(X):

(11)   \begin{equation*} \mathbb{P}\{X \ge t \} = \mathbb{E}[\mathbf{1}_{[t,\infty)}(X)]. \end{equation*}

We can now obtain bounds on the probability that X\ge t by bounding its corresponding indicator function. In particular, we have the inequality

(12)   \begin{equation*} \mathbf{1}_{[t,\infty)}(x) \le \frac{x}{t} \mbox{ for every } x\ge 0. \end{equation*}

Since X is nonnegative, combining equations (11) and (12) gives Markov’s inequality:

    \begin{equation*} \mathbb{P}\{ X \ge t \} = \mathbb{E}[\mathbf{1}_{[t,\infty)}(X)] \le \mathbb{E} \left[ \frac{X}{t} \right] = \frac{\mathbb{E} X}{t}. \end{equation*}
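
If you would like to see Markov’s inequality in action, here is a quick Monte Carlo check of mine for a nonnegative random variable, an exponential with mean one:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.exponential(scale=1.0, size=1_000_000)   # nonnegative samples, E[X] = 1

for t in [2.0, 5.0, 10.0]:
    empirical = np.mean(X >= t)                  # Monte Carlo estimate of P{X >= t}
    markov = X.mean() / t                        # Markov's bound E[X] / t
    print(f"t = {t}: empirical = {empirical:.5f}, Markov bound = {markov:.5f}")
```

The bound is valid but far from tight here; the exact tail of the exponential distribution is \mathrm{e}^{-t}.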

Chebyshev’s Inequality

Before we get to Chebyshev’s inequality proper, let’s think about how we can push Markov’s inequality further. Suppose we find a bound on the indicator function \mathbf{1}_{[t,\infty)}(\cdot) of the form

(13)   \begin{equation*} \mathbf{1}_{[t,\infty)}(x) \le f(x) \mbox{ for all } x\ge 0. \end{equation*}

A bound of this form immediately leads to bounds on \mathbb{P} \{X \ge t\} by (11). To obtain sharp and useful bounds on \mathbb{P}\{X\ge t\}, we seek bounding functions f(\cdot) in (13) with three properties:

  1. For x\in[0, t), f(x) should be close to zero,
  2. For x\in [t,\infty), f(x) should be close to one, and
  3. We need \mathbb{E} \, f(X) to be easily computable or boundable.

These three objectives are in tension with each other. To meet criterion 3, we must restrict our attention to pedestrian functions f(\cdot) such as powers f(x) = (x/t)^\theta or exponentials f(x) = \exp(\theta (x-t)) for which we have hopes of computing or bounding \mathbb{E} \, f(X) for random variables X we encounter in practical applications. But these candidate functions f(\cdot) have the undesirable property that making the function smaller on [0, t) (by increasing \theta) to meet point 1 makes the function larger on (t,\infty), detracting from our ability to achieve point 2. We shall eventually resolve this dilemma in a best-possible way by formulating an optimization problem over the parameter \theta > 0 to obtain the best candidate function of the given form.

Before we get ahead of ourselves, let us use a specific choice for f(\cdot) different from the one we used to prove Markov’s inequality. We readily verify that f(x) = (x/t)^2 satisfies the bound (13), and thus by (11),

(14)   \begin{equation*} \mathbb{P} \{ X \ge t \} \le \mathbb{E} \left( \frac{X}{t} \right)^2 = \frac{\mathbb{E} X^2}{t^2}. \end{equation*}

This inequality holds for any nonnegative random variable X. In particular, now consider a random variable X which we do not assume to be nonnegative. Then X‘s deviation from its expectation, |X-\mathbb{E} X|, is a nonnegative random variable. Thus applying (14) gives

    \begin{equation*}\mathbb{P} \{ |X - \mathbb{E} X| \ge t \} \le \frac{\mathbb{E} | X - \mathbb{E} X|^2}{t^2} = \frac{\operatorname{Var}(X)}{t^2}. \end{equation*}

We have derived Chebyshev’s inequality! Alternatively, one can derive Chebyshev’s inequality by noting that |X-\mathbb{E} X| \ge t if, and only if, |X-\mathbb{E} X|^2 \ge t^2. Therefore, by Markov’s inequality,

    \begin{equation*} \mathbb{P} \{ |X - \mathbb{E} X| \ge t \} = \mathbb{P} \{ |X - \mathbb{E} X|^2 \ge t^2 \} \le \frac{\mathbb{E} | X - \mathbb{E} X|^2}{t^2} = \frac{\operatorname{Var}(X)}{t^2}. \end{equation*}

The Laplace Transform Method

We shall now realize the plan outlined earlier where we shall choose an optimal bounding function f(\cdot) from the family of exponential functions f(x) = \exp(\theta(x-t)), where \theta > 0 is a parameter which we shall optimize over. This method shall allow us to derive exponential concentration inequalities like Hoeffding’s and Bernstein’s. Note that the exponential function f(x) = \exp(\theta(x-t)) bounds the indicator function \mathbf{1}_{[t,\infty)}(\cdot) for all real numbers x, so we shall no longer require the random variable X to be nonnegative. Therefore, by (11),

(15)   \begin{equation*} \mathbb{P} \{ X \ge t \} \le \mathbb{E} \exp\left(\theta (X-t)\right) = \exp(-\theta t) \,\mathbb{E}  \exp(\theta X) = \exp\left(-\theta t + \log \mathbb{E} \exp(\theta X) \right). \end{equation*}

The functions

    \begin{equation*} m_X(\theta) = \mathbb{E}  \exp(\theta X), \quad \xi_X(\theta) = \log \mathbb{E}  \exp(\theta X) = \log m_X(\theta) \end{equation*}

are known as the moment generating function and cumulant generating function of the random variable X.10These functions are so-named because they are the (exponential) generating functions of the (polynomial) moments \mathbb{E} X^k, k=0,1,2,\ldots, and the cumulants of X. With these notations, (15) can be written

(16)   \begin{equation*} \mathbb{P} \{ X \ge t \} \le\exp(-\theta t) m_X(\theta) = \exp\left(-\theta t + \xi_X(\theta) \right). \end{equation*}

The moment generating function coincides with the Laplace transform \mathbb{E} \exp(-\theta X) = m_X(-\theta) up to the sign of the parameter \theta, so one name for this approach to deriving concentration inequalities is the Laplace transform method. (This method is also known as the Cramér–Chernoff method.)

The cumulant generating function has an important property for deriving concentration inequalities for sums or averages of independent random variables: If X_1,\ldots,X_n are independent random variables, then the cumulant generating function is additive:11For proof, we compute m_{\sum_j X_j}(\theta) = \mathbb{E} \prod_j \exp(\theta X_j) = \prod_j \mathbb{E} \exp(\theta X_j), where the last equality uses independence. Taking logarithms proves the additivity.

(17)   \begin{equation*} \xi_{X_1 + \cdots + X_n}(\theta) = \xi_{X_1}(\theta) + \cdots + \xi_{X_n}(\theta). \end{equation*}
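
Before specializing to Hoeffding’s inequality, here is a computational illustration of the Laplace transform method (a sketch of mine, using Uniform[0,1] summands because their moment generating function (\mathrm{e}^\theta-1)/\theta has a closed form). It minimizes the bound \exp(-\theta s + n\,\xi(\theta)) obtained from (16) and (17) over \theta by a crude grid search and compares the result with the Hoeffding bound we are about to derive.

```python
import numpy as np

# Bound P{S - E[S] >= s} for the sum S = X_1 + ... + X_n of i.i.d. Uniform[0,1]
# variables via (16)+(17): inf_theta exp(-theta*s + n*xi(theta)),
# where xi is the cumulant generating function of X_1 - 1/2.

def xi_centered_uniform(theta):
    """CGF of X - 1/2 for X ~ Uniform[0,1]; the MGF of X is (e^theta - 1)/theta."""
    return np.log(np.expm1(theta) / theta) - theta / 2

n, s = 100, 5.0                                     # deviation of the sum by s
thetas = np.linspace(1e-3, 5, 100_000)              # crude grid search over theta > 0
log_bounds = -thetas * s + n * xi_centered_uniform(thetas)
laplace_bound = np.exp(log_bounds.min())

print("optimized Laplace transform bound:", laplace_bound)
print("one-sided Hoeffding bound        :", np.exp(-2 * s**2 / n))   # (b-a) = 1, t = s/n
```

For these summands the optimized bound is smaller than Hoeffding’s, reflecting that their variance 1/12 is smaller than the worst-case value (b-a)^2/4 = 1/4.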

Proving Hoeffding’s Inequality

For us to use the Laplace transform method, we need to either compute or bound the cumulant generating function. Since we are interested in general concentration inequalities which hold under minimal assumptions such as boundedness, we opt for the latter. Suppose a\le X \le b and consider the cumulant generating function of Y:=X-\mathbb{E}X. Then one can show the cumulant generating function bound12The bound (18) is somewhat tricky to establish, but we can establish the same result with a larger constant than 1/8. We have |Y| \le b-a =: c. Since the function \theta \mapsto \exp(\theta Y) is convex, we have the bound \exp(\theta Y) \le \exp(-\theta c) + (Y+c)\tfrac{\mathrm{e}^{\theta c} -\mathrm{e}^{-\theta c}}{2c}. Taking expectations, we have m_Y(\theta) \le \cosh(\theta c). One can show by comparing Taylor series that \cosh(\theta c) \le \exp(\theta^2 c^2/2). Therefore, we have \xi_Y(\theta) \le \theta^2c^2/2 = \theta^2(b-a)^2/2.

(18)   \begin{equation*} \xi_Y(\theta) \le \frac{1}{8} \theta^2(b-a)^2. \end{equation*}

Using the additivity of the cumulant generating function (17), we obtain the bound

    \begin{equation*} \xi_{\overline{X} - \mathbb{E} \overline{X}}(\theta) = \xi_{X_1/n- \mathbb{E}X_1/n}(\theta) + \cdots + \xi_{X_n/n- \mathbb{E}X_n/n}(\theta) \le \frac{1}{n} \cdot \frac{1}{8} \theta^2(b-a)^2. \end{equation*}

Plugging this into the probability bound (16), we obtain the concentration bound

(19)   \begin{equation*} \mathbb{P} \left\{ \overline{X} - \mathbb{E} \overline{X} \ge t  \right\} \le \exp \left( - \theta t +\frac{1}{n} \cdot  \frac{1}{8} \theta^2(b-a)^2 \right). \end{equation*}

We want to obtain the smallest possible upper bound on this probability, so it behooves us to pick the value of \theta > 0 which makes the right-hand side of this inequality as small as possible. To do this, we differentiate the contents of the exponential and set to zero, obtaining

    \begin{equation*} \frac{\mathrm{d}}{\mathrm{d} \theta} \left( - \theta t + \frac{1}{n} \cdot \frac{1}{8} \theta^2(b-a)^2\right) = - t + \frac{1}{n} \cdot \frac{1}{4} (b-a)^2 \theta = 0\implies \theta = \frac{4nt}{(b-a)^2} \end{equation*}

Plugging this value for \theta into the bound (19) gives a bound for \overline{X} being larger than \mathbb{E}\overline{X} + t:

(20)   \begin{equation*} \mathbb{P} \left\{ \overline{X} - \mathbb{E} \overline{X} \ge t  \right\} \le \exp \left( - \frac{2nt^2}{(b-a)^2} \right). \end{equation*}

To get the bound on \overline{X} being smaller than \mathbb{E}\overline{X} - t, we can apply a small trick. If we apply (20) to the summands -X_1,-X_2,\ldots,-X_n instead of X_1,\ldots,X_n, we obtain the bound

(21)   \begin{equation*} \mathbb{P} \left\{ \overline{X} - \mathbb{E} \overline{X} \le -t  \right\} \le \exp \left( - \frac{2nt^2}{(b-a)^2} \right). \end{equation*}

We can now combine the upper tail bound (20) with the lower tail bound (21) to obtain a “symmetric” bound on the probability that |\overline{X} - \mathbb{E}\overline{X}| \ge t. The means of doing this often goes by the fancy name of the union bound, but the idea is very simple:

    \begin{equation*} \mathbb{P}(\textnormal{A happens or B happens} )\le \mathbb{P}(\textnormal{A happens}) + \mathbb{P}(\textnormal{B happens}). \end{equation*}

Thus, applying this union bound idea with the upper and lower tail bounds (20) and (21), we obtain Hoeffding’s inequality, exactly as it appeared above as (8):

    \begin{align*} \mathbb{P}\left\{ |\overline{X} - \mathbb{E}\overline{X}| \ge t \right\} &= \mathbb{P}\left\{ \overline{X} - \mathbb{E}\overline{X} \ge t \textnormal{ or } \overline{X} - \mathbb{E}\overline{X} \le -t  \right\}\\ &\le \mathbb{P} \left\{ \overline{X} - \mathbb{E} \overline{X} \ge t  \right\} + \mathbb{P} \left\{ \overline{X} - \mathbb{E} \overline{X} \le -t  \right\}\\ &\le 2\exp \left( - \frac{2nt^2}{(b-a)^2} \right). \end{align*}

Voilà! Hoeffding’s inequality has been proven! Bernstein’s inequality is proven essentially the same way except that, instead of (18), we have the cumulant generating function bound

    \begin{equation*} \xi_Y(\theta) \le \frac{(\theta^2/2)\operatorname{Var}(Y)}{1-|\theta|B/3} \end{equation*}

for a random variable Y with mean zero and satisfying the bound |Y| \le B (this bound is valid for |\theta| < 3/B).
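
As an empirical sanity check (again, a sketch of mine rather than anything from the original post), one can compare the two-sided tail of an average of bounded random variables against Hoeffding’s bound 2\exp(-2nt^2/(b-a)^2):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 50, 200_000
X = rng.uniform(0, 1, size=(trials, n))          # summands bounded in [a, b] = [0, 1]
dev = np.abs(X.mean(axis=1) - 0.5)               # |X_bar - E X_bar| for each trial

for t in [0.05, 0.10, 0.15]:
    empirical = np.mean(dev >= t)
    hoeffding = 2 * np.exp(-2 * n * t**2)        # Hoeffding's bound with (b - a) = 1
    print(f"t = {t}: empirical = {empirical:.5f}, Hoeffding = {hoeffding:.5f}")
```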

Upshot: Randomness can be a very effective tool for solving computational problems, even those which seemingly have no connection to probability, like triangle counting. Concentration inequalities are a powerful tool for assessing how many samples are needed for an algorithm based on random sampling to work. Some of the most useful concentration inequalities are exponential concentration inequalities like Hoeffding’s and Bernstein’s, which show that an average of bounded random quantities is close to its expectation except with exponentially small probability.

Don’t Solve the Normal Equations

The (ordinary) linear least squares problem is as follows: given an m\times n matrix A and a vector b of length m, find the vector x such that Ax is as close to b as possible, when measured using the two-norm \| \cdot \|. That is, we seek to

(1)   \begin{equation*} \mbox{find } x \in\mathbb{R}^n \mbox{ such that }\| b - Ax \|^2 = \sum_{i=1}^m \left(b_i - \sum_{j=1}^n A_{ij} x_j \right)^2 \mbox{ is minimized}. \end{equation*}

From this equation, the name “least squares” is self-explanatory: we seek x which minimizes the sum of the squared discrepancies between the entries of b and Ax.

The least squares problem is ubiquitous in science, engineering, mathematics, and statistics. If we think of each row a_i of A as an input and its corresponding entry b_i of b as an output, then the solution x to the least squares model gives the coefficients of a linear model for the input–output relationship. Given a new, previously unseen input a_{\rm new}, our model predicts the output b_{\rm new} is approximately b_{\rm new} \approx a_{\rm new}^\top x = \sum_{i=1}^n x_i (a_{\rm new})_i. The vector x consists of coefficients for this linear model. The least squares solution satisfies the property that the average squared difference between the outputs b_i and the predictions a_i^\top x is as small as it could possibly be over all choices of coefficient vectors x.

How do we solve the least squares problem? A classical solution approach, ubiquitous in textbooks, is to solve a system of linear equations known as the normal equations. The normal equations associated with the least squares problem (1) are given by

(2)   \begin{equation*} A^\top A \,x = A^\top b. \end{equation*}

This system of equations always has a solution. If A has full column-rank, then A^\top A is invertible and the unique least squares solution to (1) is given by (A^\top A)^{-1} A^\top b. We assume that A has full column-rank for the rest of this discussion. To solve the normal equations in software, we compute A^\top A and A^\top b and solve (2) using a linear solver like MATLAB’s “\”.1Even better, we could use a Cholesky decomposition since the matrix A^\top A is positive definite. (As is generally true in matrix computations, it is almost never a good idea to explicitly form the inverse of the matrix A^\top A, or indeed any matrix.) We can also solve the normal equations using an iterative method like (preconditioned) conjugate gradient.

The purpose of this article is to advocate against the use of the normal equations for solving least squares problems, at least in most cases. So what’s wrong with the normal equations? The problem is not that the normal equations aren’t mathematically correct. Instead, the problem is that the normal equations often lead to poor accuracy for the least squares solution in computer arithmetic.

Most of the time when using computers, we store real numbers as floating point numbers.2One can represent rational numbers on a computer as fractions of integers and operations can be done exactly. However, this is prone to gross inefficiencies as the number of digits in the rational numbers can grow to be very large, making the storage and time to solve linear algebra problems with rationals dramatically more expensive. For these reasons, the vast majority of numerical computations use floating point numbers which store only a finite number of digits for any given real number. In this model, except for extremely rare circumstances, rounding errors during arithmetic operations are a fact of life. At a coarse level, the right model to have in your head is that real numbers on a computer are stored in scientific notation with only 16 decimal digits after the decimal point.3This is a simplification in multiple ways. First, computers store numbers in binary and thus, rather than storing 16 decimal digits after the decimal point, they store 52 binary digits. This amounts to roughly 16 decimal digits. Secondly, there are different formats for storing real numbers as floating point on a computer with different amounts of stored digits. The widely used IEEE double precision format has about 16 decimal digits of accuracy; the IEEE single precision format has roughly 8. When two numbers are added, subtracted, multiplied, and divided, the answer is computed and then rounded to 16 decimal digits; any extra digits of information are thrown away. Thus, the result of our arithmetic on a computer is the true answer to the arithmetic problem plus a small rounding error. These rounding errors are small individually, but solving an even modestly sized linear algebra problem requires thousands of such operations. Making sure many small errors don’t pile up into a big error is part of the subtle art of numerical computation.

To make a gross simplification, if one solves a system of linear equations Mx = c on a computer using a well-designed piece of software, one obtains an approximate solution \hat{x} which is, after accounting for the accumulation of rounding errors, close to x. But just how close the computed solution \hat{x} and the true solution x are depends on how “nice” the matrix M is. The “niceness” of a matrix M is quantified by a quantity known as the condition number of M, which we denote \kappa(M).4In fact, there are multiple definitions of the condition number depending on the norm which one uses to measure the sizes of vectors. Since we use the 2-norm, the appropriate 2-norm condition number \kappa(M) is the ratio \kappa(M) = \sigma_{\rm max}(M)/\sigma_{\rm min}(M) of the largest and smallest singular values of M. As a rough rule of thumb, the relative error between x and \hat{x} is roughly bounded as

(3)   \begin{equation*} \frac{\| \hat{x} - x \|}{\|x\|} \lessapprox \kappa(M)\times 10^{-16}. \end{equation*}

The “10^{-16}” corresponds to the fact that we have roughly 16 decimal digits of accuracy in double precision floating point arithmetic. Thus, if the condition number of M is roughly 10^{10}, then we should expect around 6 digits of accuracy in our computed solution.

The accuracy of the least squares problem is governed by its own condition number \kappa(A). We would hope that we can solve the least squares problem with an accuracy like the rule-of-thumb error bound (3) we had for linear systems of equations, namely a bound like \|\hat{x} - x\|/\|x\| \lessapprox \kappa(A)\times 10^{-16}. But this is not the kind of accuracy we get for the least squares problem when we solve it using the normal equations. Instead, we get accuracy like

(4)   \begin{equation*} \frac{\| \hat{x} - x \|}{\|x\|} \lessapprox \left(\kappa(A)\right)^2\times 10^{-16}. \end{equation*}

By solving the normal equations we effectively square the condition number! Perhaps this is not surprising as the normal equations also more-or-less square the matrix A by computing A^\top A. This squared condition number drastically affects the accuracy of the computed solution. If the condition number of A is 10^{8}, then the normal equations give us absolute nonsense for \hat{x}; we expect to get no digits of the answer x correct. Contrast this to above, where we were able to get 6 correct digits in the solution to Mx = c despite the condition number of M being 100 times larger than that of A!

All of this would be just a sad fact of life for the least squares problem if the normal equations and their poor accuracy properties were the best we could do. But we can do better! One can solve linear least squares problems by computing a so-called QR factorization of the matrix A.5In MATLAB, the least squares problem can be solved with QR factorization by calling “A\b”. Without going into details, the upshot is that solving the least squares problem with a well-designed6One way of computing the QR factorization is by Gram–Schmidt orthogonalization, but the accuracy properties of this are poor too. A gold-standard way of computing the QR factorization is by means of Householder reflectors, which has excellent accuracy properties. QR factorization requires a similar amount of time to solving the normal equations and has dramatically improved accuracy properties, achieving the desirable rule-of-thumb behavior7More precisely, the rule of thumb is like \|\hat{x} - x\|/\|x\| \lessapprox \kappa(A)\times 10^{-16} \times(1+ \kappa(A)\| b - Ax \|/(\|A\|\|b\|)). So even if we solve the least squares problem with QR factorization, we still get a squared condition number in our error bound, but this condition number squared is multiplied by the residual \|b-Ax\|, which is small if the least squares fit is good. The least squares solution is usually only interesting when the residual is small, thus justifying dropping it in the rule of thumb.

(5)   \begin{equation*} \frac{\| \hat{x} - x \|}{\|x\|} \lessapprox \kappa(A)\times 10^{-16}. \end{equation*}

I have not described how the QR factorization is accurately computed nor how to use the QR factorization to solve least squares problems nor even what the QR factorization is. All of these topics are explained excellently by the standard textbooks in this area, as well as by publicly available resources like Wikipedia. There’s much more that can be said about the many benefits of solving the least squares problem with the QR factorization,8E.g., it can work for sparse matrices while the normal equations often do not, it has superior accuracy to Gaussian elimination with partial pivoting even for solving linear systems, the “Q” matrix in the QR factorization can be represented implicitly as a product of easy-to-compute-with Householder reflectors which is much more efficient when m\gg n, etc. but in the interest of brevity let me just say this: TL;DR when presented in the wild with a least squares problem, the solution method one should default to is one based on a well-implemented QR factorization, not solving the normal equations.
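
To see the loss of accuracy concretely, here is a small numpy experiment of my own (not from this post): build a matrix A with condition number around 10^8, pose a consistent least squares problem, and solve it both by the normal equations and by QR. (numpy’s qr routine uses Householder reflectors under the hood.)

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 10

# Build A with condition number ~1e8 by prescribing its singular values.
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, -8, n)) @ V.T

x_true = rng.standard_normal(n)
b = A @ x_true                                   # consistent system: zero residual

# Normal equations: solve (A^T A) x = A^T b
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# QR-based solve: R x = Q^T b
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

rel = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("cond(A)            :", np.linalg.cond(A))
print("error, normal eqns :", rel(x_normal))
print("error, QR          :", rel(x_qr))
```

Based on the rules of thumb (4) and (5), one should expect roughly eight correct digits from the QR-based solution and essentially none from the normal equations.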

Suppose for whatever reason we don’t have a high quality QR factorization algorithm at our disposal. Must we then resort to the normal equations? Even in this case, there is a way we can reduce the problem of solving a least squares problem to a linear system of equations without squaring the condition number! (For those interested, to do this, we recognize the normal equations as a Schur complement of a somewhat larger system of linear equations and then solve that. See Eq. (7) in this post for more discussion of this approach.)
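
For the curious, here is a sketch of mine of one standard lifted formulation in this spirit; it may differ in details from the equation referenced in the linked post. The idea is to introduce the residual r = b - Ax as an extra unknown and solve a larger, unsquared system.

```python
import numpy as np

def lstsq_augmented(A, b):
    """Solve min ||b - Ax|| via the augmented ("lifted") system
       [[I, A], [A^T, 0]] [r; x] = [b; 0],  with r = b - Ax,
    which avoids explicitly forming A^T A."""
    m, n = A.shape
    K = np.block([[np.eye(m), A], [A.T, np.zeros((n, n))]])
    sol = np.linalg.solve(K, np.concatenate([b, np.zeros(n)]))
    return sol[m:]                                # trailing n entries are x

# Quick check against numpy's built-in least squares solver.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 5)), rng.standard_normal(50)
print(np.allclose(lstsq_augmented(A, b), np.linalg.lstsq(A, b, rcond=None)[0]))
```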

The title of this post Don’t Solve the Normal Equations is deliberately overstated. There are times when solving the normal equations is appropriate. If A is well-conditioned with a small condition number, squaring the condition number might not be that bad. If the matrix A is too large to store in memory, one might want to solve the least squares problem using the normal equations and the conjugate gradient method.

However, the dramatically reduced accuracy of solving the normal equations should disqualify the approach from being the de-facto way of solving least squares problems. Unless you have good reason to think otherwise, when you see A^\top A, solve a different way.

Sherman–Morrison for Integral Equations

In this post, I want to discuss two of my favorite topics in applied mathematics: the Sherman–Morrison formula and integral equations. As a bridge between these ideas, we’ll use an integral equation analog of the Sherman–Morrison formula to derive the solution for the Laplace equation with Dirichlet boundary conditions in the 2D disk.

Laplace’s Equation

Suppose we have a thin, flat (two-dimensional) plate of homogeneous material and we measure the temperature at the border. What is the temperature inside the material? The solution to this problem is described by Laplace’s equation, one of the most ubiquitous partial differential equations in physics. Let u(x,y) denote the temperature of the material at point (x,y). Laplace’s equation states that, at any point (x,y) on the interior of the material,

(1)   \begin{equation*} \frac{\partial^2}{\partial x^2} u(x,y) + \frac{\partial^2}{\partial y^2} u(x,y) = 0. \end{equation*}

Laplace’s equation (1) and the specification of the temperature on the boundary form a well-posed mathematical problem in the sense that the temperature is uniquely determined at each point (x,y).1A well-posed problem is also required to depend continuously on the input data which, in this case, are the boundary temperatures. Indeed, the Laplace problem with boundary data is well-posed in this sense. We call this problem the Laplace Dirichlet problem since the boundary conditions

    \begin{equation*} u(x,y) \quad \text{is specified for $(x,y)$ on the boundary} \end{equation*}

are known as Dirichlet boundary conditions.

The Double Layer Potential

Another area of physics where the Laplace equation (1) appears is the study of electrostatics. In this case, u(x,y) represents the electric potential at the point (x,y). The Laplace Dirichlet problem is to find the electric potential in the interior of the region with knowledge of the potential on the boundary.

The electrostatic application motivates a different way of thinking about the Laplace equation. Consider the following question:

How would I place electric charges on the boundary to produce the electric potential u(x,y) at each point (x,y) on the boundary?

This is a deliciously clever question. If I were able to find an arrangement of charges answering the question, then I could calculate the potential u(x,y) at each point (x,y) in the interior by adding up the contribution to the electric potential of each element of charge on the boundary. Thus, I can reduce the problem of finding the electric potential at each point (x,y) in the 2D region to finding a charge distribution on the 1D boundary of that region.

We shall actually use a slight variant of this charge distribution idea which differs in two ways:

  • Rather than placing simple charges on the boundary of the region, we place charge dipoles.2The reason for why this modification works better is an interesting question, but answering it properly would take us too far afield from the goals of this article.
  • Since we are considering a two-dimensional problem, we use a different formula for the electric potential than given by Coulomb’s law for charges in 3D. Also, since we are interested in solving the Laplace Dirichlet problem in general, we can choose a convenient dimensionless system of units. We say that the potential at a point (x,y) induced by a unit “charge” at the origin is given by 1/2\pi \cdot \ln \sqrt{x^2+y^2}.

With these modifications, our new question is as follows:

How would I place a density of “charge” dipoles \phi(x,y) on the boundary to produce the electric potential u(x,y) at each point (x,y) on the boundary?

We call this function \phi(x,y) the double layer potential for the Laplace Dirichlet problem. One can show the double layer potential satisfies a certain integral equation. To write down this integral equation, let’s introduce some more notation. Let R be the region of interest and \partial R its boundary. Denote points (x,y) concisely as vectors \mathbf{r} = (x,y), with the length of \mathbf{r} denoted \|\mathbf{r}\| = \sqrt{x^2+y^2}. The double layer potential satisfies

(2)   \begin{equation*} \frac{1}{2} \phi(\mathbf{r}) + \frac{1}{2\pi} \int_{\partial R} \phi(\mathbf{r}') \frac{\partial}{\partial \nu_{\mathbf{r}'}} \ln \|\mathbf{r}-\mathbf{r}'\|  \, dS(\mathbf{r}') = u(\mathbf{r}), \end{equation*}

where the integral is taken over the surface \partial R of the region R; \partial / \partial \nu_{\mathbf{r}'} denotes the directional derivative taken in the direction normal (perpendicular) to the surface at the point \mathbf{r}'. Note we choose a unit system for \phi which hides physical constants particular to the electrostatic context, since we are interested in applying this methodology to the Laplace Dirichlet problem in general (possibly non-electrostatic) applications.

There’s one last ingredient: How do we compute the electric potential u(\mathbf{r}) at points \mathbf{r} in the interior of the region? This is answered by the following formula:

(3)   \begin{equation*} u(\mathbf{r}) = \frac{1}{2\pi} \int_{\partial R} \phi(\mathbf{r}') \frac{\partial}{\partial \nu_{\mathbf{r}'}} \ln \|\mathbf{r}-\mathbf{r}'\| \, dS(\mathbf{r}'). \end{equation*}

The integral equation (2) is certainly nothing to sneeze at. Rather than trying to comprehend it in its full glory, we shall focus on a special case for the rest of our discussion. Suppose the region R is a circular disk with radius r centered at 0. The partial derivative in the integrand in (2) is then readily computed for points \mathbf{r} and \mathbf{r}' both on the boundary \partial R of the circle:

    \begin{equation*} \frac{\partial}{\partial \nu_{\mathbf{r}'}} \ln \|\mathbf{r}-\mathbf{r}'\| = \frac{1}{2r}. \end{equation*}

Substituting in (2) then gives

(4)   \begin{equation*} \frac{1}{2} \phi(\mathbf{r}) + \frac{1}{4\pi r} \int_{\partial R} \phi(\mathbf{r}')  \, dS(\mathbf{r}') = u(\mathbf{r}). \end{equation*}

The Sherman–Morrison Formula

We are interested in solving the integral equation (4) to obtain an expression for the double-layer potential \phi, as this will give us a solution formula u(\mathbf{r}) for the Laplace Dirichlet problem. Ultimately, we accomplish this by using a clever trick. In an effort to make this trick seem more self-evident and less of a “rabbit out of a hat”, I want to draw an analogy to a seemingly unrelated problem: rank-one updates to linear systems of equations and the Sherman–Morrison formula.3In accordance with Stigler’s law of eponymy, the Sherman–Morrison formula was actually discovered by William J. Duncan five years before Sherman, Morrison, and Woodbury. For a more general perspective on the Sherman–Morrison formula and its generalization to the Sherman–Morrison–Woodbury formula, you may be interested in the following post of mine on Schur complements and block Gaussian elimination.

Suppose we want to solve the system of linear equations

(5)   \begin{equation*} (A + uv^\top) x = b, \end{equation*}

where A is an n\times n square matrix and u, v, and b are length-n vectors. We are ultimately interested in finding x from b. To gain insight into this problem, it will be helpful to first carefully consider the problem in reverse: computing b from x. We could, of course, perform this computation by forming the matrix A + uv^\top in memory and multiplying it with x, but there is a more economical way:

  1. Form \alpha = v^\top x.
  2. Compute b = \alpha u + Ax.

Standing back, observe that we now have a system of n+1 equations for unknowns x and \alpha. Specifically, our first equation can be rewritten as

    \begin{equation*} -\alpha + v^\top x = 0 \end{equation*}

which combined with the second equation

    \begin{equation*} \alpha u + Ax = b \end{equation*}

gives the n+1 by n+1 system4This “state space approach” of systematically writing out a matrix–vector multiply algorithm and then realizing this yields a larger system of linear equations was initially taught to me by my mentor Shiv Chandrasekaran; this approach has much more powerful uses, such as in the theory of rank-structured matrices.

(6)   \begin{equation*} \begin{bmatrix} - 1 & v^\top \\ u & A \end{bmatrix} \begin{bmatrix} \alpha \\ x \end{bmatrix} = \begin{bmatrix} 0 \\ b \end{bmatrix}. \end{equation*}

The original equation for x (5) can be derived from the “lifted” equation (6) by applying Gaussian elimination and eliminating the first row of the linear system (6). But now that we have the lifted equation (6), one can naturally wonder what would happen if we instead used Gaussian elimination to eliminate the last n rows of (6); this will give us an equation for \alpha = v^\top x which we can solve without first computing x. Doing this so-called block Gaussian elimination yields

    \begin{equation*} (-1-v^\top A^{-1}u)\alpha = -v^\top A^{-1} b. \end{equation*}

Solving this, we deduce that

    \begin{equation*} \alpha = \frac{v^\top A^{-1} b}{1 + v^\top A^{-1}u}. \end{equation*}

From the equation \alpha u + Ax = b, we have that

    \begin{equation*} x = (A+uv^\top)^{-1}b = A^{-1}b - \alpha A^{-1} u = A^{-1}b - \frac{A^{-1}uv^\top A^{-1} b}{1 + v^\top A^{-1}u}. \end{equation*}

Since this formula holds for every vector b, we deduce the famous Sherman–Morrison formula

    \begin{equation*} (A+uv^\top)^{-1}= A^{-1} - \frac{A^{-1}uv^\top A^{-1}}{1 + v^\top A^{-1}u}. \end{equation*}
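
As a quick numerical sanity check of mine, we can verify the formula (and the expression for x derived above) on random data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)      # comfortably invertible
u, v, b = rng.standard_normal((3, n))

Ainv = np.linalg.inv(A)
alpha = (v @ Ainv @ b) / (1 + v @ Ainv @ u)          # alpha = v^T x
x_sm = Ainv @ b - alpha * (Ainv @ u)                 # Sherman-Morrison solution
x_direct = np.linalg.solve(A + np.outer(u, v), b)    # solve (A + u v^T) x = b directly

print(np.allclose(x_sm, x_direct))                   # True
```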

This example shows how it can be conceptually useful to lift a linear system of equations by adding additional variables and equations and then “do Gaussian elimination in a different order”. The same insight shall be useful in solving integral equations like (4).

Solving for the Double Layer Potential

Let’s try repeating the playbook we executed for the rank-one-updated linear system (5) and apply it to the integral equation (4). We are ultimately interested in computing \phi(\cdot) from u(\cdot) but, as we did last section, let’s first consider the reverse. To compute u(\cdot) from \phi(\cdot), we first evaluate the integral

    \begin{equation*} \alpha =  \int_{\partial R} \phi(\mathbf{r}') \, dS(\mathbf{r}'). \end{equation*}

Substituting this into (4) gives the system of equations

(7)   \begin{equation*} \frac{1}{4\pi r} \alpha + \frac{1}{2} \phi(\mathbf{r})  = u(\mathbf{r}), \end{equation*}

(8)   \begin{equation*} -\alpha + \int_{\partial R} \phi(\mathbf{r}') \, dS(\mathbf{r}') = 0. \end{equation*}

In order to obtain (4) from (7) and (8), we add 1/4\pi r times equation (8) to equation (7). Following last section, we now instead eliminate \phi(\mathbf{r}) from equation (8) using equation (7). To do this, we need to integrate equation (7) in order to cancel the integral in equation (8):

    \begin{equation*} \frac{1}{4\pi r} \alpha \underbrace{\int_{\partial R} \, dS(\mathbf{r}')}_{=2\pi r} + \frac{1}{2} \int_{\partial R} \phi(\mathbf{r}') \, dS(\mathbf{r}')  = \frac{1}{2} \alpha + \frac{1}{2} \int_{\partial R} \phi(\mathbf{r}') \, dS(\mathbf{r}') = \int_{\partial R} u(\mathbf{r}') \, dS(\mathbf{r}'). \end{equation*}

Subtracting equation (8) from 2 times this integrated equation yields

    \begin{equation*} 2\alpha = 2\int_{\partial R} u(\mathbf{r}') \, dS(\mathbf{r}') \implies \alpha = \int_{\partial R} u(\mathbf{r}') \, dS(\mathbf{r}'). \end{equation*}

Thus plugging this expression for \alpha into equation (7) yields

    \begin{equation*} \phi(\mathbf{r}) = 2u(\mathbf{r}) - \frac{1}{2\pi r} \alpha = 2u(\mathbf{r}) - \frac{1}{2\pi r} \int_{\partial R} u(\mathbf{r}') \, dS(\mathbf{r}'). \end{equation*}

We’ve solved for our double layer potential!
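
Here is a short numerical check of mine that this formula really does solve (4): discretize the circle with equispaced points (a trapezoid rule), form \phi from the formula, and plug it back into the left-hand side of (4). The boundary data u is an arbitrary smooth choice.

```python
import numpy as np

r, N = 2.0, 400
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
ds = r * (2 * np.pi / N)                             # arc-length element on the circle

u = 1.0 + np.cos(3 * theta) + 0.5 * np.sin(theta)    # some boundary data on the circle

# Double layer potential from the derived formula: phi = 2u - (1/(2 pi r)) \int u dS
phi = 2 * u - (1 / (2 * np.pi * r)) * np.sum(u * ds)

# Left-hand side of the integral equation (4): (1/2) phi + (1/(4 pi r)) \int phi dS
lhs = 0.5 * phi + (1 / (4 * np.pi * r)) * np.sum(phi * ds)
print(np.max(np.abs(lhs - u)))                       # agrees with u to machine precision
```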

As promised, the double layer potential can be used to give a solution formula (known as the Poisson integral formula) for the Laplace Dirichet problem. The details are a mechanical, but also somewhat technical, exercise in vector calculus identities. We plug through the details in the following extra section.

Poisson Integral Formula
Let’s finish this up by using the double layer potential to derive a solution formula for the electric potential u(\mathbf{r}) at a point \mathbf{r} in the interior of the region. To do this, we use equation (3):

(9)   \begin{align*} u(\mathbf{r}) &= \frac{1}{2\pi} \int_{\partial R} \left( 2u(\mathbf{r}') - \frac{1}{2\pi r} \int_{\partial R} u(\mathbf{r}'') \, dS(\mathbf{r}'') \right) \frac{\partial}{\partial \nu_{\mathbf{r}'}} \ln \|\mathbf{r}-\mathbf{r}'\| \, dS(\mathbf{r}') \\ &= \frac{1}{\pi} \int_{\partial R} u(\mathbf{r}')\frac{\partial}{\partial \nu_{\mathbf{r}'}} \ln \|\mathbf{r}-\mathbf{r}'\| \, dS(\mathbf{r}') - \frac{1}{4\pi^2 r} \int_{\partial R} \frac{\partial}{\partial \nu_{\mathbf{r}'}} \ln\|\mathbf{r}-\mathbf{r}'\|  \, dS(\mathbf{r}')\int_{\partial R} u(\mathbf{r}') \, dS(\mathbf{r}'). \end{align*}

We now need to do a quick calculation which is somewhat technical and not particularly enlightening. We evaluate \frac{1}{2\pi}\int_{\partial R} \frac{\partial}{\partial \nu_{\mathbf{r}'}} \ln \|\mathbf{r}-\mathbf{r}'\|  \, dS(\mathbf{r}') using the divergence theorem:

    \begin{align*} \frac{1}{2\pi}\int_{\partial R} \frac{\partial}{\partial \nu_{\mathbf{r}'}} \ln \|\mathbf{r}-\mathbf{r}'\|  \, dS(\mathbf{r}') &= \frac{1}{2\pi}\int_{\partial R} \nu_{\mathbf{r}'}\cdot \nabla_{\mathbf{r}'} \ln \|\mathbf{r}-\mathbf{r}'\|  \, dS(\mathbf{r}') = \frac{1}{2\pi}\int_R \nabla_{\mathbf{r}'}^2 \ln\|\mathbf{r}-\mathbf{r}'\|  \, d\mathbf{r}' \\ &= \int_R \delta(\mathbf{r}'-\mathbf{r}) \, d\mathbf{r}' = 1. \end{align*}

Here, \nabla denotes the gradient, \nu_{\mathbf{r}'} the normal vector to \partial R at the point \mathbf{r}', \cdot the dot product, \nabla^2 the Laplace operator, and \delta the Dirac delta “function”. The last equality holds because the function v(\mathbf{r}') = \tfrac{1}{2\pi} \ln \| \mathbf{r} - \mathbf{r}'\| is a so-called fundamental solution for the Laplace equation in the sense that \nabla_{\mathbf{r}'}^2 v(\mathbf{r}') = \delta(\mathbf{r}'-\mathbf{r}). Therefore, (9) simplifies to

    \begin{equation*} u(\mathbf{r}) = \frac{1}{\pi} \int_{\partial R} u(\mathbf{r}')\frac{\partial}{\partial \nu_{\mathbf{r}'}} \ln \|\mathbf{r}-\mathbf{r}'\| \, dS(\mathbf{r}') - \frac{1}{2\pi r} \int_{\partial R} u(\mathbf{r}') \, dS(\mathbf{r}'). \end{equation*}

Computing the boundary derivative for the circular region centered at the origin with radius r, we obtain the formula

    \begin{equation*} u(\mathbf{r}) = \frac{1}{2\pi r} \int_{\partial R} \left( 2\frac{r^2 - \mathbf{r} \cdot \mathbf{r}'}{\|\mathbf{r} - \mathbf{r}'\|^2} - 1 \right) u(\mathbf{r}') \, dS(\mathbf{r}') = \frac{r^2 - \| \mathbf{r} \|^2}{2\pi r} \int_{\partial R} \frac{u(\mathbf{r}')}{\|\mathbf{r} - \mathbf{r}'\|^2} \, dS(\mathbf{r}'). \end{equation*}

We’ve succeeded at deriving a solution formula for u(\mathbf{r}) for points \mathbf{r} in the interior of the disk in terms of u(\mathbf{r}') for points \mathbf{r}' on the boundary of the disk. This is known as the Poisson integral formula for the disk in two dimensions. This formula can be generalized to balls in higher dimensions, though this proof technique using “Sherman–Morrison” fails to work in more than two dimensions.
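
As one last check of mine, the Poisson integral formula can be tested numerically against a known harmonic function such as u(x,y) = x^2 - y^2:

```python
import numpy as np

r, N = 1.0, 4000
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
bdry = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)   # points on the circle
ds = r * (2 * np.pi / N)

u = lambda p: p[0] ** 2 - p[1] ** 2              # a harmonic function on the plane
u_bdry = bdry[:, 0] ** 2 - bdry[:, 1] ** 2       # its boundary values

p = np.array([0.3, -0.4])                        # an interior point, ||p|| < r
kernel = (r**2 - p @ p) / (2 * np.pi * r) / np.sum((bdry - p) ** 2, axis=1)
print(np.sum(kernel * u_bdry * ds), u(p))        # the two values should agree closely
```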

Sherman–Morrison for Integral Equations

Having achieved our main goal of deriving a solution formula for the 2D Laplace Dirichlet problem for a circular domain, I want to take a step back to present the approach from two sections ago in more generality. Consider a more general integral equation of the form

(10)   \begin{equation*} a \, \phi(\mathbf{x}) + \int_\Omega K(\mathbf{x},\mathbf{y}) \phi(\mathbf{y}) \, d\mathbf{y} = f(\mathbf{x}), \quad \mathbf{x} \in \Omega, \end{equation*}

where \Omega is some region in space, K(\cdot,\cdot), f(\cdot), and \phi(\cdot) are functions of one or two arguments on \Omega, and a\ne 0 is a nonzero constant. Such an integral equation is said to be of the second kind. The integral equation for the Laplace Dirichlet problem (2) is of this form with \Omega = \partial R, a = 1/2, K(\mathbf{x},\mathbf{y}) = \tfrac{\partial}{\partial \nu_{\mathbf{y}}} \ln \|\mathbf{x} - \mathbf{y}\|, and f(\mathbf{x}) = u(\mathbf{x}). We say the kernel K(\cdot,\cdot) is separable with rank k if K(\cdot,\cdot) can be expressed in the form

    \begin{equation*} K(\mathbf{x},\mathbf{y}) = g_1(\mathbf{x})h_1(\mathbf{y}) + g_2(\mathbf{x})h_2(\mathbf{y}) + \cdots + g_k(\mathbf{x})h_k(\mathbf{y}). \end{equation*}

With the circular domain, the Laplace Dirichlet integral equation (2) is separable with rank k = 1.5E.g., set g_1(\mathbf{x}) = 1 and h_1(\mathbf{y}) = 1/4\pi r. We shall focus on the second kind integral equation (10) assuming the kernel is separable with rank 1 (for simplicity, we set a = 1):

(11)   \begin{equation*} \phi(\mathbf{x}) + g(\mathbf{x}) \int_\Omega h(\mathbf{y}) \phi(\mathbf{y}) \, d\mathbf{y} = f(\mathbf{x}), \quad \mathbf{x} \in \Omega. \end{equation*}

Let’s try and write this equation in a way that’s more similar to the linear system of equations (5). To do this, we make use of linear operators defined on functions:

  • Let \operatorname{Id} denote the identity operator on functions: It takes as input a function \phi(\cdot) and outputs the function \phi(\cdot) unchanged.
  • Let I_h denote the “integration against h operator”: It takes as input a function \phi(\cdot) and outputs the number \int_\Omega h(\mathbf{y})\phi(\mathbf{y}) \, d\mathbf{y}.

With these notations, equation (11) can be written as

    \begin{equation*} (\operatorname{Id} + g I_h) \phi = f. \end{equation*}

Using the same derivation which led to the Sherman–Morrison formula for linear systems of equations, we can apply the Sherman–Morrison formula to this integral equation in operator form, yielding

    \begin{equation*} \phi = \left( \operatorname{Id}^{-1} - \frac{\operatorname{Id}^{-1}g I_h \operatorname{Id}^{-1}}{1 + I_h \operatorname{Id}^{-1} g} \right)f = f - \frac{g (I_h f)}{1+I_h g}. \end{equation*}

Therefore, the solution to the integral equation (11) is

    \begin{equation*} \phi(\mathbf{x}) = f(\mathbf{x}) - \frac{\int_\Omega h(\mathbf{y}) f(\mathbf{y}) \, d\mathbf{y}}{1 + \int_\Omega h(\mathbf{y})g(\mathbf{y}) \, d\mathbf{y} } g(\mathbf{x}). \end{equation*}

This can be interpreted as a kind of Sherman–Morrison formula for the integral equation (11).
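
To see this formula in action, the sketch below (my own, with arbitrary smooth choices of g, h, and f on \Omega = [0,1]) discretizes the rank-one equation (11) with a simple quadrature rule and checks the formula against a direct dense solve of the discretized operator.

```python
import numpy as np

# Second-kind integral equation on Omega = [0, 1] with a rank-one kernel:
#   phi(x) + g(x) * \int_0^1 h(y) phi(y) dy = f(x)
N = 400
x = (np.arange(N) + 0.5) / N             # midpoint quadrature nodes
w = np.full(N, 1.0 / N)                  # quadrature weights

g, h, f = np.exp(x), np.cos(x), x**2     # arbitrary smooth choices for illustration

# Direct approach: discretize the operator Id + g I_h and solve the linear system
K = np.eye(N) + np.outer(g, h * w)
phi_direct = np.linalg.solve(K, f)

# "Sherman-Morrison" formula for the integral equation
phi_formula = f - g * np.dot(h * w, f) / (1 + np.dot(h * w, g))

print(np.max(np.abs(phi_direct - phi_formula)))   # ~ machine precision
```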

One can also generalize to provide a solution formula for the second-kind integral equation (10) for a separable kernel with rank k; in this case, the natural matrix analog is now the Sherman–Morrison–Woodbury identity rather than the Sherman–Morrison formula. Note that this solution formula requires the solution of a k\times k system of linear equations. One can use this as a numerical method to solve second-kind integral equations: First, we approximate K by a separable kernel of a modest rank k and then compute the exact solution of the resulting integral equation with the approximate kernel.6A natural question is why one might want to solve an integral equation formulation of a partial differential equation like the Laplace or Helmholtz equation. An answer is that formulations based on second-kind integral equations tend to lead to systems of linear equations which are much better conditioned than those arising from other methods like the finite element method. They have a number of computational difficulties as well, as the resulting linear systems of equations are dense and may require elaborate quadrature rules to accurately compute.

My goal in writing this post was to discuss two topics which are both near and dear to my heart, integral equations and the Sherman–Morrison formula. I find the interplay of these two ideas to be highly suggestive. It illustrates the power of the analogy between infinite-dimensional linear equations, like differential and integral equations, and finite-dimensional ones, which are described by matrices. Infinite dimensions certainly do have their peculiarities and technical challenges, but it can be very useful to first pretend infinite-dimensional linear operators (like integral operators) are matrices, do calculations to derive some result, and then justify these computations rigorously post hoc.7The utility of this technique is somewhat of an open secret among some subset of mathematicians and scientists, but such heuristics are usually not communicated to students explicitly, at least in rigorous mathematics classes.

The Vandermonde Decomposition

In this post, I want to discuss a beautiful and somewhat subtle matrix factorization known as the Vandermonde decomposition that appears frequently in signal processing and control theory. We’ll begin from the very basics, introducing the controls-and-signals context, how the Vandermonde decomposition comes about, and why it’s useful. By the end, I’ll briefly share how we can push the Vandermonde decomposition beyond matrices to the realm of tensors, which can allow us to separate mixed signals from multiple measurements. This tensorial generalization plays an important role in my paper (L_r,L_r,1)-decompositions, sparse component analysis, and the blind separation of sums of exponentials, joint work with Nithin Govindajaran and Lieven De Lathauwer, which recently appeared in the SIAM Journal on Matrix Analysis and Applications.

Finding the Frequencies

Suppose I give you a short recording of a musical chord consisting of three notes. How could you determine which three notes they were? Mathematically, we can represent such a three-note chord as a combination of scaled and shifted cosine functions

(1)   \[f(t) = a_0 \cos(\omega_0 t - \phi_0) + a_1 \cos(\omega_1 t - \phi_1) + a_2 \cos(\omega_2 t - \phi_2). \]

We are interested in obtaining the (angular) frequencies \omega_0, \omega_1, and \omega_2.

In the extreme limit, when we are given the values of the signal for all t, both positive and negative, the frequencies are immediately given by taking a Fourier transform of the function f(\cdot). In practice, we only have access to the function f at certain times t_0,\ldots,t_{n-1} which we assume are equally spaced

    \[t_j = j\Delta t \quad \textnormal{for} \quad j = 0,1,2,\ldots,n-1.\]

Given the samples

    \[f_j = f(t_j) \quad \textnormal{for} \quad j = 0,1,2,\ldots,n-1\]

we could try to identify \omega_0, \omega_1, and \omega_2 using a discrete Fourier transform.1The discrete Fourier transform can be computed very quickly using the fast Fourier transform, as I discussed in a previous post. Unfortunately, this generally requires a large number of samples to identify \omega_0, \omega_1, and \omega_2 accurately. (The accuracy scales roughly like 1/n, where n is the number of samples.) We are interested in finding a better way to identify the frequencies.

Now that we’ve moved from the function f(\cdot), defined for any real input t, to a set of samples f_0,f_1,\ldots,f_{n-1} it will be helpful to rewrite our formula (1) for f in a different way. By Euler’s identity, the cosines can be rewritten as

    \[\cos \alpha = \frac{\mathrm{e}^{\mathrm{i} \alpha}+\mathrm{e}^{-\mathrm{i} \alpha}}{2}.\]

As a consequence, we can rewrite one of the frequency components in (1) as

    \[a_0 \cos(\omega_0 t - \phi_0) = d_0 \mathrm{e}^{\mathrm{i} \omega_0t} + d_1 \mathrm{e}^{-\mathrm{i} \omega_0t}.\]

Here, d_0 and d_1 are complex coefficients d_0 = a_0 \mathrm{e}^{-\mathrm{i} \phi_0}/2 and d_1 = a_0 \mathrm{e}^{\mathrm{i} \phi_0}/2 which contain the same information as the original parameters a_0 and \phi_0. Now notice that we are only interested in values t_j = j\, \Delta t which are multiples of the spacing \Delta t. Thus, our frequency component can be further rewritten as

    \[a_0 \cos(\omega_0 t_j - \phi_0) = d_0 z_0^j + d_1 z_1^j\]

where z_0 := \mathrm{e}^{\mathrm{i} \omega_0\, \Delta t} and z_1 := \mathrm{e}^{-\mathrm{i}\omega_0\, \Delta t}. Performing these reductions, our samples f_j take the form

(2)   \[f_j = d_0 z_0^j + d_1 z_1^j + \cdots + d_5 z_5^j. \]

We’ve now reformulated our frequency-finding problem as the problem of identifying the parameters d_0,\ldots,d_5 and z_0,\ldots,z_5 in the relation (2) from a small number of measurements f_0,f_1,\ldots,f_{n-1}.

Frequency Finding as a Matrix Factorization

We will return to the algorithmic problem of identifying the parameters in the relation (2) from measurements in a little bit. First, we will see that (2) can actually be written as a matrix factorization. Understanding computations by matrix factorization has been an extremely successful paradigm in applied mathematics, and we will see in this post how viewing (2) as a matrix factorization can be very useful.

While it may seem odd at first,2As pointed out to me on math stack exchange, one reason forming the Hankel matrix is sensible is because it effectively augments the sequence of numbers f_0,f_1,\ldots,f_{n-1} into a sequence of vectors given by the columns of H. This can reveal patterns in the sequence which are less obvious when it is given just as a list of numbers. For instance, any seven columns of H are linearly dependent, a surprising fact since the columns of H have length (n-1)/2+1 which can be much larger than seven. In addition, as we will effectively exploit later, vectors in the nullspace of H (or related Hankel matrices derived from the sequence) give recurrence relations obeyed by the sequence. This speaks to a general phenomenon where properties of a sequence (say, arising from snapshots of a dynamical system) can sometimes become more clear by this procedure of delay embedding. it will be illuminating to repackage the measurements f_0,f_1,\ldots,f_{n-1} as a matrix:

(3)   \[H = \begin{bmatrix} f_0 & f_1 & f_2 & \cdots & f_{(n-1)/2} \\f_1 & f_2 & f_3 & \cdots & f_{(n-1)/2+1} \\f_2 & f_3 & f_4 & \cdots & f_{(n-1)/2+2} \\\vdots & \vdots & \vdots & \ddots & \vdots \\f_{(n-1)/2} & f_{(n-1)/2+1} & f_{(n-1)/2+2} & \cdots & f_{n-1} \end{bmatrix}. \]

Here, we have assumed n is odd. The matrix H is known as the Hankel matrix associated with the sequence f_0,\ldots,f_{n-1}. Observe that the entry in position ij of H depends only on the sum of the indices i and j, H_{ij} = f_{i+j}. (We use a zero-indexing system to label the rows and columns of H where, for instance, the first row of H is row 0.)

Let’s see how we can interpret the frequency decomposition (2) as a factorization of the Hankel matrix H. We first write out H_{ij} using (2):

(4)   \[H_{ij} = f_{i+j} = \sum_{k=0}^5 d_k z_k^{i+j} = \sum_{k=0}^5 d_k z_k^i \cdot z_k^j. \]

The power z_k^{i+j} was just begging to be factorized as z_k^i\cdot z_k^j, which we did. Equation (4) almost looks like the formula for the product of two matrices with entries z_k^i, so it makes sense to introduce the 6\times ((n-1)/2+1) matrix V with entries V_{ki} = z_k^i. This is a so-called Vandermonde matrix associated with z_0,\ldots,z_5 and has the form

    \[V = \begin{bmatrix}z_0^0 & z_0^1 & z_0^2 & \cdots & z_0^{(n-1)/2} \\z_1^0 & z_1^1 & z_1^2 & \cdots & z_1^{(n-1)/2} \\\vdots & \vdots & \vdots & \ddots & \vdots \\z_5^0 & z_5^1 & z_5^2 & \cdots & z_5^{(n-1)/2}\end{bmatrix}.\]

If we also introduce the 6\times 6 diagonal matrix D = \operatorname{diag}(d_0,d_1,\ldots,d_5), the formula (4) for H can be written as the matrix factorization3In the Vandermonde decomposition H=V^\top D V, the factor V appears transposed even when V is populated with complex numbers! This differs from the usual case in linear algebra where we use the conjugate transpose rather than the ordinary transpose when working with complex matrices. As a related issue, observe that if at least one of the measurements f_0,\ldots,f_{n-1} is a (non-real) complex number, the Hankel matrix H is symmetric but not Hermitian.

(5)   \[H = V^\top D V. \]

This is the Vandermonde decomposition of the Hankel matrix H, a factorization of H as a product of the transpose of a Vandermonde matrix, a diagonal matrix, and that same Vandermonde matrix.

The Vandermonde decomposition immediately tells us all the information d_0,\ldots,d_5 and z_0,z_1,\ldots,z_5 describing our sampled recording f_0,\ldots,f_{n-1} via (2). Thus, the problem of determining d_0,\ldots,d_5 and z_0,z_1,\ldots,z_5 is equivalent to finding the Vandermonde decomposition (5) of the Hankel matrix H.
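
The following numpy sketch of mine builds samples of the form (2) from three frequencies, forms the Hankel matrix (3), and verifies the factorization (5) numerically. (The spacing is taken to be \Delta t = 1, and the frequencies and coefficients are arbitrary choices for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 21                                            # an odd number of samples
j = np.arange(n)

# Signal f_j = sum_k d_k z_k^j: three real frequencies give six z's in conjugate pairs
omegas = np.array([1.0, 2.3, 3.7])                # arbitrary angular frequencies, dt = 1
z = np.concatenate([np.exp(1j * omegas), np.exp(-1j * omegas)])
c = rng.standard_normal(3) + 1j * rng.standard_normal(3)
d = np.concatenate([c, c.conj()])                 # conjugate pairs keep f_j real
f = (d[:, None] * z[:, None] ** j).sum(axis=0)

# Hankel matrix H_{ij} = f_{i+j} and the Vandermonde factors
m = (n - 1) // 2
H = np.array([[f[i + k] for k in range(m + 1)] for i in range(m + 1)])
V = z[:, None] ** np.arange(m + 1)
D = np.diag(d)

print(np.max(np.abs(H - V.T @ D @ V)))            # ~ machine precision
```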

Computing the Vandermonde Decomposition: Prony’s Method

Computing the Vandermonde decomposition accurately can be a surprisingly hard task, particularly if the measurements f_0,f_1,\ldots,f_{n-1} are corrupted by even a small amount of measurement error. In view of this, I want to present a very classical way of computing this decomposition (dating back to 1795!) known as Prony’s method. This method is conceptually simple and will be a vehicle to continue exploring frequency finding and its connection with Hankel matrices. It remains in use, though its accuracy may be significantly worse compared to other methods.

As a first step to deriving Prony’s method, let’s reformulate the frequency finding problem in a different way. Sums of cosines like the ones in our expression (1) for the function f(\cdot) often appear as the solution to a (linear) ordinary differential equation (ODE). This means that one way we could find the frequencies comprising f(\cdot) would be to find a differential equation which f(\cdot) satisfies. Together with the initial condition f(0), determining all the frequencies would be very straightforward.

Since we only have access to samples f_0, f_1,\ldots,f_{n-1} of f(\cdot) at regular time intervals, we will instead look for the “discrete-time” analog of a linear ODE, a linear recurrence relation. This is an expression of the form

(6)   \[f_m = c_{k-1} f_{m-1} + \cdots + c_1f_{m-k+1}+ c_0 f_{m-k} \quad \textnormal{for every} \quad m = k,\,{k+1},\,{k+2},\cdots. \]

In our case, we’ll have k = 6 because there are six terms in the formula (2) for f_j. Together with initial conditions f_0,f_1,\ldots,f_{k-1}, such a recurrence will allow us to determine the parameters z_0,\ldots,z_5 and d_0,\ldots,d_5 in our formula (2) for our sampled recordings f_0,\ldots,f_{n-1} and hence also allow us to compute the Vandermonde decomposition (5).

Observe that the recurrence (6) is a linear equation in the variables c_0,\ldots,c_5. A very good rule of thumb in applied mathematics is to always write down linear equations in matrix–vector notation to see how they look. Doing this, we obtain

(7)   \[\begin{bmatrix} f_6 \\ f_7 \\ \vdots \\ f_{n-1} \end{bmatrix} = \underbrace{\begin{bmatrix} f_0 & f_1 & f_2 & f_3 & f_4 & f_5 \\ f_1 & f_2 & f_3 & f_4 & f_5 & f_6 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ f_{n-7} & f_{n-6} & f_{n-5} & f_{n-4} & f_{n-3} & f_{n-2} \end{bmatrix}}_{=F}\begin{bmatrix} c_0 \\ c_1 \\ \vdots \\ c_5 \end{bmatrix}. \]

Observe that the matrix on the right-hand side of this equation is also a Hankel matrix (like H in (3)) formed from the samples f_0,\ldots,f_{n-1}. Call this Hankel matrix F. Unlike H in (3), F is rectangular. If n is much larger than 6, F will be tall, possessing many more rows than columns. We assume n > 12 going forward.4n=12 would also be fine for our purposes, but we assume n > 12 to illustrate this highly typical case.

Let’s write (7) a little more compactly as

(8)   \[f_{6 \, :\, n-1} = F c, \]

where we’ve introduced f_{6\,:\, n-1} for the vector on the left-hand side of (7) and collected the recurrence coefficients c_0,\ldots,c_5 into a vector c. For a typical system of linear equations like (8), we would predict the system to have no solution c: because F has more rows than columns (if n > 12), the system of equations (8) has more equations than unknowns. Fortunately, we are not in the typical case. Despite the fact that we have more equations than unknowns, the linear equations (8) have a unique solution c.5This solution can be computed by solving the 6\times 6 system of linear equations \begin{bmatrix} f_6 \\ f_7 \\ \vdots \\ f_{11} \end{bmatrix} = \begin{bmatrix} f_0 & f_1 & \cdots & f_5 \\ f_1 & f_2 & \cdots & f_6 \\ \vdots & \vdots & \ddots & \vdots \\ f_{5} & f_6 & \cdots & f_{11} \end{bmatrix}\begin{bmatrix} c_0 \\ c_1 \\ \vdots \\ c_5\end{bmatrix}. In particular, the matrix on the right-hand side of this equation is guaranteed to be nonsingular under our assumptions. Using the Vandermonde decomposition, can you see why? The existence of a unique solution is a consequence of the fact that the samples f_0,\ldots,f_{n-1} satisfy the formula (2). As a fun exercise, you might want to verify the existence of a unique c satisfying (8)!

As a quick aside, if the measurements f_0,\ldots,f_{n-1} are corrupted by small measurement errors, then the equations (8) will usually not possess a solution. In this case, it would be appropriate to find the least squares solution to equation (8) as a way of mitigating these errors.

Hurrah! We’ve found the coefficients c_0,\ldots,c_5 providing a recurrence relation (6) for our measurements f_0,\ldots,f_{n-1}. All that is left is to find the parameters z_0,\ldots,z_5 and d_0,\ldots,d_5 in our signal formula (2) and the Vandermonde decomposition (5). Fortunately, this is just a standard computation for linear recurrence relations, nicely paralleling the solution of (homogeneous) linear ODEs by means of the so-called “characteristic equation”. I’ll go through this fairly quickly since this material is well-explained elsewhere on the internet (like Wikipedia). Let’s guess that our recurrence (6) has a solution of the form f_j = z^j; we seek to find all complex numbers z for which this is a bona fide solution. Plugging this guess into the recurrence (6) at m = 6 gives

(9)   \[z^6 = c_5 z^5 + c_4 z^4 + \cdots + c_1 z + c_0. \]

This is the so-called characteristic equation for the recurrence (6). As a single-variable polynomial equation of degree six, it has six complex solutions z_0,z_1,\ldots,z_5. These numbers z_0,z_1,\ldots,z_5 are precisely those numbers which appear in the sequence formula (2) and the Vandermonde decomposition (5).

Finally, we need to compute the coefficients d_0,\ldots,d_5. But this is easy. Observe that the formula (2) provides the following system of linear equations for d_0,\ldots,d_5:

(10)   \[\begin{bmatrix}f_0 \\ f_1 \\ \vdots \\ f_{n-1}\end{bmatrix} = \begin{bmatrix}1 & 1 & \cdots & 1 \\ z_0 & z_1 & \cdots & z_5 \\ \vdots & \vdots & \ddots & \vdots \\ z_0^{n-1} & z_1^{n-1} & \cdots & z_5^{n-1}\end{bmatrix} \begin{bmatrix}d_0 \\ d_1 \\ \vdots \\ d_5\end{bmatrix}. \]

Again, this system of equations will have a unique solution if the measurements f_0,\ldots,f_{n-1} are uncorrupted by errors (and can be solved in the least squares sense if corrupted). This gives d_0,\ldots,d_5, completing our goal of computing the parameters in the formula (2) or, equivalently, finding the Vandermonde decomposition (5).

We have accomplished our goal of computing the Vandermonde decomposition. The approach by which we did so is known as Prony’s method, as mentioned in the introduction to this section. As suggested, this method may not always give high-accuracy results. There are two obvious culprits for why this is the case. Prony’s method requires solving for the roots of the polynomial equation (9) expressed “in the monomial basis” and solving a system of linear equations (10) with a (transposed) Vandermonde matrix. Both of these problems can be notoriously ill-conditioned and thus challenging to solve accurately, and they may require the measurements f_0,\ldots,f_{n-1} to be taken to very high accuracy. Notwithstanding this, Prony’s method does produce useful results in some cases and forms the basis for potentially more accurate methods, such as those involving generalized eigenvalue problems.
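Here is a rough sketch of the whole procedure in NumPy/SciPy; the function name `prony` and the use of least squares throughout are my own choices, and no attempt is made to address the conditioning issues just mentioned:

```python
import numpy as np
from scipy.linalg import hankel

def prony(f, k=6):
    """Recover z_0,...,z_{k-1} and d_0,...,d_{k-1} from samples f_0,...,f_{n-1}."""
    f = np.asarray(f)
    n = len(f)
    # Step 1: solve the rectangular Hankel system (8) for the recurrence coefficients c (least squares).
    F = hankel(f[:n - k], f[n - k - 1:n - 1])          # the (n-k) x k Hankel matrix from (7)
    c, *_ = np.linalg.lstsq(F, f[k:], rcond=None)
    # Step 2: roots of the characteristic polynomial z^k - c_{k-1} z^{k-1} - ... - c_0, as in (9).
    z = np.roots(np.concatenate(([1.0], -c[::-1])))
    # Step 3: solve the transposed Vandermonde system (10) for the coefficients d (least squares).
    V = z[None, :] ** np.arange(n)[:, None]            # n x k matrix with entries z_l^j
    d, *_ = np.linalg.lstsq(V, f, rcond=None)
    return z, d
```

Applied to noiseless samples generated as in the earlier snippet, `prony(f)` should recover the z’s and d’s up to ordering and roundoff.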

Separating Signals: Extending the Vandermonde Decomposition to Tensors

In our discussion of the frequency identification problem, the Vandermonde decomposition (5) has effectively been an equivalent way of showing the samples f_j are a combination of exponentials z^j. So far, the benefits of the matrix factorization perspective have yet to really reveal themselves.

So what are the benefits of the Vandermonde decomposition? A couple of nice observations related to the Vandermonde decomposition and the “Hankelization” of the signals H have already been lurking in the background. For instance, the rank of the Hankel matrix H is the number of frequency components z_k needed to describe the samples, and the representation of the samples as a mixture of exponentials is uniquely determined only if the matrix H does not have full rank; I have a little more to say about this at the very end. There are also benefits to certain computational problems; one can use Vandermonde decompositions to compute super high accuracy singular value decompositions of Hankel matrices.

The power of the Vandermonde decomposition really starts to shine when we go beyond the basic frequency finding problem we discussed by introducing more signals. Suppose now there are three short recordings f^{(1)}(\cdot), f^{(2)}(\cdot), and f^{(3)}(\cdot). (Here, the superscript denotes an index rather than differentiation.) Each signal is a weighted mixture of three sources s^{(1)}(\cdot), s^{(2)}(\cdot), and s^{(3)}(\cdot), each of which plays a musical chord of three notes (thus representable as a sum of cosines as in (1)). One can think of the sources as being produced by three different musical instruments at different places in a room and the recordings f^{(1)}(\cdot), f^{(2)}(\cdot), and f^{(3)}(\cdot) as being taken by different microphones in the room.6This scenario of instruments and microphones ignores the finite propagation speed of sound, which would also introduce time delays of the sources in the recorded signals. We effectively treat the speed of sound as being instantaneous. Our goal is now not just to identify the musical notes in the recordings but also to identify how to assign those notes to reconstruct the source signals s^{(1)}(\cdot), s^{(2)}(\cdot), and s^{(3)}(\cdot).

Taking inspiration from earlier, we record samples f_0^{(\ell)},\ldots,f_{n-1}^{(\ell)} for each recording \ell = 1,2,3 and form each collection of samples into a Hankel matrix

    \[H^{(\ell)} = \begin{bmatrix} f_0^{(\ell)} & f_1^{(\ell)} & f_2^{(\ell)} & \cdots & f_{(n-1)/2}^{(\ell)} \\f_1^{(\ell)} & f_2^{(\ell)} & f_3^{(\ell)} & \cdots & f_{(n-1)/2+1}^{(\ell)} \\f_2^{(\ell)} & f_3^{(\ell)} & f_4^{(\ell)} & \cdots & f_{(n-1)/2+2}^{(\ell)} \\\vdots & \vdots & \vdots & \ddots & \vdots \\f_{(n-1)/2}^{(\ell)} & f_{(n-1)/2+1}^{(\ell)} & f_{(n-1)/2+2}^{(\ell)} & \cdots & f_{n-1}^{(\ell)} \end{bmatrix}.\]

Here comes the crazy part: Stack the Hankelized recordings H^{(1)}, H^{(2)}, and H^{(3)} as slices of a tensor \mathcal{H}. A tensor, in this context, just means a multidimensional array of numbers. Just as a vector is a one-dimensional array and a matrix is a two-dimensional array, a tensor could have any number of dimensions. In our case, we need just three. If we use a MATLAB-esque indexing notation, \mathcal{H} is a (n+1)/2\times (n+1)/2 \times 3 array given by

    \[\mathcal{H}(:,:,\ell) = H^{(\ell)} \quad \textnormal{for} \quad \ell=1,2,3.\]

The remarkable thing is that the source signals can be determined (under appropriate conditions) by computing a special kind of Vandermonde decomposition of the tensor \mathcal{H}! (Specifically, the required decomposition is a Vandermonde-structured (L_r,L_r,1)-block term decomposition of the tensor \mathcal{H}.) Even more cool, this decomposition can be computed using general-purpose software like Tensorlab.
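As a concrete illustration of just the Hankelization-and-stacking step (the block term decomposition itself is left to specialized software like Tensorlab and is not shown), here is a small sketch assuming the three sampled recordings are stored as the rows of a NumPy array:

```python
import numpy as np
from scipy.linalg import hankel

def hankelize_recordings(recordings):
    """Stack the Hankelizations H^(1), H^(2), H^(3) of the rows of `recordings` into a tensor."""
    n = recordings.shape[1]                  # n samples per recording, assumed odd
    m = (n + 1) // 2
    return np.stack([hankel(f[:m], f[m - 1:]) for f in recordings], axis=-1)   # shape (m, m, 3)
```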

If this sounds interesting, I would encourage you to check out my recently published paper (L_r,L_r,1)-decompositions, sparse component analysis, and the blind separation of sums of exponentials, joint work with Nithin Govindajaran and Lieven De Lathauwer, which appeared in the SIAM Journal on Matrix Analysis and Applications. In the paper, we explain what this (L_r,L_r,1)-decomposition is and how applying it to \mathcal{H} can be used to separate mixtures of exponential signals from the resulting Vandermonde structure, an idea originating in the work of De Lathauwer. A very important question for these signal separation problems is that of uniqueness. Given the three sampled recordings (comprising the tensor \mathcal{H}), is there just one way of unscrambling the mixtures into different sources or multiple? If there are multiple, then we might have possibly computed the wrong one. If there is just a single unscrambling, though, then we’ve done our job and unmixed the scrambled signals. The uniqueness of these tensor decompositions is fairly complicated math, and we survey existing results and prove new ones in this paper.7One of our main technical contributions is a new notion of uniqueness of (L_r,L_r,1)-decompositions which we believe is nicely adapted to the signal separation context. Specifically, we prove mathematized versions of the statement “if the source signals are sufficiently different from each other and the measurements are of sufficiently high quality, then the signals can uniquely be separated”.

Conclusions, Loose Ends, and Extensions

The central idea that we’ve been discussing is how it can be useful to convert between a sequence of observations f_0,f_1,\ldots,f_{n-1} and a special matricization of this sequence into a Hankel matrix (either square, as in (3), or rectangular, as in (7)). By manipulating the Hankel matrix, say, by computing its Vandermonde decomposition (5), we learn something about the original signal, namely a representation of the form (2).

This is a powerful idea which appears implicitly or explicitly throughout various subfields of mathematics, engineering, and computation. As with many other useful ideas, this paradigm admits many natural generalizations and extensions. We saw one already in this post, where we extended the Vandermonde decomposition to the realm of tensors to solve signal separation problems. To end this post, I’ll place a few breadcrumbs at the beginning of a few of the trails of these generalizations for any curious to learn more, wrapping up a few loose ends on the way.

Is the Vandermonde Decomposition Unique?
A natural question is whether the Vandermonde decomposition (5) is unique. That is, is it possible that there exist two Vandermonde decompositions

    \[H = V^\top DV = \tilde{V}^\top \tilde{D} \tilde{V}\]

of the same (square) Hankel matrix H? This is equivalent to whether the frequency components z_0,z_1,\ldots can be uniquely determined from the measurements f_0,f_1,\ldots,f_{n-1}.

Fortunately, the Vandermonde decomposition is unique if (and only if) the matrix H is a rank-deficient matrix. Let’s unpack this a little bit. (For those who could use a refresher on rank, I have a blog post on precisely this topic.) Note that the Vandermonde decomposition is a rank factorization8Rank factorizations are sometimes referred to as “rank-revealing factorizations”. I discuss my dispreference for this term in my blog post on low-rank matrices. since V has \rank H rows, V has full (row) rank, and D is invertible. This means that if we take enough samples f_0,\ldots,f_{n-1} of a function f(\cdot) which is a (finite) combination of exponentials, the matrix H will be rank-deficient and the Vandermonde decomposition unique.9The uniqueness of the Vandermonde decomposition can be proven by showing that, in our construction by Prony’s method, the c‘s, z‘s, and d‘s are all uniquely determined. If too few samples are taken, then H does not contain enough information to determine the frequency components of the signal f(\cdot) and thus the Vandermonde decomposition is non-unique.

Does Every Hankel Matrix Have a Vandermonde Decomposition?
This post has exclusively focused on a situation where we are provided with a sequence we know to be representable as a mixture of exponentials (i.e., taking the form (2)), from which the existence of the Vandermonde decomposition (5) follows immediately. What if we didn’t know this were the case, and we were just given a (square) Hankel matrix H? Is H guaranteed to possess a Vandermonde decomposition of the form (5)?

Unfortunately, the answer is no; there exist Hankel matrices which do not possess a Vandermonde decomposition. The issue is related to the fact that the appropriate characteristic equation (analogous to (9)) might possess repeated roots, making the solutions to the recurrence (6) not just take the form z^j but also jz^j and perhaps j^2z^j, j^3z^j, etc.

Are There Cases When the Vandermonde Decomposition is Guaranteed To Exist?
There is one natural case when a (square) Hankel matrix is guaranteed to possess a Vandermonde decomposition, namely when the matrix is nonsingular/invertible/full-rank. Despite this being a widely circulated fact, I am unaware of a simple proof for why this is the case. Unfortunately, there is not just one but infinitely many Vandermonde decompositions for a nonsingular Hankel matrix, suggesting these decompositions are not useful for the frequency finding problem that motivated this post.
What If My Hankel Matrix Does Not Possess a Vandermonde Decomposition?
As discussed above, a Hankel matrix may fail to have a Vandermonde decomposition if the characteristic equation (a la (9)) has repeated roots. This is very much analogous to the case of a non-diagonalizable matrix for which the characteristic polynomial has repeated roots. In this case, while diagonalization is not possible, one can “almost-diagonalize” the matrix by reducing it to its Jordan normal form. In total analogy, every Hankel matrix can be “almost Vandermonde decomposed” into a confluent Vandermonde decomposition (a discovery that appears to have been made independently several times). I will leave these links to discuss the exact nature of this decomposition, though I warn any potential reader that these resources introduce the decomposition first for Hankel matrices with infinitely many rows and columns before considering the finite case as we have. One is warned that while the Vandermonde decomposition is always a rank decomposition, the confluent Vandermonde decomposition is not guaranteed to be one.10Rather, the confluent Vandermonde decomposition is a rank decomposition for an infinite extension of a finite Hankel matrix. Consider the Hankel matrix H = \begin{bmatrix} 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & \cdots & 0 & 1 \end{bmatrix}.This matrix has rank-two but no rank-two confluent Vandermonde decomposition. The issue is that when extended to an infinite Hankel matrix \begin{bmatrix} 1 & \cdots & 0 & 0 & &\cdots & 1\\ \vdots & \ddots & \vdots &\vdots\\ 0 & \cdots & 0 & 0 & 1\\ 0 & \cdots & 0 & 1 \\ & & 1 & & \ddots \\ \vdots\\ 1\end{bmatrix}, this (infinite!) matrix has a rank exceeding the size of the original Hankel matrix H.
The Toeplitz Vandermonde Decomposition
Just as it proved useful to arrange samples f_0,\ldots, f_{n-1} into a Hankel matrix, it can also be useful to form them into a Toeplitz matrix

    \[T = \begin{bmatrix} f_{(n-1)/2} & f_{(n-1)/2+1} & f_{(n-1)/2+2} & \cdots & f_{n-1} \\ f_{(n-1)/2-1} & f_{(n-1)/2} & f_{(n-1)/2+1} & \cdots & f_{n-2} \\ f_{(n-1)/2-2} & f_{(n-1)/2-1} & f_{(n-1)/2} & \cdots & f_{n-3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ f_0 & f_1 & f_2 & \cdots & f_{(n-1)/2} \end{bmatrix}.\]

The Toeplitz matrix T has the appealing property that the matrix–vector product Tx computes a (discrete) convolution of the sampled signal f with the sampled signal x, which has all sorts of uses in signal processing and related fields.11I discuss Toeplitz matrices and a fast algorithm to compute the product Tx using the fast Fourier transform more in a blog post I wrote about the subject.

One can interconvert between Hankel and Toeplitz matrices by reversing the order of the rows. As such, to the extent to which Hankel matrices possess Vandermonde decompositions (with all the asterisks and fine print just discussed), Toeplitz matrices do as well but with the rows of the first factor reversed:

    \[T = \operatorname{ReversedRows}(V^\top) \cdot DV.\]
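For concreteness, here is a small self-contained check (again with synthetic parameters of my own choosing) that reversing the rows of the Hankel matrix produces the Toeplitz matrix T and that the reversed-rows formula above holds:

```python
import numpy as np
from scipy.linalg import hankel

rng = np.random.default_rng(1)
n = 41
z = np.exp(2j * np.pi * rng.uniform(size=6))
d = rng.standard_normal(6)
f = (d[:, None] * z[:, None] ** np.arange(n)).sum(axis=0)

m = (n + 1) // 2
H = hankel(f[:m], f[m - 1:])
V = z[:, None] ** np.arange(m)
D = np.diag(d)

T = H[::-1, :]                                  # reverse the rows of the Hankel matrix
print(np.allclose(T, (V.T)[::-1, :] @ D @ V))   # True: T = ReversedRows(V^T) D V
```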

There is a special and important case where more is true. If a Toeplitz matrix is (Hermitian) positive semidefinite, then T always possesses a Vandermonde decomposition of the form

    \[T = V^* D V,\]

where V is a Vandermonde matrix associated with parameters z_0,z_1,\ldots,z_{k-1} which are complex numbers of absolute value one and D is a diagonal matrix with real positive entries.12The keen-eyed reader will note that V appears conjugate transposed in this formula rather than transposed as in the Hankel Vandermonde decomposition (5). This Vandermonde decomposition is unique if and only if T is rank-deficient. Positive semidefinite Toeplitz matrices are important as they occur as autocorrelation matrices which effectively describe the similarity between a signal and different shifts of itself in time. Autocorrelation matrices appear under different names in everything from signal processing to random processes to near-term quantum algorithms (a topic near and dear to my heart). A delightfully simple and linear algebraic derivation of this result is given by Yang and Xie (see Theorem 1).13Unfortunately, Yang and Xie incorrectly claim that every Toeplitz matrix possesses a rank factorization Vandermonde decomposition of the form T = V^* D V where V is a Vandermonde matrix populated with entries on the unit circle and D is a diagonal matrix of possibly *complex* entries. This claim is disproven by the example \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}. This decomposition can be generalized to infinite positive semidefinite Toeplitz matrices (appropriately defined).14Specifically, one can show that an infinite positive semidefinite Toeplitz matrix (appropriately defined) also has a “Vandermonde decomposition” (appropriately defined). This result is often known as Herglotz’s theorem and is generalized by the Bochner–Weil theorem.
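Going the other direction, it is easy to generate examples of this decomposition numerically (the parameters below are arbitrary choices of mine): pick unit-modulus z’s and positive weights d, and the product V^* D V is automatically a Hermitian positive semidefinite Toeplitz matrix.

```python
import numpy as np

rng = np.random.default_rng(4)
k, m = 3, 6
z = np.exp(2j * np.pi * rng.uniform(size=k))     # points on the unit circle
d = rng.uniform(0.5, 2.0, size=k)                # positive weights

V = z[:, None] ** np.arange(m)                   # k x m Vandermonde matrix
T = V.conj().T @ np.diag(d) @ V                  # T = V^* D V

print(np.allclose(T, T.conj().T))                # Hermitian
print(np.all(np.linalg.eigvalsh(T) > -1e-12))    # positive semidefinite
print(np.allclose(T[1:, 1:], T[:-1, :-1]))       # constant diagonals, i.e., Toeplitz
```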

The Elegant Geometry of Generalized Eigenvalue Perturbation Theory

In this post, I want to discuss a beautiful and simple geometric picture of the perturbation theory of definite generalized eigenvalue problems. As a culmination, we’ll see a taste of the beautiful perturbation theory of Mathias and Li, which appears not to be widely known by virtue of only being explained in a technical report. Perturbation theory for the generalized eigenvalue problem is a bit of a niche subject, but I hope you will stick around for some elegant arguments. In addition to explaining the mathematics, I hope this post serves as an allegory for the importance of having the right way of thinking about a problem; often, the solution to a seemingly unsolvable problem becomes almost self-evident when one has the right perspective.

What is a Generalized Eigenvalue Problem?

This post is about the definite generalized eigenvalue problem, so it’s probably worth spending a few words talking about what generalized eigenvalue problems are and why you would want to solve them. Slightly simplifying some technicalities, a generalized eigenvalue problem consists of finding nonzero vectors x and a (possibly complex) number \lambda such that Ax = \lambda \, Bx.1In an unfortunate choice of naming, there is actually a completely different sense in which it makes sense to talk about generalized eigenvectors, in the context of the Jordan normal form for standard eigenvalue problems. The vector x is called an eigenvector and \lambda its eigenvalue. For our purposes, A and B will be real symmetric (or even complex Hermitian) matrices; one can also consider generalized eigenvalue problems for nonsymmetric and even non-square matrices A and B, but the symmetric case covers many applications of practical interest. The generalized eigenvalue problem is so-named because it generalizes the standard eigenvalue problem Ax = \lambda x, which is a special case of the generalized eigenvalue problem with B = I.2One can also further generalize the generalized eigenvalue problem to polynomial and nonlinear eigenvalue problems.
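For readers who want to compute with these objects, here is a minimal numerical example (the matrices are arbitrary illustrations, not anything from the text); SciPy’s `scipy.linalg.eigh` solves the symmetric-definite generalized problem Ax = \lambda Bx when passed both matrices, provided B is positive definite:

```python
import numpy as np
from scipy.linalg import eigh

A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[4.0, 1.0], [1.0, 2.0]])            # symmetric positive definite "mass" matrix

lam, X = eigh(A, B)                               # solves A x = lambda B x
print(lam)                                        # real generalized eigenvalues
print(np.allclose(A @ X, B @ X @ np.diag(lam)))   # True: verifies the eigenvalue relation
```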

Why might we want to solve a generalized eigenvalue problem? The applications are numerous (e.g., in chemistry, quantum computation, systems and control theory, etc.). My interest in perturbation theory for generalized eigenvalue problems arose in the analysis of a quantum algorithm for eigenvalue problems in chemistry, and the theory discussed in this article played a big role in that analysis. To give an application which is easier to communicate than the quantum computation application which motivated my own interest, let’s discuss an application in classical mechanics.

The Lagrangian formalism is a way of reformulating Newton’s laws of motion in a general coordinate system.3The benefits of the Lagrangian framework are far deeper than working in generalized coordinate systems, but this is beyond the scope of our discussion and mostly beyond the scope of what I am knowledgeable enough to write meaningfully about. If q denotes a vector of generalized coordinates describing our system and \dot{q} denotes q‘s time derivative, then the time evolution of a system with Lagrangian functional L(q,\dot{q}) is given by the Euler–Lagrange equations \tfrac{d}{dt} \nabla_{\dot{q}} L = \nabla_q L. If we choose q to represent the deviation of our system from equilibrium,4That is, in our generalized coordinate system, q = 0 is a (static) equilibrium configuration for which \ddot{q} = 0 whenever q = 0 and \dot{q} = 0. then our Lagrangian is well-approximated by its second-order Taylor series:

    \begin{equation*} L(q,\dot{q}) \approx L_0 + \frac{1}{2} q^\top A q - \frac{1}{2} \dot{q}^\top B\dot{q}. \end{equation*}

By the Euler–Lagrange equations, the equations of motion for small deviations from this equilibrium point are described by

    \begin{equation*} B\ddot{q} = -Aq. \end{equation*}

A fundamental set of solutions of this system of differential equations is given by \mathrm{e}^{\pm \sqrt{-\lambda} \, t} x, where \lambda and x are the generalized eigenvalues and eigenvectors of the pair (A,B).5That is, all solutions to B\ddot{q} = -Aq can be written (uniquely) as linear combinations of solutions of the form \mathrm{e}^{\pm \sqrt{-\lambda} \, t} x. In particular, if all the generalized eigenvalues are positive, then the equilibrium is stable and the square roots of the eigenvalues represent the modes of vibration. In the simplest mechanical systems, such as masses in one dimension connected by springs with the natural coordinate system, the matrix B is diagonal with diagonal entries equal to the different masses. In even slightly more complicated “freshman physics” problems, it is quite easy to find examples where, in the natural coordinate system, the matrix B is nondiagonal.6Almost always, the matrix B is positive definite. As this example shows, generalized eigenvalue problems aren’t crazy weird things since they emerge as natural descriptions of simple mechanical systems like coupled pendulums.

One reason generalized eigenvalue problems aren’t more well-known is that one can easily reduce a generalized eigenvalue problem into a standard one. If the matrix B is invertible, then the generalized eigenvalues of (A,B) are just the eigenvalues of the matrix B^{-1}A. For several reasons, this is a less-than-appealing way of reducing a generalized eigenvalue problem to a standard eigenvalue problem. A better way, appropriate when A and B are both symmetric and B is positive definite, is to reduce the generalized eigenvalue problem for (A,B) to the symmetrically reduced matrix B^{-1/2}AB^{-1/2}, which also possesses the same eigenvalues as (A,B). In particular, the matrix B^{-1/2}AB^{-1/2} remains symmetric, which shows that (A,B) has real eigenvalues by the spectral theorem. In the mechanical context, one can think of this reformulation as a change of coordinate system in which the “mass matrix” B becomes the identity matrix I.
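As a quick numerical illustration of this reduction (arbitrary example matrices, and see the caveats discussed in the next paragraph), the eigenvalues of the symmetrically reduced matrix match those of the pair:

```python
import numpy as np
from scipy.linalg import eigh

A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[4.0, 1.0], [1.0, 2.0]])     # symmetric positive definite

w, U = eigh(B)                             # eigendecomposition of B
Binvhalf = U @ np.diag(w ** -0.5) @ U.T    # B^{-1/2}
C = Binvhalf @ A @ Binvhalf                # symmetric matrix with the same eigenvalues as the pair (A, B)

print(np.allclose(eigh(C, eigvals_only=True), eigh(A, B, eigvals_only=True)))   # True
```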

There are several good reasons to not simply reduce a generalized eigenvalue problem to a standard one, and perturbation theory gives a particularly good reason. In order for us to change coordinates to change the B matrix into an identity matrix, we must first know the B matrix. If I presented you with an elaborate mechanical system which you wanted to study, you would need to perform measurements to determine the A and B matrices. But all measurements are imperfect and the entries of A and B are inevitably corrupted by measurement errors. In the presence of these measurement errors, we must give up on computing the normal modes of vibration perfectly; we must content ourselves with computing the normal modes of vibration plus-or-minus some error term which we hope is small provided our measurement errors are small. In this setting, reducing the problem to B^{-1/2}AB^{-1/2} seems less appealing, as I have to understand how the measurement errors in A and B are amplified in computing the triple product B^{-1/2}AB^{-1/2}. This also suggests that computing B^{-1/2}AB^{-1/2} may be a poor algorithmic strategy in practice: if the matrix B is ill-conditioned, there might be a great deal of error amplification in the computed product B^{-1/2}AB^{-1/2}. One might hope to devise algorithms with better numerical stability properties if we don’t reduce the matrix pair (A,B) to a single matrix. This is not to say that reducing a generalized eigenvalue problem to a standard one isn’t a useful tool—it definitely is. However, it is not something one should do reflexively. Sometimes, a generalized eigenvalue problem is best left as is and analyzed in its native form.

The rest of this post will focus on the following question: if A and B are real symmetric matrices (satisfying a definiteness condition, to be elaborated upon below), how do the eigenvalues of the pair (A+E,B+F) compare to those of (A,B), where E and F are small real symmetric perturbations? In fact, it shall be no additional toil to handle the complex Hermitian case as well while we’re at it, so we shall do so. (Recall that a Hermitian matrix A satisfies A^* = A, where (\cdot)^* is the conjugate transpose. Since the complex conjugate does not change a real number, a real Hermitian matrix is necessarily symmetric, A^* = A^\top = A.) For the remainder of this post, let A, B, E, and F be Hermitian matrices of the same size. Let \widetilde{A} := A+E and \widetilde{B} := B + F denote the perturbed matrices.

Symmetric Treatment

As I mentioned at the top of this post, our mission will really be to find the right way of thinking about perturbation theory for the generalized eigenvalue problem, after which the theory will follow much more directly than if we were to take a frontal assault on the problem. As we go, we shall collect nuggets of insight, each of which I hope will follow quite naturally from the last. When we find such an insight, we shall display it on its own line.

The first insight is that we can think of the pair A and B interchangeably. If \lambda is a nonzero eigenvalue of the pair (A,B), satisfying Ax = \lambda\, Bx, then Bx = \lambda^{-1} \, Ax. That is, \lambda^{-1} is an eigenvalue of the pair (B,A). Lack of symmetry is an ugly feature in a mathematical theory, so we seek to smooth it out. After thinking a little bit, notice that we can phrase the generalized eigenvalue condition symmetrically as \beta \, Ax = \alpha \, Bx with the associated eigenvalue being given by \lambda = \alpha/\beta. This observation may seem trivial at first, but let us collect it for good measure.

Treat A and B symmetrically by writing the eigenvalue as \lambda = \alpha/\beta with \beta\, Ax = \alpha\, Bx.

Before proceeding, let’s ask a question that, in our new framing, becomes quite natural: what happens when \beta = 0? The case \beta = 0 is problematic because it leads to a division by zero in the expression \lambda = \alpha/\beta. However, if we have \alpha \ne 0, this situation still makes sense: we’ve found a vector x for which Ax \ne 0 but Bx = 0. It makes sense to still consider x an eigenvector of (A,B) with eigenvalue \alpha / 0 = \infty! Dividing by zero should justifiably make one squeamish, but it really is quite natural in this case to treat x as a genuine eigenvector with eigenvalue \infty.

Things get even worse if we find a vector x for which Ax = Bx = 0. Then, any (\alpha,\beta) can reasonably be considered an eigenvalue of (A,B) since \alpha \, Bx = 0 = \beta\, Ax. In such a case, all complex numbers are simultaneously eigenvalues of (A,B), in which case we call (A,B) singular.7More precisely, a pair (A,B) is singular if the determinant \det(tB - A) is identically zero for all t \in \mathbb{C}. For the generalized eigenvalue problem to make sense for a pair (A,B), it is natural to require that (A,B) not be singular. In fact, we shall assume an even stronger “definiteness” condition which ensures that (A,B) has only real (or infinite) eigenvalues. Let us return to this question of definiteness in a moment and for now assume that (A,B) is not singular and possesses real eigenvalues.

With this small aside taken care of, let us return to the main thread. By modeling eigenvalues as pairs (\alpha,\beta), we’ve traded one ugliness for another. While reformulating the eigenvalue as a pair (\alpha,\beta) treats A and B symmetrically, it also adds an additional indeterminacy, scale. For instance, if (\alpha,\beta) is an eigenvalue of (A,B), then so is (10\alpha,10\beta). Thus, it’s better to think of (\alpha,\beta) not so much as a single pair of numbers but rather as that pair together with all of its possible scalings.8Projective space provides a natural framework for studying such vectors up to scale indeterminacy. For reasons that shall hopefully become more clear as we go forward, it will be helpful to only consider all the possible positive scalings of (\alpha,\beta)—e.g., all (t\alpha,t\beta) for t > 0. Geometrically, the set of all positive scalings of a point in two-dimensional space is precisely just a ray emanating from the origin.

Represent eigenvalue pairs (\alpha,\beta) as rays emanating from the origin to account for scale ambiguity.

Now comes a standard eigenvalue trick. It’s something you would never think to do originally, but once you see it once or twice you learn to try it as a matter of habit. The trick: multiply the eigenvalue-eigenvector relation by the (conjugate) transpose of x:9For another example of the trick, try applying it to the standard eigenvalue problem Ax = \lambda x. Multiplying by x^* and rearranging gives \lambda = x^*Ax/x^*x—the eigenvalue \lambda is equal to the expression x^*Ax/x^*x, which is so important it is given a name: the Rayleigh quotient. In fact, the largest and smallest eigenvalues of A can be found by maximizing and minimizing the Rayleigh quotient.

    \begin{equation*} \beta \, Ax = \alpha \, Bx \implies \beta \, x^*Ax = \alpha \, x^*Bx \implies \frac{\alpha}{\beta} = \frac{x^*Ax}{x^*Bx}. \end{equation*}

The above equation is highly suggestive: since \alpha and \beta are only determined up to a scaling factor, it shows we can take \alpha = x^*Ax and \beta = x^*Bx. And by different scalings of the eigenvector x, we can scale x^*Ax = \alpha and x^*Bx = \beta by any positive factor we want. (This retroactively shows why it makes sense to only consider positive scalings of \alpha and \beta.10To make this point more carefully, we shall make great use of the identification between pairs (\alpha,\beta) and the pair of quadratic forms (x^*Ax,x^*Bx). Thus, even though (\alpha,\beta) and (-\alpha,-\beta) lead to equivalent eigenvalues since \alpha/\beta = (-\alpha)/(-\beta), (\alpha,\beta) and (-\alpha,-\beta) don’t necessarily both arise from a pair of quadratic forms: if (\alpha,\beta) = (x^* Ax,x^* Bx), this does not mean there exists y such that (y^* Ay,y^* By) = (-\alpha,-\beta). Therefore, we only consider (\alpha,\beta) equivalent to (t\alpha,t\beta) if t > 0.) The expression x^*Ax is so important that we give it a name: the quadratic form (associated with A and evaluated at x).

The eigenvalue pair (\alpha,\beta) can be taken equal to the pair of quadratic forms (x^*Ax,x^*Bx).

Complexifying Things

Now comes another standard mathematical trick: represent points in two-dimensional space by complex numbers. In particular, we identify the pair (\alpha,\beta) with the complex number \alpha + \mathrm{i}\beta.11Recall that we are assuming that \alpha / \beta is real, so we can pick a scaling in which both \alpha and \beta are real numbers. Assume we have done this. Similar to the previous trick, it’s not fully clear why this will pay off, but let’s note it as an insight.

Identify the pair (\alpha,\beta) with the complex number \alpha + \mathrm{i}\beta.

Now, we combine all the previous observations. The eigenvalue \lambda = x^*Ax / x^*Bx is best thought of as a pair (\alpha,\beta) which, up to scale, can be taken to be \alpha = x^*Ax and \beta = x^*Bx. But then we represent (\alpha,\beta) as the complex number

    \begin{equation*} \alpha + \mathrm{i} \beta = x^*Ax + \mathrm{i} x^*Bx = x^*(A + \mathrm{i} B) x. \end{equation*}

Let’s stop for a moment and appreciate how far we’ve come. The generalized eigenvalue problem Ax = \lambda\, Bx is associated with the expression x^*(A+\mathrm{i} B)x. If we just went straight from one to the other, this reduction would appear like some crazy stroke of inspiration: why would I ever think to write down x^*(A+\mathrm{i} B)x? However, just by following our nose, led by a desire to treat A and B symmetrically and applying a couple of standard tricks, this expression appears naturally. The expression x^*(A+\mathrm{i} B)x will be very useful to us because it is linear in A and B, and thus for the perturbed problem (\widetilde{A},\widetilde{B}) = (A+E,B+F), we have that x^*(\widetilde{A}+\mathrm{i} \widetilde{B})x = x^*(A+\mathrm{i} B)x + x^*(E+\mathrm{i} F)x: consequently, x^*(\widetilde{A}+\mathrm{i} \widetilde{B})x is a small perturbation of x^*(A+\mathrm{i} B)x. This observation will be central to what follows.

If x is the eigenvector, then the complex number \alpha + \mathrm{i} \beta is x^*(A+\mathrm{i} B)x.

Definiteness and the Crawford Number

With these insights in hand, we can now return to the point we left earlier about what it means for a generalized eigenvalue problem to be “definite”. We know that if there exists a vector x for which Ax = Bx = 0, then the problem is singular. If we multiply by x^*, we see that this means that x^*Ax = x^*Bx = 0 as well and thus x^*(A+\mathrm{i}B)x = 0. It is thus quite natural to assume the following definiteness condition:

The pair (A,B) is said to be definite if x^*(A+\mathrm{i}B)x \ne 0 for all complex nonzero vectors x.

A definite problem is guaranteed to be not singular, but the reverse is not necessarily true; one can easily find pairs (A,B) which are not definite and also not singular.12For example, consider A = B = \operatorname{diag}(1,-1). (A,B) is not definite since x^*Ax = x^*Bx = 0 for x = (1,1). However, (A,B) is not singular; the only eigenvalue of the pair (A,B) is 1 and \det(tB-A) = -(t-1)^2 is not identically zero. (Note x^*Ax = x^*Bx = 0 does not imply Ax = Bx = 0 unless A and B are both positive (or negative) semidefinite.)
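The footnote’s example is simple enough to check in a couple of lines:

```python
import numpy as np

A = B = np.diag([1.0, -1.0])
x = np.array([1.0, 1.0]) / np.sqrt(2)     # unit vector witnessing non-definiteness
print(x @ (A + 1j * B) @ x)               # 0: x^*(A + iB)x vanishes, so (A, B) is not definite
print(np.linalg.det(1.5 * B - A))         # -0.25: det(tB - A) is not identically zero, so (A, B) is not singular
```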

The “natural” symmetric condition for (A,B) to be “definite” is for x^*(A+\mathrm{i} B)x \ne 0 for all vectors x.

Since the expression x^*(A+\mathrm{i}B)x is just scaled by a positive factor by scaling the vector x, it is sufficient to check the definiteness condition x^*(A+\mathrm{i}B)x \ne 0 for only complex unit vectors x. This leads naturally to a quantitative characterization of the degree of definiteness of a pair (A,B):

The Crawford number13The name Crawford number was coined by G. W. Stewart in 1979 in recognition of Crawford’s pioneering work on the perturbation theory of the definite generalized eigenvalue problem. c(A,B) of a pair (A,B) is the minimum value of |x^*(A+\mathrm{i}B)x| = \sqrt{(x^*Ax)^2 + (x^*Bx)^2} over all complex unit vectors x.

The Crawford number naturally quantifies the degree of definiteness.14In fact, it has been shown that the Crawford number is, in a sense, the distance from a definite matrix pair (A,B) to a pair which is not simultaneously diagonalizable by congruence. A problem which has a large Crawford number (relative to a perturbation) will remain definite after perturbation, whereas the pair may become indefinite if the size of the perturbation exceeds the Crawford number. Geometrically, the Crawford number has the following interpretation: x^*(A+\mathrm{i}B)x must lie on or outside the circle of radius c(A,B) centered at 0 for all (complex) unit vectors x.

The “degree of definiteness” can be quantified by the Crawford number c(A,B) := \min_{\|x\|=1} |x^*(A+\mathrm{i}B)x|.

Now comes another step in our journey which is a bit more challenging. For a matrix C (in our case C = A+\mathrm{i}B), the set of complex numbers x^*Cx for all unit vectors x has been the subject of considerable study. In fact, this set has a name:

The field of values of a matrix C is the set W(C) := \{ x^*Cx : x\in\mathbb{C}^n, \: \|x\| = 1\}.

In particular, the Crawford number is just the absolute value of the complex number in the field of values W(A+\mathrm{i}B) closest to zero.
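A crude way to get a feel for these objects numerically is to sample the field of values with random unit vectors; the smallest sampled modulus is then an over-estimate of the Crawford number (the matrices below are arbitrary Hermitian examples, not anything from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)); A = (A + A.T) / 2              # random real symmetric A
G = rng.standard_normal((n, n)); B = G @ G.T + np.eye(n)        # positive definite B, so (A, B) is definite

X = rng.standard_normal((100_000, n)) + 1j * rng.standard_normal((100_000, n))
X /= np.linalg.norm(X, axis=1, keepdims=True)                   # random complex unit vectors

fov = np.einsum("ij,jk,ik->i", X.conj(), A + 1j * B, X)         # sampled points of W(A + iB)
print(np.abs(fov).min())                                        # over-estimate of the Crawford number c(A, B)
```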

It is a very cool and highly nontrivial fact (called the Toeplitz–Hausdorff Theorem) that the field of values is always a convex set: for every two points in the field of values, the set also contains the line segment connecting them. Thus, as a consequence, the field of values W(A+\mathrm{i}B) for a definite matrix pair (A,B) is always on “one side” of the complex plane (in the sense that there exists a line through zero which W(A+\mathrm{i}B) lies strictly on one side of15This is a consequence of the hyperplane separation theorem together with the fact that 0\notin W(A+\mathrm{i}B).).

The numbers x^*(A+iB)x for unit vectors x lie on one half of the complex plane.

The field of values W(A+\mathrm{i}B) lies outside the circle of radius c(A,B) centered at 0 and thus on one side of the complex plane.

From Eigenvalues to Eigenangles

All of this geometry is lovely, but we need some way of relating it to the eigenvalues. As we observed a while ago, each eigenvalue is best thought of as a ray emanating from the origin, owing to the fact that the pair (\alpha,\beta) can be scaled by an arbitrary positive factor. A ray is naturally associated with an angle, so it is natural to characterize an eigenvalue pair (\alpha,\beta) by the angle describing its arc.

But the angle of a ray is only defined up to additions by full rotations (2\pi radians). As such, to associate each ray with a unique angle we need to pin down this indeterminacy in some fixed way. Moreover, this indeterminacy should play nice with the field of values W(A+\mathrm{i}B) and the field of values W(\widetilde{A}+\mathrm{i}\widetilde{B}) of the perturbation. But in the last section, we saw that each of these fields of values lies (strictly) on one half of the complex plane. Thus, we can find a ray R which does not intersect either field of values!

One possible choice is to measure the angle from this ray. We shall make a slightly different choice which plays better when we treat (\alpha,\beta) as a complex number \alpha + \mathrm{i}\beta. Recall that a number \theta is an argument for \alpha + \mathrm{i}\beta if \alpha + \mathrm{i}\beta = r\mathrm{e}^{i\theta} for some real number r \ge 0. The argument is multi-valued since \theta + 2\pi n is an argument for \alpha+\mathrm{i}\beta as long as \theta is (for all integers n). However, once we exclude our ray R, we can assign each complex number \alpha+\mathrm{i}\beta not on this ray a unique argument which depends continuously on (\alpha,\beta). Denote this “branch” of the argument by \operatorname{arg}. If (\alpha,\beta) represents an eigenvalue \lambda = \alpha/\beta, we call \theta = \arg(\alpha+\mathrm{i}\beta) an eigenangle.

Represent an eigenvalue pair (\alpha,\beta) by its associated eigenangle \theta = \arg(\alpha+i\beta).

How are these eigenangles related to the eigenvalues? It’s now a trigonometry problem:

    \begin{equation*} \lambda = \frac{\alpha}{\beta} = \frac{\mbox{adjacent}}{\mbox{opposite}} = \cot \left( \operatorname{arg}(\alpha+\mathrm{i}\beta) \right). \end{equation*}

The eigenvalues are the cotangents of the eigenangles!

The eigenvalue \lambda = \alpha/\beta is the cotangent of the eigenangle \theta = \arg(\alpha+\mathrm{i}\beta).
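A quick numerical sanity check of this relationship (arbitrary matrices with B positive definite, so that every \beta_j > 0 and the chosen branch of the argument lands in (0,\pi)):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
G = rng.standard_normal((n, n)); B = G @ G.T + np.eye(n)    # positive definite, so all beta_j > 0

lam, X = eigh(A, B)                                         # generalized eigenpairs A x = lambda B x
alpha = np.sum(X * (A @ X), axis=0)                         # alpha_j = x_j^T A x_j
beta = np.sum(X * (B @ X), axis=0)                          # beta_j  = x_j^T B x_j
theta = np.angle(alpha + 1j * beta)                         # eigenangles, all in (0, pi)
print(np.allclose(lam, np.cos(theta) / np.sin(theta)))      # True: eigenvalues are cotangents of eigenangles
```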

Variational Characterization

Now comes another difficulty spike in our line of reasoning, perhaps the largest in our whole deduction. To properly motivate things, let us first review some facts about the standard Hermitian/symmetric eigenvalue problem. The big idea is that eigenvalues can be thought of as the solution to a certain optimization problem. The largest eigenvalue of a Hermitian/symmetric matrix A is given by the maximization problem

    \begin{equation*} \lambda_{\rm max}(A) = \max_{\|x\| = 1} x^*Ax. \end{equation*}

The largest eigenvalue is the maximum of the quadratic form over unit vectors x. What about the other eigenvalues? The answer is not obvious, but the famous Courant–Fischer Theorem shows that the jth largest eigenvalue \lambda_j(A) can be written as the following minimax optimization problem

    \begin{equation*} \lambda_j(A) = \min_{\dim \mathcal{X} = n-j+1} \max_{\substack{x \in \mathcal{X} \\ \|x\| = 1}} x^*Ax. \end{equation*}

The minimum is taken over all subspaces \mathcal{X} of dimension n-j+1 whereas the maximum is taken over all unit vectors x within the subspace \mathcal{X}. Symmetrically, one can also formulate the eigenvalues as a max-min optimization problem

    \begin{equation*} \lambda_j(A) = \max_{\dim \mathcal{X} = j} \min_{\substack{x \in \mathcal{X} \\ \|x\| = 1}} x^*Ax. \end{equation*}
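For a small numerical illustration of the max-min form (arbitrary symmetric matrix, assuming NumPy): the subspace spanned by the top j eigenvectors achieves the outer maximum, and on that subspace the inner minimum is exactly the jth largest eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(2)
n, j = 6, 3
A = rng.standard_normal((n, n)); A = (A + A.T) / 2

evals, evecs = np.linalg.eigh(A)                # eigenvalues in ascending order
Xj = evecs[:, -j:]                              # orthonormal basis for the top-j eigenvector subspace
# For x = Xj c with ||c|| = 1, x^T A x = c^T (Xj^T A Xj) c, so the inner minimum over the
# subspace is the smallest eigenvalue of the j x j matrix Xj^T A Xj.
inner_min = np.linalg.eigvalsh(Xj.T @ A @ Xj).min()
print(np.isclose(inner_min, evals[-j]))         # True: equals the j-th largest eigenvalue of A
```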

These variational/minimax characterizations of the eigenvalues of a Hermitian/symmetric matrix are essential to perturbation theory for Hermitian/symmetric eigenvalue problems, so it is only natural to go looking for a variational characterization of the generalized eigenvalue problem. There is one natural way of doing this that works for B positive definite: specifically, one can show that

    \begin{equation*} \lambda_j(A,B) = \min_{\dim \mathcal{X} = n-j+1} \max_{\substack{x \in \mathcal{X} \\ \|x\| = 1}} \frac{x^*Ax}{x^*Bx}. \end{equation*}

This characterization, while useful in its own right, is tricky to deal with because it is nonlinear in A and B. It also treats A and B non-symmetrically, which should set off our alarm bells that there might be a better way. Indeed, the ingenious idea, due to G. W. Stewart in 1979, is to instead provide a variational characterization of the eigenangles! Specifically, Stewart was able to show16Stewart’s original definition of the eigenangles differs from ours; we adopt the definition of Mathias and Li. The result amounts to the same thing.

(1)   \begin{align*} \theta_j &= \min_{\dim \mathcal{X} = n-j+1} \max_{\substack{x\in\mathcal{X} \\ \|x\|=1}} \arg(x^*(A+\mathrm{i}B)x), \\ \theta_j &= \max_{\dim \mathcal{X} = j} \min_{\substack{x\in\mathcal{X} \\ \|x\|=1}} \arg(x^*(A+\mathrm{i}B)x), \end{align*}

for the eigenangles \theta_1\ge \theta_2\ge\cdots\ge \theta_n.17Note that since the cotangent is decreasing on [0,\pi], this means that the eigenvalues \lambda_1 = \cot \theta_1 \le \lambda_2 = \cot \theta_2 \le \cdots are now in increasing order, in contrast to our convention from earlier in this section. This shows, in particular, that the field of values is subtended by the smallest and largest eigenangles.

The eigenangles satisfy a minimax variational characterization.

How Big is the Perturbation?

We’re tantalizingly close to our objective. The final piece in our jigsaw puzzle before we’re able to start proving perturbation theorems is to quantify the size of the perturbing matrices E and F. Based on what we’ve done so far, we see that the eigenvalues are naturally associated with the complex number x^*(A+\mathrm{i}B)x, so it is natural to characterize the size of the perturbing pair (E,F) by the distance between x^*(A+\mathrm{i}B)x and x^*(\widetilde{A}+\mathrm{i}\widetilde{B})x. But the difference between these two quantities is just

    \begin{equation*} x^*(\widetilde{A}+\mathrm{i}\widetilde{B})x - x^*(A+\mathrm{i}B)x = x^*(E+\mathrm{i}F)x. \end{equation*}

We’re naturally led to the question: how big can x^*(E+\mathrm{i}F)x be? If the vector x has a large norm, then quite large, so let’s fix x to be a unit vector. With this assumption in place, the maximum size of x^*(E+\mathrm{i}F)x is simply the distance of the farthest point in the field of values W(E+\mathrm{i}F) from zero. This quantity has a name:

The numerical radius of a matrix G (in our case G=E+\mathrm{i}F) is r(G) := \max_{\|x\|=1} |x^*Gx|.18This maximum is taken over all complex unit vectors x.

The size of the perturbation (E,F) is the numerical radius r(E+\mathrm{i}F) = \max_{\|x\|=1} |x^*(E+\mathrm{i}F)x|.

It is easy to upper-bound the numerical radius r(E+\mathrm{i}F) by more familiar quantities. For instance, one can straightforwardly show the bound r(E+\mathrm{i}F) \le \sqrt{\|E\|^2+\|F\|^2}, where \|\cdot\| is the spectral norm. We prefer to state results using the numerical radius because of its elegance: it is, in some sense, the “correct” measure of the size of the pair (E,F) in the context of this theory.
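A rough numerical sketch (arbitrary small Hermitian perturbations of my own choosing): random unit vectors give a lower estimate of the numerical radius, which can be compared against the upper bound just mentioned.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
E = 0.01 * rng.standard_normal((n, n)); E = (E + E.T) / 2
F = 0.01 * rng.standard_normal((n, n)); F = (F + F.T) / 2

X = rng.standard_normal((50_000, n)) + 1j * rng.standard_normal((50_000, n))
X /= np.linalg.norm(X, axis=1, keepdims=True)
samples = np.abs(np.einsum("ij,jk,ik->i", X.conj(), E + 1j * F, X))

print(samples.max())                                         # lower estimate of the numerical radius r(E + iF)
print(np.hypot(np.linalg.norm(E, 2), np.linalg.norm(F, 2)))  # upper bound sqrt(||E||^2 + ||F||^2)
```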

Stewart’s Perturbation Theory

Now, after many words of prelude, we finally get to our first perturbation theorem. With the work we’ve done in place, the result is practically effortless.

Let \widetilde{\theta}_1\ge \widetilde{\theta}_2\ge \cdots\ge\widetilde{\theta}_n denote the eigenangles of the perturbed pair (\widetilde{A},\widetilde{B}) and consider the jth eigenangle. Let \mathcal{X}^* be the subspace of dimension n-j+1 achieving the minimum in the first equation of the variational principle (1) for the original unperturbed pair (A,B). Then we have

(2)   \begin{equation*} \widetilde{\theta}_j = \min_{\dim \mathcal{X} = n-j+1} \max_{\substack{x\in\mathcal{X} \\ \|x\|=1}} \arg(x^*(\widetilde{A}+\mathrm{i}\widetilde{B})x) \le \max_{\substack{x\in\mathcal{X}^* \\ \|x\|=1}} \arg(x^*(A+\mathrm{i}B)x + x^*(E+\mathrm{i}F)x). \end{equation*}

This is something of a standard trick when dealing with variational problems in matrix analysis: take the solution (in this case the minimizing subspace) for the original problem and plug it in for the perturbed problem. The solution may no longer be optimal, but it at least gives an upper (or lower) bound. The complex number x^*(A+\mathrm{i}B)x must lie at least a distance c(A,B) from zero and |x^*(E+\mathrm{i}F)x| \le r(E+\mathrm{i}F). We’re truly toast if the perturbation is large enough to perturb x^*(\widetilde{A}+i\widetilde{B})x to be equal to zero, so we should assume that r(E+\mathrm{i}F) < c(A,B).

For our perturbation theory to work, we must assume r(E+\mathrm{i}F) < c(A,B).

x^*(A+\mathrm{i}B)x lies on or outside the circle centered at zero with radius c(A,B). x^*(\widetilde{A}+\mathrm{i}\widetilde{B})x might lie anywhere in a circle centered at x^*(A+\mathrm{i}B)x with radius r(E+\mathrm{i}F), so one must have r(E+\mathrm{i}F) < c(A,B) to ensure the perturbed problem is nonsingular (equivalently x^*(\widetilde{A}+\mathrm{i}\widetilde{B})x\ne 0 for every x).

Making the assumption that r(E+\mathrm{i}F) < c(A,B), bounding the right-hand side of (2) requires finding the most-counterclockwise angle necessary to subtend a circle of radius r(E+\mathrm{i}F) centered at x^*(A+\mathrm{i}B)x, which must lie at least a distance c(A,B) from the origin. The worst-case scenario is when x^*(A+\mathrm{i}B)x is exactly a distance c(A,B) from the origin, as is shown in the following diagram.

In the worst case, x^*(A+\mathrm{i}B)x lies on the circle centered at zero with radius c(A,B), which is subtended above by angle \theta_j + \sin^{-1}(r(E+iF)/c(A,B)).

Solving the geometry problem for the counterclockwise-most subtending angle in this worst-case situation, we conclude the eigenangle bound \widetilde{\theta}_j -\theta_j \le \sin^{-1}(r(E+\mathrm{i}F)/c(A,B)). An entirely analogous argument using the max-min variational principle (1) proves an identical lower bound, thus showing

(3)   \begin{equation*} \sin |\theta_j - \widetilde{\theta}_j| \le \frac{r(E+\mathrm{i}F)}{c(A,B)}. \end{equation*}

In the language of eigenvalues, we have19I’m being a little sloppy here. For a result like this to truly hold, I believe all of the perturbed and unperturbed eigenangles should all be contained in one half of the complex plane.

    \begin{equation*} |\cot^{-1}(\widetilde{\lambda}_j) - \cot^{-1}(\lambda_j)| \le \sin^{-1}\left( \frac{r(E+\mathrm{i}F)}{c(A,B)} \right). \end{equation*}

Interpreting Stewart’s Theory

After much work, we have finally proven our first generalized eigenvalue perturbation theorem. After taking a moment to celebrate, let’s ask ourselves: what does this result tell us?

Let’s start with the good. This result shows us that if the perturbation, measured by the numerical radius r(E+iF), is much smaller than the definiteness of the original problem, measured by the Crawford number c(A,B), then the eigenangles change by a small amount. What does this mean in terms of the eigenvalues? For small eigenvalues (say, less than one in magnitude), small changes in the eigenangles also lead to small changes of the eigenvalues. However, for eigenvalues large in magnitude, small changes in the eigenangle are magnified into potentially large changes in the eigenvalue. One can view this result in a positive or negative framing. On the one hand, large eigenvalues could be subject to dramatic changes by small perturbations; on the other hand, the small eigenvalues aren’t “taken down with the ship” and are much more well-behaved.

Stewart’s theory is beautiful. The variational characterization of the eigenangles (1) is a master stroke and exactly the extension one would want from the standard Hermitian/symmetric theory. From the variational characterization, the perturbation theorem follows almost effortlessly from a little trigonometry. However, Stewart’s theory has one important deficit: the Crawford number. All that Stewart’s theory tells us is that all of the eigenangles change by at most roughly “perturbation size over Crawford number”. If the Crawford number is quite small because the problem is nearly indefinite, this becomes a tough pill to swallow.

The Crawford number is in some ways essential: if the perturbation size exceeds the Crawford number, the problem can become indefinite or even singular. Thus, we have no hope of fully removing the Crawford number from our analysis. But might it be the case that some eigenangles change by much less than “perturbation size over Crawford number”? Could we possibly improve to a result of the form “the eigenangles change by roughly perturbation size over something (potentially) much less than the Crawford number”? Sun improved Stewart’s analysis in 1982, but the scourge of the Crawford number remained.20Sun’s bound does not explicitly have the Crawford number, instead using the quantity \zeta := \max_{\|x\|=1} \sqrt{|x^*(E+\mathrm{i}F)x|/|x^*(A+iB)x|} and another quantity that is hard to describe concisely. In many cases, one has nothing better to do than to bound \zeta \le r(E+\mathrm{i}F)/c(A,B), in which case the Crawford number has appeared again. The theory of Mathias and Li, published in a technical report in 2004, finally produced a bound where the Crawford number is replaced.

The Mathias–Li Insight and Reduction to Diagonal Form

Let’s go back to the Stewart theory and look for a possible improvement. Recall in the Stewart theory that we considered the point x^*(A+\mathrm{i}B)x on the complex plane. We then argued that, in the worst case, this point would lie a distance c(A,B) from the origin and then drew a circle around it with radius r(E+\mathrm{i}F). To improve on Stewart’s bound, we must somehow do something better than using the fact that |x^*(A+\mathrm{i}B)x|\ge c(A,B). The insight of the Mathias–Li theory is, in some sense, as simple as this: rather than using the fact that |x^*(A+\mathrm{i}B)x| \ge c(A,B) (as in Stewart’s analysis), use how far x^*(A+\mathrm{i}B)x actually is from zero when x is chosen to be each of the unit-norm eigenvectors of (A,B).21This insight is made more nontrivial by the fact that, in the context of generalized eigenvalue problems, it is often not convenient to choose the eigenvectors to have unit norm. As Mathias and Li note, there are often two more popular normalizations for x. If B is positive definite, one often normalizes x such that \beta = x^*Bx = 1—the eigenvectors are thus made “B-orthonormal”, generalizing the fact that the eigenvectors of a Hermitian/symmetric matrix are orthonormal. Another popular normalization is to scale x such that |\alpha+i\beta| = |x^*(A+\mathrm{i}B)x| = 1. In this way, just taking the eigenvector x to have unit norm is already a nontrivial insight.

Before going further, let us quickly make a small reduction which will simplify our lives greatly. Letting X denote a matrix whose columns are the unit-norm eigenvectors of (A,B), one can verify that X^*AX and X^*BX are diagonal matrices with entries \alpha_1,\ldots,\alpha_n and \beta_1,\ldots,\beta_n respectively. With this in mind, it can make our lives a lot easier to just do a change of variables A \mapsto X^*AX and B\mapsto X^*BX (which in turn sends E\mapsto X^*EX and F \mapsto X^*FX). The change of variables A \mapsto X^*AX is very common in linear algebra and is called a congruence transformation.

Perform a change of variables by a congruence transformation with the matrix of eigenvectors.

While this change of variables makes our lives a lot easier, we must first worry about how this change of variables might affect the size of the perturbation matrices (E,F). It turns out this change of variables is not totally benign, but it is not maximally harmful either. Specifically, the numerical radius r(E+\mathrm{i}F) can grow by as much as a factor of n.22This is because, by virtue of having unit-norm columns, the spectral norm of the X matrix is \|X\| \le \|X\|_{\rm F} \le \sqrt{n}. Further, note the following variational characterization of the numerical radius r(E+\mathrm{i}F) = \max_\theta \| (\cos \theta) E + (\sin\theta) F \|. Putting these two facts together yields r(X^*EX+\mathrm{i}\, X^*FX) \le \|X\|^2r(E+\mathrm{i}F) \le nr(E+\mathrm{i}F). This factor of n isn’t great, but it is much better than if the bound were to degrade by a factor of the condition number \|X\|\|X^{-1}\|, which can be arbitrarily large.

This change of variables may increase r(E+\mathrm{i}F) by at most a factor of n.
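A quick numerical check of this reduction (arbitrary matrices with B positive definite, assuming SciPy): congruence by the matrix of unit-norm eigenvectors renders both A and B diagonal.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)
n = 4
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
G = rng.standard_normal((n, n)); B = G @ G.T + np.eye(n)

_, X = eigh(A, B)                   # columns are eigenvectors of the pair (A, B)
X /= np.linalg.norm(X, axis=0)      # rescale each eigenvector to unit norm

def max_offdiag(M):
    return np.abs(M - np.diag(np.diag(M))).max()

print(max_offdiag(X.T @ A @ X), max_offdiag(X.T @ B @ X))   # both tiny: A and B are simultaneously diagonalized
```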

From now on, we shall tacitly assume that this change of variables has taken place, with A and B being diagonal and E and F being such that r(E+\mathrm{i}F) is at most a factor n larger than it was previously. We denote by \alpha_j and \beta_j the jth diagonal elements of A and B, which are given by \alpha_j = x_j^*Ax_j and \beta_j = x_j^*Bx_j, where x_j is the jth unit-norm eigenvector.

Mathias and Li’s Perturbation Theory

We first assume the perturbation (E,F) is smaller than the Crawford number in the sense r(E+\mathrm{i}F) < c(A,B), which is required to ensure that the perturbed problem (\widetilde{A},\widetilde{B}) does not lose definiteness. This will be the only place in this analysis where we use the Crawford number.

Draw a circle of radius r(E+\mathrm{i}F) around \alpha_j + \mathrm{i}\beta_j.

If \theta_j is the associated eigenangle, then, viewed from the origin, this circle subtends an arc of angles between \ell_j and u_j, where (using the fact that a circle of radius r whose center lies a distance d from the origin subtends a half-angle of \sin^{-1}(r/d))

    \begin{equation*} \ell_j = \theta_j - \sin^{-1}\left(\frac{r(E+\mathrm{i}F)}{|\alpha_j+\mathrm{i}\beta_j|}\right), \quad u_j = \theta_j + \sin^{-1}\left(\frac{r(E+\mathrm{i}F)}{|\alpha_j+\mathrm{i}\beta_j|}\right). \end{equation*}

It would be nice if the perturbed eigenangles \widetilde{\theta}_j were guaranteed to lie in these arcs (i.e., \ell_j \le \widetilde{\theta}_j \le u_j). Unfortunately this is not necessarily the case. If one \alpha_j + \mathrm{i}\beta_j is close to the origin, it will have a large arc which may intersect with other arcs; if this happens, we can’t guarantee that each perturbed eigenangle will remain within its individual arc. We can still say something though.

What follows is somewhat technical, so let’s start with the takeaway conclusion: for any choice of j of the lower bounds \ell_i, the perturbed eigenangle \widetilde{\theta}_j is at least as large as the smallest of them. In particular, \widetilde{\theta}_j is at least the jth largest of all the lower bounds. That is, if we rearrange the lower bounds \ell_1,\ldots,\ell_n in decreasing order \ell_1^\downarrow \ge \ell_2^\downarrow \ge \cdots \ge \ell_n^\downarrow, we have \widetilde{\theta}_j \ge \ell_j^\downarrow. An entirely analogous argument gives an upper bound, yielding

(4)   \begin{equation*} \ell_j^\downarrow \le \widetilde{\theta}_j \le u_j^\downarrow. \end{equation*}
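To see the bound (4) in action, here is a minimal numerical sketch (again my own illustration, under simplifying assumptions): A and B are taken to be real diagonal with B positive definite, so the perturbed eigenangles can be computed with scipy.linalg.eigh, and r(E+\mathrm{i}F) is approximated by maximizing over a grid of angles.

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 6
alpha = rng.standard_normal(n)               # diagonal entries of A
beta = rng.uniform(0.5, 2.0, size=n)         # diagonal entries of B (positive)
A, B = np.diag(alpha), np.diag(beta)

E = 1e-2 * rng.standard_normal((n, n)); E = (E + E.T) / 2   # small symmetric perturbations
F = 1e-2 * rng.standard_normal((n, n)); F = (F + F.T) / 2

# r(E + iF) = max_theta ||cos(theta) E + sin(theta) F||, approximated on a grid
grid = np.linspace(0, np.pi, 2000)
r = max(np.linalg.norm(np.cos(t) * E + np.sin(t) * F, 2) for t in grid)

# Arcs [l_j, u_j] around the unperturbed eigenangles theta_j = arg(alpha_j + i beta_j),
# each list sorted into decreasing order
c = alpha + 1j * beta
lower = np.sort(np.angle(c) - np.arcsin(r / np.abs(c)))[::-1]
upper = np.sort(np.angle(c) + np.arcsin(r / np.abs(c)))[::-1]

# Perturbed eigenangles arg(x_j^* ((A+E) + i(B+F)) x_j), sorted into decreasing order
_, X = eigh(A + E, B + F)
X /= np.linalg.norm(X, axis=0)               # unit-norm eigenvectors
w = np.array([x @ ((A + E) + 1j * (B + F)) @ x for x in X.T])
theta_tilde = np.sort(np.angle(w))[::-1]

# Check the two-sided bound (4)
print(np.all(lower <= theta_tilde) and np.all(theta_tilde <= upper))  # expect True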

For those interested in the derivation, read on in the following optional section:

Derivation of the Mathias–Li Bounds
Since A and B are diagonal, the eigenvectors of the pair (A,B) are just the standard basis vectors, the jth of which we will denote e_j. The trick will be to use the max-min characterization (1) with the subspace \mathcal{X} spanned by some collection of j basis vectors e_{i_1},\ldots,e_{i_j}. Churning through a couple of inequalities in quick fashion,23See pg. 17 of the Mathias and Li report. we obtain

    \begin{align*} \widetilde{\theta}_j &\ge \min_{\substack{x \in \mathcal{X} \\ \|x\| = 1}} \arg \left( x^*(A+\mathrm{i}B)x + x^*(E+\mathrm{i}F)x \right) \\ &\ge \min \left\{ \arg(y+z) : y \in \operatorname{conv} \{ \alpha_{i_1}+\mathrm{i}\beta_{i_1},\ldots,\alpha_{i_j}+\mathrm{i}\beta_{i_j} \},\ z\in W(E+\mathrm{i}F) \right\} \\ &\ge \min \left\{ \arg(y+z) : y \in \operatorname{conv} \{ \alpha_{i_1}+\mathrm{i}\beta_{i_1},\ldots,\alpha_{i_j}+\mathrm{i}\beta_{i_j} \},\ |z|\le r(E+\mathrm{i}F) \right\} \\ &\ge \min \left\{ \arg(w) : w\in\operatorname{conv} \bigcup_{k=1}^j \{ a\in\mathbb{C} : |\alpha_{i_k} + \mathrm{i}\beta_{i_k} - a| \le r(E+\mathrm{i}F) \} \right\} \\ &= \min_{k=1,\ldots,j} \ell_{i_k}. \end{align*}

Here, \operatorname{conv} denotes the convex hull. Since this holds for every set of indices i_1,\ldots,i_j, it in particular holds for the set of indices which makes \min_{k=1,\ldots,j} \ell_{i_k} the largest. Thus, \widetilde{\theta}_j \ge \ell^\downarrow_j.

How to Use Mathias–Li’s Perturbation Theory

The eigenangle perturbation bound (4) can be instantiated in a variety of ways. We briefly sketch two. The first is to bound |\alpha_j + \mathrm{i}\beta_j| from below by its minimum over all j, which then gives a bound on u^\downarrow_j (and, analogously, on \ell^\downarrow_j)

    \begin{equation*} |\alpha_j + \mathrm{i} \beta_j| \ge \min_{1\le j\le n} |\alpha_j + \mathrm{i} \beta_j| \implies u_j^\downarrow \le \theta_j + \sin^{-1} \frac{r(E+\mathrm{i}F)}{\min_{1\le j\le n} |\alpha_j + \mathrm{i} \beta_j|}. \end{equation*}

Plugging into (4) and simplifying gives

(5)   \begin{equation*} \sin \left| \widetilde{\theta}_j - \theta_j \right| \le \frac{r(E+\mathrm{i}F)}{\min_{1\le j\le n} |\alpha_j + \mathrm{i} \beta_j|}. \end{equation*}

This improves on Stewart’s bound (3) by replacing the Crawford number c(A,B) with \min_{1\le j\le n} |\alpha_j + \mathrm{i} \beta_j|; as Mathias and Li show, \min_{1\le j\le n} |\alpha_j + \mathrm{i} \beta_j| is always at least as large as c(A,B) and can be much, much larger.24Recall that Mathias and Li’s bound first requires us to do a change of variables under which A and B both become diagonal, which can increase r(E+\mathrm{i}F) by a factor of n. Thus, for an apples-to-apples comparison with Stewart’s theory where A and B are non-diagonal, (5) should be interpreted as \sin \left| \widetilde{\theta}_j - \theta_j \right| \le n\,r(E+\mathrm{i}F)/\min_{1\le j\le n} |\alpha_j + \mathrm{i} \beta_j|.
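To get a feel for how large this gap can be, here is a toy example (my own, not from Mathias and Li’s paper). Take the already-diagonal pair

    \begin{equation*} A = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad B = \begin{pmatrix} \varepsilon & 0 \\ 0 & \varepsilon \end{pmatrix} \qquad \text{for a small } \varepsilon > 0. \end{equation*}

Both \alpha_1 + \mathrm{i}\beta_1 = 1 + \mathrm{i}\varepsilon and \alpha_2 + \mathrm{i}\beta_2 = -1 + \mathrm{i}\varepsilon have modulus \sqrt{1+\varepsilon^2} \ge 1, yet the unit vector x = (1/\sqrt{2},1/\sqrt{2})^\top gives x^*(A+\mathrm{i}B)x = \mathrm{i}\varepsilon, so c(A,B) \le \varepsilon. For this pair, the bound (5) is stronger than Stewart’s bound (3) by roughly a factor of 1/\varepsilon.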

For the second instantiation of (4), we recognize that if an eigenangle \theta_j is sufficiently well-separated from the other eigenangles (relative to the size of the perturbation and \min_{1\le j\le n} |\alpha_j + \mathrm{i} \beta_j|), then we have u_j^\downarrow \le u_j and \ell_j^\downarrow \ge \ell_j. (The precise meaning of “sufficiently well-separated” requires some tedious algebra; if you’re interested, see Footnote 7 in Mathias and Li’s paper.25You may also be interested in Corollary 2.2 in this preprint by myself and coauthors.) Under this separation condition, (4) then reduces to

(6)   \begin{equation*} \sin \left| \widetilde{\theta}_j - \theta_j \right| \le \frac{r(E+\mathrm{i}F)}{|\alpha_j + \mathrm{i} \beta_j|}. \end{equation*}

This result improves on Stewart’s bound (3) by even more, since we have now replaced the Crawford number c(A,B) by |\alpha_j + \mathrm{i} \beta_j| for a sufficiently small perturbation. In fact, a result of this form is nearly as good as one could hope for.26Specifically, the condition number of the eigenangle \theta_j is |\alpha_j + \mathrm{i} \beta_j|^{-1}, so we know that for sufficiently small perturbations we have \left| \widetilde{\theta}_j - \theta_j \right| \lessapprox (\mbox{size of perturbation}) \times |\alpha_j + \mathrm{i} \beta_j|^{-1}, and |\alpha_j + \mathrm{i} \beta_j|^{-1} is the smallest number for which such a relation holds. Mathias and Li’s theory allows a statement of this form to be made rigorous for a finite-size perturbation. Again, the only small deficit is the additional factor of “n” from the change of variables to diagonal form.

The Elegant Geometry of Generalized Eigenvalue Perturbation Theory

As I said at the start of this post, what fascinates me about this generalized eigenvalue perturbation theory is its beautiful and elegant geometry. When I saw it for the first time, it felt like a magic trick: a definite generalized eigenvalue problem with real eigenvalues was transformed by sleight of hand into a geometry problem on the complex plane, with solutions involving just a little high school geometry and trigonometry. After studying the theory, I began to appreciate it for a different reason: upon closer examination, the magic trick was revealed to be a sequence of deductions, each following naturally from the last. To the pioneers of this subject—Stewart, Sun, Mathias, Li, and others—this sequence of deductions was not preordained, and their discovery of this theory doubtless required careful thought and leaps of insight. Now that the theory has been discovered, however, we get the benefit of retrospection, and we can retell a narrative in which each step follows naturally from the last. Told this way, one can almost imagine developing the theory oneself, appealing at each stage to some notion of mathematical elegance (e.g., by treating A and B symmetrically) or to a standard trick (e.g., identifying a pair (\alpha,\beta) with the complex number \alpha + \mathrm{i}\beta). Since this theory took several decades to fall into place, we should not let this storytelling exercise fool us into thinking that the prospective act of developing a new theory will be as straightforward and linear as this retelling, pruned of dead ends and halts in progress, might suggest.

That said, I do think the development of the perturbation theory of the generalized eigenvalue problem has a lesson for those of us who seek to develop mathematical theories: be guided by mathematical elegance. At several points in the development of the perturbation theory, we made great gains by treating quantities which play a symmetric role in the problem symmetrically in the theory, or by treating a pair of real numbers as a complex number and asking how to interpret that complex number. My hope is that this perturbation theory serves as a good example of how letting oneself be guided by intuition, a small array of standard tricks, and a search for elegance can lead one to conceptualize a problem in the right way, a way which leads (after a considerable amount of effort and a few lucky breaks) to a natural solution.