I am delighted to share that my paper Randomized Kaczmarz with tail averaging, joint with Gil Goldshlager and Rob Webber, has been posted to arXiv. This paper benefited greatly from discussions with friends, colleagues, and experts, and I'd like to thank everyone who gave us feedback.
In this post, I want to provide a different and complementary perspective on the results of this paper, along with some more elementary results and derivations that didn't make it into the main paper.
The randomized Kaczmarz (RK) method is an iterative method for solving systems of linear equations $Ax = b$, where $A \in \mathbb{R}^{n \times d}$ and $b \in \mathbb{R}^n$ throughout this post. Beginning from a trivial initial solution $x_0 = 0$, the method works by repeating the following two steps for $t = 0, 1, 2, \ldots$:
- Randomly sample a row $i_t$ of $A$ with probability $\mathbb{P}\{i_t = i\} = \|a_i\|^2 / \|A\|_F^2$, where $a_i^\top$ denotes the $i$th row of $A$.
- Orthogonally project $x_t$ onto the solution space of the equation $a_{i_t}^\top x = b_{i_t}$, obtaining $x_{t+1} = x_t + \dfrac{b_{i_t} - a_{i_t}^\top x_t}{\|a_{i_t}\|^2}\, a_{i_t}$.
The main selling point of RK is that it only interacts with the matrix $A$ through row accesses, which makes the method ideal for very large problems where only a few rows of $A$ can fit in memory at a time.
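To make the procedure concrete, here is a minimal NumPy sketch of the RK loop described above. This is my own illustration; the function name and interface are not taken from the paper's code.

```python
import numpy as np

def randomized_kaczmarz(A, b, num_steps, seed=None):
    """Sketch of randomized Kaczmarz with squared-row-norm sampling."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    row_norms_sq = np.sum(A**2, axis=1)        # ||a_i||^2 for each row
    probs = row_norms_sq / row_norms_sq.sum()  # sampling probabilities
    x = np.zeros(d)                            # trivial initialization x_0 = 0
    for _ in range(num_steps):
        i = rng.choice(n, p=probs)             # sample a row index
        a_i = A[i]
        # project x onto the hyperplane {x : a_i^T x = b_i}
        x = x + (b[i] - a_i @ x) / row_norms_sq[i] * a_i
    return x
```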
When applied to a consistent system of linear equations (i.e., a system possessing a solution $x_\star$ satisfying $Ax_\star = b$), RK is geometrically convergent:
(1)   $\mathbb{E}\big[\|x_t - x_\star\|^2\big] \le \left(1 - \kappa_{\mathrm{dem}}^{-2}\right)^t \|x_\star\|^2.$
Here,

(2)   $\kappa_{\mathrm{dem}} = \dfrac{\|A\|_F}{\sigma_{\min}(A)} = \sqrt{\dfrac{\sigma_1^2 + \cdots + \sigma_d^2}{\sigma_d^2}}$

is known as the Demmel condition number, and $\sigma_1 \ge \cdots \ge \sigma_d > 0$ are the singular values of $A$. (Personally, I find it convenient to write the Demmel condition number as $\kappa_{\mathrm{dem}} = \kappa\sqrt{\operatorname{srk}(A)}$, where $\kappa = \sigma_1/\sigma_d$ is the ordinary condition number and $\operatorname{srk}(A) = \|A\|_F^2/\sigma_1^2$ is the stable rank of the matrix $A$; indeed, $\kappa\sqrt{\operatorname{srk}(A)} = \frac{\sigma_1}{\sigma_d}\cdot\frac{\|A\|_F}{\sigma_1} = \frac{\|A\|_F}{\sigma_d} = \kappa_{\mathrm{dem}}$. The stable rank is a smooth proxy for the rank or dimensionality of the matrix $A$. Using this parameter, the rate of convergence is roughly $1 - 1/(\kappa^2 \operatorname{srk}(A))$, so it takes roughly $\kappa^2 \operatorname{srk}(A)$ row accesses to reduce the error by a constant factor. Compare this to gradient descent, which requires roughly $\kappa^2 n$ row accesses.)

However, for inconsistent problems, RK does not converge at all. Indeed, since each step of RK updates the iterate so that one equation $a_{i_t}^\top x_{t+1} = b_{i_t}$ holds exactly, and no solution satisfies all the equations simultaneously, the RK iterates continue to stochastically fluctuate no matter how long the algorithm is run.

Still, while the RK iterates continue to randomly fluctuate when applied to an inconsistent system, their expected value does converge. In fact, it converges to the least-squares solution of the system (if the matrix $A$ is rank-deficient, we define $x_\star$ to be the minimum-norm least-squares solution, which can be expressed using the Moore–Penrose pseudoinverse as $x_\star = A^+ b$), defined as

$x_\star = \operatorname*{argmin}_{x \in \mathbb{R}^d} \|b - Ax\|.$
Put differently, the bias of the RK iterates, viewed as estimators of the least-squares solution $x_\star$, converges to zero, and the rate of convergence is geometric. Specifically, we have the following theorem:
Theorem 1 (Exponentially Decreasing Bias): The RK iterates have an exponentially decreasing bias
(3)   $\|\mathbb{E}[x_t] - x_\star\| \le \left(1 - \kappa_{\mathrm{dem}}^{-2}\right)^t \|x_\star\|.$
Observe that the rate of convergence (3) for the bias is twice as fast as the rate of convergence for the error in (1). This factor of two was previously observed by Gower and Richtarik in the context of consistent systems of equations.
The proof of Theorem 1 is straightforward, and we will present it at the bottom of this post. But before that, we will discuss a couple of implications. First, we develop convergent versions of the RK algorithm using tail averaging. Second, we explore what happens when we implement RK with different sampling probabilities $p_i$.
Tail Averaging
It may seem like Theorem 1 has little implication for practice. After all, just because the expected value of $x_t$ becomes closer and closer to $x_\star$, it need not be the case that $x_t$ itself is close to $x_\star$. However, we can improve the quality of the approximate solution by averaging.
There are multiple ways we could use averaging. A first idea would be to run RK multiple times, obtaining multiple independent solutions that could then be averaged together. This approach is inefficient, as each solution must be computed by its own separate run of the algorithm.
A better strategy is tail averaging. Fix a burn-in time $t_b$, chosen so that the bias $\|\mathbb{E}[x_t] - x_\star\|$ is small for $t \ge t_b$. For each $t > t_b$, $x_t$ is a nearly unbiased approximation to the least-squares solution $x_\star$. To reduce variance, we can average these estimators together:

$\bar{x} = \frac{1}{t_f - t_b} \sum_{t = t_b + 1}^{t_f} x_t.$
The estimator $\bar{x}$ is the tail-averaged randomized Kaczmarz (TARK) estimator. By Theorem 1, we know the TARK estimator has an exponentially small bias:

$\|\mathbb{E}[\bar{x}] - x_\star\| \le \left(1 - \kappa_{\mathrm{dem}}^{-2}\right)^{t_b} \|x_\star\|.$
In our paper, we also prove a bound for the mean-square error:

$\mathbb{E}\big[\|\bar{x} - x_\star\|^2\big] \lesssim \left(1 - \kappa_{\mathrm{dem}}^{-2}\right)^{2 t_b} \|x_\star\|^2 + \frac{\kappa_{\mathrm{dem}}^2}{t_f - t_b} \cdot \frac{\|b - Ax_\star\|^2}{\|A\|_F^2}.$
We see that the rate of convergence is geometric in the burn-in time $t_b$ and occurs at an algebraic, Monte Carlo, rate $1/(t_f - t_b)$ in the final time $t_f$. While the Monte Carlo rate of convergence may be unappealing, it is known that this rate of convergence is optimal for any method that accesses the problem through row–entry pairs $(a_i, b_i)$; see our paper for a new derivation of this fact.
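As an illustration of the procedure, here is a minimal sketch of TARK built on the RK sketch above; the function name and interface are mine, not taken from the paper's code.

```python
import numpy as np

def tark(A, b, burn_in, final_time, seed=None):
    """Sketch of tail-averaged randomized Kaczmarz (TARK).

    Runs RK with squared-row-norm sampling and returns the average of the
    iterates x_t for t = burn_in + 1, ..., final_time.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    row_norms_sq = np.sum(A**2, axis=1)
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(d)
    x_avg = np.zeros(d)
    for t in range(1, final_time + 1):
        i = rng.choice(n, p=probs)
        a_i = A[i]
        x = x + (b[i] - a_i @ x) / row_norms_sq[i] * a_i
        if t > burn_in:
            # running average of the post-burn-in iterates
            x_avg += (x - x_avg) / (t - burn_in)
    return x_avg
```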
Tail averaging can be an effective method for squeezing a bit more accuracy out of the RK method for least-squares problems. The figure below shows the error of different RK methods applied to a random least-squares problem, including plain RK, RK with thread averaging (RKA), and RK with underrelaxation (RKU). (The number of threads is set to 10, and we found the underrelaxation parameter schedule we use to lead to a smaller error than another popular underrelaxation schedule; see the paper's GitHub for code.) For this problem, tail-averaged randomized Kaczmarz achieves the lowest error of all of the methods considered, being 6× smaller than RKA, 22× smaller than RK, and 10⁶× smaller than RKU.
What Least-Squares Solution Are We Converging To?
A second implication of Theorem 1 comes from reanalyzing the RK algorithm when we change the sampling probabilities. Recall that the standard RK algorithm draws random row indices using squared row norm sampling:

$\mathbb{P}\{i_t = i\} = p_i^{\mathrm{RK}} = \frac{\|a_i\|^2}{\|A\|_F^2} \quad \text{for } i = 1, \ldots, n.$
We have notated the RK sampling probability for row $i$ as $p_i^{\mathrm{RK}}$.
Using the standard RK sampling procedure can sometimes be difficult. To implement it directly, we must make a full pass through the matrix to compute the sampling probabilities. (If we have an upper bound on the squared row norms, there is an alternative procedure based on rejection sampling that avoids this precomputation step.) It can be much more convenient to sample rows uniformly at random, $p_i^{\mathrm{unif}} = 1/n$.
To do the following analysis in generality, consider performing RK using general sampling probabilities $p_1, \ldots, p_n$:

$\mathbb{P}\{i_t = i\} = p_i \quad \text{for } i = 1, \ldots, n.$
What happens to the bias of the RK iterates then?
Define a diagonal matrix

$D = \operatorname{diag}\left(\frac{\sqrt{p_i}}{\|a_i\|} : i = 1, \ldots, n\right).$
One can check that the RK algorithm with non-standard sampling probabilities $p_i$ is equivalent to the standard RK algorithm run on the diagonally reweighted least-squares problem

$\operatorname*{minimize}_{x \in \mathbb{R}^d} \; \|D(b - Ax)\|.$

Indeed, standard RK applied to the pair $(DA, Db)$ samples row $i$ with probability $\|d_i a_i\|^2 / \|DA\|_F^2 = p_i$, and the projection step is unchanged by rescaling a row of the system.
In particular, applying TARK with uniform sampling probabilities $p_i^{\mathrm{unif}} = 1/n$, the tail-averaged iterates will converge to the solution of the weighted least-squares problem $\operatorname*{minimize}_x \|D(b - Ax)\|$ rather than to the original least-squares solution $x_\star$.
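To see this effect numerically, one can compare the two solutions directly. Here is a small sketch; the random test problem and the names below are my own, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d)) * rng.exponential(size=(n, 1))  # rows with uneven norms
b = rng.standard_normal(n)                                      # inconsistent right-hand side

# Ordinary least-squares solution (the target of RK with squared-row-norm sampling)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]

# Weighted least-squares solution targeted by RK with uniform sampling:
# D = diag(sqrt(p_i) / ||a_i||) with p_i = 1/n
row_norms = np.linalg.norm(A, axis=1)
D = np.diag(np.sqrt(1.0 / n) / row_norms)
x_wls = np.linalg.lstsq(D @ A, D @ b, rcond=None)[0]

print("relative difference:", np.linalg.norm(x_ls - x_wls) / np.linalg.norm(x_ls))
```

The more uneven the row norms of $A$, the more the two solutions can differ; if all rows have equal norm, $D$ is a multiple of the identity and the two solutions coincide.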
I find this very interesting. In the standard RK method, the squared row norm sampling distribution is chosen to ensure rapid convergence of the RK iterates to the solution of a consistent system of linear equations. However, for a consistent system, the RK method will always converge to the same solution no matter what row sampling strategy is chosen (as long as every non-zero row has a positive probability of being picked). In the least-squares context, however, the conclusion is very different: the choice of row sampling distribution not only affects the rate of convergence, but also which solution is being converged to!
As the plot below demonstrates, the original least-squares solution and the re-weighted least-squares solution can sometimes be quite different from each other. This plot shows the results of fitting a function with many kinks (shown as a black dashed curve) by a polynomial at equispaced points. (Note that, for this experiment, we represent the polynomial using its monomial coefficients, which has issues with numerical stability; it is better to use a representation based on Chebyshev polynomials. We use this example only to illustrate the difference between the weighted and original least-squares solutions.) We compare the unweighted least-squares solution (orange solid curve) to the weighted least-squares solution using uniform RK weights (blue dash-dotted curve). These two curves differ meaningfully, with the weighted least-squares solution having higher error at the ends of the interval but more accuracy in the middle. These differences can be explained by looking at the weights (the diagonal entries of $D$, grey dotted curve), which are lower at the ends of the interval than in the center.
Does this diagonal rescaling issue matter? Sometimes not. In many applications, the weighted and unweighted least-squares solutions will both be fine. Indeed, in the above example, neither the weighted nor the unweighted solution is the "right" one; the weighted solution is more accurate in the interior of the domain and less accurate at the boundary. However, sometimes getting the true least-squares solution matters, or the amount of reweighting done by uniform sampling is too aggressive for a given problem. In these cases, using the classical RK sampling probabilities may be necessary. Fortunately, rejection sampling can often be used to perform squared row norm sampling; see this blog post of mine for details.
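For reference, here is a minimal sketch of the rejection sampling idea, assuming a known upper bound B on the squared row norms; the function name and interface are my own.

```python
import numpy as np

def sample_row_rejection(A, B, rng):
    """Sample a row index with probability proportional to its squared norm,
    using rejection sampling with an upper bound B >= max_i ||a_i||^2.

    Only the proposed rows are touched, so no full pass over A is needed.
    """
    n = A.shape[0]
    while True:
        i = rng.integers(n)                # propose a row uniformly at random
        accept_prob = np.sum(A[i]**2) / B  # accept with probability ||a_i||^2 / B
        if rng.random() < accept_prob:
            return i
```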
Proof of Theorem 1
Let us now prove Theorem 1, the asymptotic unbiasedness of randomized Kaczmarz. We will assume throughout that $A$ is full-rank; the rank-deficient case is similar but requires a bit more care.
Begin by rewriting the update rule in the following way:

$x_{t+1} = \left(I - \frac{a_{i_t} a_{i_t}^\top}{\|a_{i_t}\|^2}\right) x_t + \frac{b_{i_t}}{\|a_{i_t}\|^2}\, a_{i_t}.$
Now, letting $\mathbb{E}_{i_t}$ denote the average over the random index $i_t$, we compute

$\mathbb{E}_{i_t}[x_{t+1}] = \sum_{i=1}^n \frac{\|a_i\|^2}{\|A\|_F^2} \left[\left(I - \frac{a_i a_i^\top}{\|a_i\|^2}\right) x_t + \frac{b_i}{\|a_i\|^2}\, a_i\right] = \left(I - \sum_{i=1}^n \frac{a_i a_i^\top}{\|A\|_F^2}\right) x_t + \sum_{i=1}^n \frac{b_i\, a_i}{\|A\|_F^2}.$
We can evaluate the sums $\sum_{i=1}^n a_i a_i^\top = A^\top A$ and $\sum_{i=1}^n b_i\, a_i = A^\top b$ directly. Therefore, we obtain

$\mathbb{E}_{i_t}[x_{t+1}] = \left(I - \frac{A^\top A}{\|A\|_F^2}\right) x_t + \frac{A^\top b}{\|A\|_F^2}.$
Thus, taking the (total) expectation of both sides, we get

$\mathbb{E}[x_{t+1}] = \left(I - \frac{A^\top A}{\|A\|_F^2}\right) \mathbb{E}[x_t] + \frac{A^\top b}{\|A\|_F^2}.$
Iterating this equation and using the initial condition $x_0 = 0$, we obtain

(4)   $\mathbb{E}[x_t] = \sum_{s=0}^{t-1} \left(I - \frac{A^\top A}{\|A\|_F^2}\right)^s \frac{A^\top b}{\|A\|_F^2}.$
The equation (4) expresses the expected RK iterate using a matrix geometric series. Recall that the infinite geometric series with a scalar ratio $z$ satisfies the formula

$\sum_{s=0}^{\infty} z^s = \frac{1}{1-z} \quad \text{for } |z| < 1.$
Substituting $z = 1 - y$, we get

$\sum_{s=0}^{\infty} (1-y)^s = \frac{1}{y}$

for any $0 < y < 2$. With a little effort, one can check that the same formula

$\sum_{s=0}^{\infty} (I - Y)^s = Y^{-1}$

holds for a square matrix $Y$ satisfying $\|I - Y\| < 1$. This condition holds for the matrix $Y = A^\top A / \|A\|_F^2$ since $A$ is full-rank (every eigenvalue of $I - Y$ lies in $[0, 1)$), yielding
(5)   $\sum_{s=0}^{\infty} \left(I - \frac{A^\top A}{\|A\|_F^2}\right)^s = \|A\|_F^2 \, (A^\top A)^{-1}.$
In anticipation of plugging this result into (4), we can break the infinite sum in (5) into two pieces, the first $t$ terms and the remaining tail:

(6)   $\sum_{s=0}^{\infty} \left(I - \frac{A^\top A}{\|A\|_F^2}\right)^s = \sum_{s=0}^{t-1} \left(I - \frac{A^\top A}{\|A\|_F^2}\right)^s + \left(I - \frac{A^\top A}{\|A\|_F^2}\right)^t \sum_{s=0}^{\infty} \left(I - \frac{A^\top A}{\|A\|_F^2}\right)^s.$
We are at the home stretch. Plugging (6) into (4) and using (5) together with the normal equations $A^\top A\, x_\star = A^\top b$, we obtain

$\mathbb{E}[x_t] = x_\star - \left(I - \frac{A^\top A}{\|A\|_F^2}\right)^t \sum_{s=0}^{\infty} \left(I - \frac{A^\top A}{\|A\|_F^2}\right)^s \frac{A^\top b}{\|A\|_F^2}.$
Rearrange to obtain

$\mathbb{E}[x_t] - x_\star = -\left(I - \frac{A^\top A}{\|A\|_F^2}\right)^t \sum_{s=0}^{\infty} \left(I - \frac{A^\top A}{\|A\|_F^2}\right)^s \frac{A^\top b}{\|A\|_F^2}.$
Now apply (5) and the normal equations again to obtain

$\mathbb{E}[x_t] - x_\star = -\left(I - \frac{A^\top A}{\|A\|_F^2}\right)^t x_\star.$
Take norms and use the submultiplicative property of the spectral norm to obtain
(7)   $\|\mathbb{E}[x_t] - x_\star\| \le \left\|I - \frac{A^\top A}{\|A\|_F^2}\right\|^t \|x_\star\|.$
To complete the proof we must evaluate the norm of the matrix $I - A^\top A / \|A\|_F^2$. Let $A = U \Sigma V^\top$ be a (thin) singular value decomposition, where $\Sigma = \operatorname{diag}(\sigma_1, \ldots, \sigma_d)$. Then

$I - \frac{A^\top A}{\|A\|_F^2} = I - \frac{V \Sigma^2 V^\top}{\|A\|_F^2} = V \left(I - \frac{\Sigma^2}{\|A\|_F^2}\right) V^\top.$
We recognize this as a matrix whose eigenvectors are the columns of $V$ and whose eigenvalues are the diagonal entries of the matrix

$I - \frac{\Sigma^2}{\|A\|_F^2} = \operatorname{diag}\left(1 - \frac{\sigma_1^2}{\|A\|_F^2}, \ldots, 1 - \frac{\sigma_d^2}{\|A\|_F^2}\right).$
All eigenvalues are nonnegative and the largest eigenvalue is

$1 - \frac{\sigma_d^2}{\|A\|_F^2} = 1 - \kappa_{\mathrm{dem}}^{-2}.$

Here, we have invoked the definition (2) of the Demmel condition number. Therefore, plugging into (7), we obtain
$\|\mathbb{E}[x_t] - x_\star\| \le \left(1 - \kappa_{\mathrm{dem}}^{-2}\right)^t \|x_\star\|,$

as promised.
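As a quick numerical sanity check on Theorem 1 (the test problem below is my own, not from the paper), one can estimate the bias empirically by averaging the iterate $x_t$ over many independent RK runs and comparing its decay against the predicted rate $(1 - \kappa_{\mathrm{dem}}^{-2})^t \|x_\star\|$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, num_runs, num_steps = 100, 5, 5000, 60
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)                         # inconsistent system
x_star = np.linalg.lstsq(A, b, rcond=None)[0]      # least-squares solution

row_norms_sq = np.sum(A**2, axis=1)
probs = row_norms_sq / row_norms_sq.sum()
kappa_dem_sq = row_norms_sq.sum() / np.linalg.svd(A, compute_uv=False)[-1] ** 2

# Estimate E[x_t] by averaging over many independent runs
mean_iterates = np.zeros((num_steps + 1, d))
for _ in range(num_runs):
    x = np.zeros(d)
    for t in range(num_steps):
        i = rng.choice(n, p=probs)
        x = x + (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
        mean_iterates[t + 1] += x / num_runs

bias = np.linalg.norm(mean_iterates - x_star, axis=1)
bound = (1 - 1 / kappa_dem_sq) ** np.arange(num_steps + 1) * np.linalg.norm(x_star)
print(np.column_stack([bias, bound])[::10])        # empirical bias vs. Theorem 1 bound
```

Note that for large $t$ the empirical estimate of the bias plateaus at the Monte Carlo noise floor set by the number of runs, so only the initial decay should be compared against the bound.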