Rejection Sampling

I am delighted to share that my paper Embrace rejection: Kernel matrix approximation with randomly pivoted Cholesky has been released on arXiv, joint with Joel Tropp and Rob Webber.

On the occasion of this release, I want to talk about rejection sampling, one of the most powerful ideas in probabilistic computing. My goal is to provide an accessible and general introduction to this technique, from a more “applied math” perspective rather than a statistical one. I will conclude with some discussion of how we use it in our paper.

Motivating Application: Randomized Kaczmarz

Consider the task of solving a linear system of equations Ax = b, where A \in \real^{n\times d} is a matrix and b \in \real^n is a vector. For simplicity, we assume that this system is consistent (i.e., there exists a solution x_\star such that Ax_\star = b). The randomized Kaczmarz algorithm is a mathematically elegant algorithm for solving this problem, due to Strohmer and Vershynin. Beginning with initial solution x \gets 0, randomized Kaczmarz repeatedly performs the following steps:

  1. Sample. Generate a random row index I according to the probability distribution \prob \{I = i\} = \norm{a_i}^2 / \norm{A}_{\rm F}^2. Here, and going forward, a_i^\top denotes the ith row of A and \norm{\cdot}_{\rm F} is the Frobenius norm.
  2. Update. Set x \gets x + (b_I - a_I^\top x)\cdot a_I / \norm{a_I}^2.
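
To make the iteration concrete, here is a minimal NumPy sketch (the function name and interface are mine, not from any particular library). Note that it precomputes every squared row norm up front, which is exactly the step that becomes problematic when n is very large.

```python
import numpy as np

def randomized_kaczmarz(A, b, num_iters, seed=None):
    """Minimal sketch of randomized Kaczmarz for a consistent system Ax = b.

    This basic version precomputes every squared row norm up front, which is
    exactly the cost we will try to avoid with rejection sampling below.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    row_norms2 = np.sum(A**2, axis=1)        # ||a_i||^2 for every row
    probs = row_norms2 / row_norms2.sum()    # P{I = i} = ||a_i||^2 / ||A||_F^2
    x = np.zeros(d)
    for _ in range(num_iters):
        i = rng.choice(n, p=probs)                        # step 1: sample a row
        x += (b[i] - A[i] @ x) * A[i] / row_norms2[i]     # step 2: project onto {y : a_i^T y = b_i}
    return x
```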

Convergence theory of the randomized Kaczmarz iteration is very well-studied; see, e.g., section 4.1 of this survey for an introduction.

For this post, we will consider a more basic question: How can we perform the sampling step 1? If n and d are not very large, this is an easy problem: Simply sweep through all the rows a_i and compute their squared norms \norm{a_i}^2. Then, sampling can be done using any algorithm for sampling from a weighted list of items. But what if n is very large? Suppose, even, that it is computationally expensive to read through the matrix a single time to compute the row norms. What can we do now?

Here is one solution using rejection sampling, due to Needell, Ward, and Srebro. Suppose that we know a bound on the squared row norms of A:

(1)   \[\norm{a_i}^2 \le B \quad \text{for each } i = 1,\ldots,n.\]

For instance, if the entries of A take values in the range [-1,1], then (1) holds with B = d. To sample a row i with probability \norm{a_i}^2/\norm{A}_{\rm F}^2, we perform the following rejection sampling procedure:

  1. Propose. Draw a row index 1\le J\le n uniformly at random (i.e., J is equally likely to be any row index between 1 and n).
  2. Accept/reject. Make a random choice: With probability p_J = \norm{a_J}^2/B, accept and set I\gets J. With the remaining probability 1-p_J, reject and return to step 1.
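
Here is a minimal sketch of this accept/reject loop in NumPy, assuming the bound B from (1) is known in advance; the helper name is mine. Each proposal reads only a single row of A, so no pass over the full matrix is ever needed.

```python
import numpy as np

def sample_row_rejection(A, B, seed=None):
    """Sample a row index I with P{I = i} = ||a_i||^2 / ||A||_F^2 by rejection.

    Assumes the bound (1): ||a_i||^2 <= B for every row of A.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    while True:
        j = rng.integers(n)                        # propose a uniformly random row index
        if rng.random() < np.sum(A[j]**2) / B:     # accept with probability ||a_j||^2 / B
            return j
```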

Why does this procedure work? As an intuitive explanation, notice that each row i is equally likely to be proposed and has probability proportional to \norm{a_i}^2 of being accepted. Therefore, under this procedure, each row i has probability proportional to \norm{a_i}^2 of being selected, just as desired for randomized Kaczmarz. If this intuitive justification is unsatisfying, we’ll have a formal derivation of correctness below (in a more general context).

I find rejection sampling to be super neat. In this case, it allows us to use simple ingredients (uniform row sampling and an upper bound on the row norms) to accomplish a more complicated task (sampling according to the squared row norms). We can sample from an interesting probability distribution (the squared row norm distribution) by thinning out samples from a simple distribution. This is a powerful idea that has many applications. (And, I believe, many more applications are waiting to be discovered.)

Rejection Sampling in General

Let’s now discuss rejection sampling in a bit more generality. Say I have a list of n things, each of which has a weight w_i\ge 0. Our goal is to choose a random index I with probability proportional to w_i, i.e., \prob \{ I = i\} = w_i / \sum_{j=1}^n w_j. We will call w the target distribution. (Note that we do not require the weights w_i to be normalized to satisfy \sum_{i=1}^n w_i =1.)

Suppose that sampling from the target distribution is challenging, but we can sample from a proposal distribution \rho, i.e., we can efficiently generate random J such that \prob \{J = i\} = \rho_i / \sum_{j=1}^n \rho_j. Further, suppose that the proposal distribution dominates the target distribution in the sense that

    \[w_i \le \rho_i \quad \text{for each } i=1,\ldots,n.\]

Under this general setting, we can sample I from the target distribution using rejection sampling, similar to above:

  1. Propose. Draw a random index J from the proposal distribution.
  2. Accept/reject. Make a random choice: With probability p_J = w_J/\rho_J, accept and set I\gets J. With the remaining probability 1-p_J, reject and return to step 1.

We recognize the squared row-norm sampling above as an example of this general strategy with w_i = \norm{a_i}^2 and \rho_i = B for all i.
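
As a rough sketch, a generic version of this loop might look like the following (the interface is mine and purely for illustration). For the squared row-norm example, propose() draws a uniform random index, weight(i) returns \norm{a_i}^2, and proposal_weight(i) returns the constant B.

```python
import numpy as np

def rejection_sample(propose, weight, proposal_weight, seed=None):
    """Generic rejection sampling loop (interface is mine, for illustration).

    `propose()` draws an index from the proposal distribution; `weight(i)` and
    `proposal_weight(i)` return the unnormalized target and proposal weights,
    with weight(i) <= proposal_weight(i) assumed for every i.
    """
    rng = np.random.default_rng(seed)
    while True:
        j = propose()
        if rng.random() < weight(j) / proposal_weight(j):   # accept w.p. w_j / rho_j
            return j
```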

Let us now convince ourselves more carefully that rejection sampling works. On any given execution of the rejection sampling loop, the probability of proposing index i is \rho_i / \sum_{j=1}^n \rho_j, after which the probability of accepting i is w_i / \rho_i. Therefore, the probability of outputting i at any given loop is

    \[\prob \{\text{$i$ accepted this loop}\} = \frac{\rho_i}{\sum_{j=1}^n \rho_j} \cdot \frac{w_i}{\rho_i} = \frac{w_i}{\sum_{j=1}^n \rho_j}.\]

The probability that any index at all is accepted on that loop is thus

    \[\prob \{\text{any accepted this loop}\} = \sum_{i=1}^n \prob \{\text{$i$ accepted this loop}\} = \frac{\sum_{i=1}^n w_i}{\sum_{i=1}^n \rho_i}.\]

The rejection sampling loop accepts with fixed positive probability each execution, so it eventually accepts with 100% probability. Thus, the probability of accepting i conditional on some index being accepted is

    \[\prob \{\text{$i$ accepted this loop} \mid \text{any accepted this loop}\} = \frac{\prob \{\text{$i$ accepted this loop}\}}{\prob \{\text{any accepted this loop}\}} = \frac{w_i}{\sum_{j=1}^n w_j}.\]

Therefore, rejection sampling outputs a sample I from the target distribution w as promised.

As this analysis shows, the probability of accepting on any given execution of the rejection sampling loop is \sum_{i=1}^n w_i / \sum_{i=1}^n \rho_i, the ratio of the total mass of the target w_i to the total mass of the proposal \rho_i. Thus, rejection sampling will have a high acceptance rate if w_i \approx \rho_i and a low acceptance rate if w_i \ll \rho_i. The conclusion for practice is that rejection sampling is only computationally efficient if one has access to a good proposal distribution \rho that is both easy to sample from and close to the target, w_i \approx \rho_i.
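
As a quick sanity check of both facts, here is a small simulation with made-up weights: the empirical distribution of accepted samples should be close to w_i / \sum_j w_j, and the empirical acceptance rate should be close to \sum_i w_i / \sum_i \rho_i (here 7/16 = 0.4375).

```python
import numpy as np

# Quick sanity check with made-up weights: accepted samples should follow
# w / sum(w), and the acceptance rate should be sum(w) / sum(rho) = 7/16.
rng = np.random.default_rng(0)
w = np.array([3.0, 1.0, 0.5, 2.5])      # target weights (unnormalized)
rho = np.array([4.0, 4.0, 4.0, 4.0])    # dominating proposal weights

samples, num_proposals = [], 0
while len(samples) < 100_000:
    j = rng.choice(len(rho), p=rho / rho.sum())   # propose
    num_proposals += 1
    if rng.random() < w[j] / rho[j]:              # accept w.p. w_j / rho_j
        samples.append(j)

print(np.bincount(samples) / len(samples))   # approx w / w.sum() = [0.429, 0.143, 0.071, 0.357]
print(len(samples) / num_proposals)          # approx w.sum() / rho.sum() = 0.4375
```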

Here, we have presented rejection sampling as a strategy for sampling from a discrete set of items i=1,\ldots,n, but this is not all that rejection sampling can do. Indeed, one can also use rejection sampling to sample a real-valued parameter x \in \real or a multivariate parameter x \in \real^n from a given (unnormalized) probability density function (or, if one likes, an unnormalized probability measure).

Before moving on, let us make one final observation. A very convenient thing about rejection sampling is that we only need the unnormalized probability weights w_i and \rho_i to implement the procedure, even though the total weights \sum_{j=1}^n w_j and \sum_{j=1}^n \rho_j are necessary to define the sampling probabilities \prob \{I = i\} = w_i / \sum_{j=1}^n w_j and \prob \{J = i\} = \rho_i / \sum_{j=1}^n \rho_j. This fact is essential for many applications of rejection sampling. For instance, in the randomized Kaczmarz application, rejection sampling would be much less useful if we needed the total weight \sum_{i=1}^n \norm{a_i}^2 = \norm{A}_{\rm F}^2, as computing \norm{A}_{\rm F}^2 requires a full pass over the data.

Application: Randomly pivoted Cholesky

Another application of rejection sampling is to accelerate the randomly pivoted Cholesky algorithm, the subject of my recent paper. Here, I’ll provide an overview of the main idea with a focus on the role of rejection sampling in the procedure. See the paper for more details. Warning: Even as simplified here, this section presents a significantly more involved application of rejection sampling than we saw previously with randomized Kaczmarz!

Randomly pivoted Cholesky (RPCholesky) is a simple, but powerful algorithm for computing a low-rank approximation to a symmetric positive semidefinite matrix A, and it can be used to accelerate kernel machine learning algorithms.

Conceptually, RPCholesky works as follows. Let A^{(0)} \coloneqq A denote the initial matrix and \hat{A}^{(0)} \coloneqq 0 denote a trivial initial approximation. For j=0,1,2,\ldots,k-1, RPCholesky performs the following steps:

  1. Random sampling. Draw a random index I_{j+1} with probability \prob \{ I_{j+1} = i\} = A^{(j)}_{ii} / \sum_{i=1}^n A^{(j)}_{ii}.
  2. Update approximation. Update \hat{A}^{(j+1)} \gets \hat{A}^{(j)} + a_i^{(j)}(a_i^{(j)})^\top / A_{ii}^{(j)}, where i = I_{j+1} and a_i^{(j)} denotes the ith column of A^{(j)}.
  3. Update matrix. Update A^{(j+1)} \gets A^{(j)} - a_i^{(j)}(a_i^{(j)})^\top / A_{ii}^{(j)}, again with i = I_{j+1}.

The result of this procedure is a rank-k approximation \hat{A}^{(k)}. With an optimized implementation, RPCholesky requires only O(k^2n) operations and evaluates just (k+1)n entries of the input matrix A.
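
For concreteness, here is an unoptimized NumPy sketch of the conceptual procedure above; it forms and updates the full residual matrix, so read it as an illustration rather than as the optimized O(k^2n) implementation. The function name is mine.

```python
import numpy as np

def rpcholesky(A, k, seed=None):
    """Unoptimized sketch of RPCholesky: returns F with A_hat^{(k)} = F @ F.T.

    For clarity this version forms and updates the full residual matrix; the
    optimized method described above touches only (k+1)n entries of A.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    R = np.array(A, dtype=float)          # residual matrix A^{(j)}, starts at A
    F = np.zeros((n, k))
    for j in range(k):
        d = np.clip(np.diag(R), 0.0, None)       # residual diagonal (clip roundoff)
        i = rng.choice(n, p=d / d.sum())         # step 1: sample index ~ residual diagonal
        F[:, j] = R[:, i] / np.sqrt(R[i, i])     # step 2: add a rank-one piece to A_hat
        R -= np.outer(F[:, j], F[:, j])          # step 3: subtract it from the residual
    return F
```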

The standard RPCholesky algorithm is already fast, but there is room for improvement. The standard RPCholesky method processes the matrix A one column a_i at a time, but modern computers are faster at processing blocks of columns. Using the power of rejection sampling, we can develop faster versions of RPCholesky that take advantage of blocked computations. These blocked implementations are important: While we could handle a million-by-million matrix using the standard RPCholesky method, blocked implementations can be up to 40\times faster. In this post, I’ll describe the simplest way of using rejection sampling to implement RPCholesky.

To set the stage, let’s first establish some notation and make a few observations. Let S_j = \{I_1,\ldots,I_j\} denote the first j selected random indices. With a little matrix algebra, one can show that the approximation \hat{A}^{(j)} produced by j steps of RPCholesky obeys the following formula. (For those interested, observe that \hat{A}^{(j)} is an example of a Nyström approximation. The connection between Nyström approximation and Cholesky decomposition is explained in this previous post of mine.)

(2)   \[\hat{A}^{(j)} = A(:,S_j) A(S_j,S_j)^{-1}A(S_j,:). \]

Here, we are using MATLAB notation for indexing the matrix A. The residual matrix A^{(j)} is given by

    \[A^{(j)} = A - \hat{A}^{(j)} = A - A(:,S_j) A(S_j,S_j)^{-1}A(S_j,:).\]

Importantly for us, observe that we can evaluate the ith diagonal entry of A^{(j)} using the formula

(3)   \[A^{(j)}_{ii} = A_{ii} - A(i,S_j) A(S_j,S_j)^{-1} A(S_j,i).\]

Compared to operating on the full input matrix A, evaluating an entry A^{(j)}_{ii} is cheap, just requiring some arithmetic involving matrices and vectors of size j.
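
In code, evaluating (3) amounts to one small solve against the j-by-j submatrix A(S_j,S_j). Here is a sketch (the helper name is mine):

```python
import numpy as np

def residual_diagonal_entry(A, S, i):
    """Evaluate A^{(j)}_{ii} via formula (3); S is the list of selected indices.

    The cost involves only a j-by-j solve with j = len(S), not the full matrix.
    """
    if len(S) == 0:
        return A[i, i]
    AiS = A[i, S]                                            # row vector A(i, S_j)
    return A[i, i] - AiS @ np.linalg.solve(A[np.ix_(S, S)], AiS)
```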

Here, we see a situation similar to randomized Kaczmarz. At each step of RPCholesky, we must sample a random index I_{j+1} from a target distribution w_i = A_{ii}^{(j)}, but it is expensive to evaluate all of the w_i's. This is a natural setting for rejection sampling. As the proposal distribution, we may use the initial diagonal of A, \rho_i = A_{ii}. (One may verify that, as required, w_i = A_{ii}^{(j)} \le A_{ii} = \rho_i for all i.) This leads to a rejection sampling implementation for RPCholesky:

  1. Sample indices. For j=0,1,\ldots,k-1, sample random indices I_{j+1} from the target distribution w_i = A^{(j)}_{ii} using rejection sampling with proposal distribution \rho_i = A_{ii}. Entries A_{ii}^{(j)} are evaluated on an as-needed basis using (3).
  2. Build the approximation. Form the approximation \hat{A}^{(k)} all at once using (2).
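
Putting the pieces together, here is a minimal NumPy sketch of this rejection sampling implementation (names are mine); it is the simple all-at-once version described here, not the blocked accelerated RPCholesky from the paper. Each proposal is drawn from the fixed diagonal of A, the residual diagonal entry is evaluated on demand via (3), and the approximation is assembled at the end via (2).

```python
import numpy as np

def rpcholesky_rejection(A, k, seed=None):
    """Sketch of RPCholesky via rejection sampling (simple, unblocked version).

    Proposals are drawn from the fixed diagonal rho_i = A_ii; a proposal i is
    accepted with probability A^{(j)}_ii / A_ii, where the residual diagonal
    entry is evaluated on demand via formula (3). The approximation is then
    assembled all at once via formula (2).
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    diag = np.asarray(np.diag(A), dtype=float)
    proposal = diag / diag.sum()
    S = []
    while len(S) < k:
        i = rng.choice(n, p=proposal)                    # propose from the initial diagonal
        if S:
            AiS = A[i, S]
            resid = A[i, i] - AiS @ np.linalg.solve(A[np.ix_(S, S)], AiS)
        else:
            resid = A[i, i]
        if rng.random() < resid / A[i, i]:               # accept w.p. A^{(j)}_ii / A_ii
            S.append(i)
    C = A[:, S]                                          # columns A(:, S_k)
    A_hat = C @ np.linalg.solve(A[np.ix_(S, S)], C.T)    # A_hat = A(:,S) A(S,S)^{-1} A(S,:)
    return A_hat, S
```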

By using rejection sampling, we have gone from an algorithm which handles the matrix one column at a time to a new algorithm which processes columns of the matrix A all at once through the formula (2). In the right situations, this new algorithm can be significantly faster than the original RPCholesky method. (In its essence, this rejection sampling algorithm for RPCholesky was first proposed in this paper of mine, which uses rejection sampling to apply RPCholesky to infinite-dimensional positive definite kernel operators. The ability to handle infinite-dimensional sampling problems is another great strength of the rejection sampling approach.)

We have now seen two extreme cases: the standard RPCholesky algorithm that processes the columns of A one at a time and a rejection sampling implementation that operates on the columns of A all at once. In practice, the fastest algorithm sits between these extremes, using rejection sampling to sample column indices in moderate-size blocks (say, between 100 and 1000 columns at a time). This “Goldilocks” algorithm is the accelerated RPCholesky method that we introduce in our paper. Check it out for details!

Conclusion: Rejection Sampling, an Algorithm Design Tool

We’ve now seen two applications, randomized Kaczmarz and randomly pivoted Cholesky, where rejection sampling can be used to speed up an algorithm. In both cases, we wanted to sample from a distribution over n row or column indices, but it was expensive to perform a census to compute the probability weight of each item individually. Rejection sampling offers a different approach, where we generate samples from a cheap-to-sample proposal distribution and only evaluate the probability weights of the proposed items. In the right situations, this rejection sampling approach can lead to significant computational speedups.

Rejection sampling has been used by statistics-oriented folks for decades, but the power of this technique seems less well-known to researchers in applied math, scientific computation, and related areas. I think there are a lot more exciting ways that we can use rejection sampling to design better, faster randomized algorithms.
