In this article we will look at a practically important measure of efficiency in importance sampling, the so-called effective sample size (ESS) estimate. This measure was proposed by Augustine Kong in 1992 in a technical report which until recently had been difficult to locate online. After getting in contact with the University of Chicago I am pleased that the report is now available (again):
- Augustine Kong, "A Note on Importance Sampling using Standardized Weights", Technical Report 348, Department of Statistics, University of Chicago, July 1992.
Before we discuss the usefulness of the effective sample size, let us first define the notation and context for importance sampling.
Importance sampling is one of the most generally applicable methods to sample from otherwise intractable distributions. In machine learning and statistics importance sampling is regularly used for sampling from distributions in low dimensions (say, up to maybe 20 dimensions). Since the 1950s the general idea of importance sampling has been extended to the sequential setting, and the resulting class of modern Sequential Monte Carlo (SMC) methods constitutes the state of the art among Monte Carlo methods in many important time series modeling applications.
The general idea of importance sampling is as follows. We are interested in computing an expectation
\[\mu = \mathbb{E}_{X \sim p}[h(X)] = \int h(x)\, p(x) \,\textrm{d}x,\]
where \(p\) is the target distribution and \(h\) is the integrand of interest.
If we can sample from \(p\) directly, the standard Monte Carlo estimate applies: we draw \(X_i \sim p\), \(i=1,\dots,n\), and use
\[\hat{\mu} = \frac{1}{n} \sum_{i=1}^n h(X_i).\]
In many applications we cannot directly sample from \(p\). In this case importance sampling can still be applied by sampling from a tractable proposal distribution \(q\), with \(X_i \sim q\), \(i=1,\dots,n\), and reweighting the sample using the ratio \(p(X_i)/q(X_i)\), leading to the standard importance sampling estimate
\[\hat{\mu}_{\textrm{IS}} = \frac{1}{n} \sum_{i=1}^n \frac{p(X_i)}{q(X_i)}\, h(X_i).\]
In case \(p\) is known only up to an unknown normalizing constant, the so-called self-normalized importance sampling estimate can be used. Denoting the weights by \(w(X_i) = \frac{p(X_i)}{q(X_i)}\) it is defined as
\[\bar{\mu} = \frac{\sum_{i=1}^n w(X_i)\, h(X_i)}{\sum_{i=1}^n w(X_i)}.\]
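To make this concrete, here is a minimal Python sketch of self-normalized importance sampling. The target kernel, proposal, and all names (`self_normalized_is`, `log_p_tilde`) are illustrative choices of mine, not part of the original text; the only assumption is that we can evaluate the unnormalized target log-density and sample from the proposal.

```python
import numpy as np
from scipy import stats

def self_normalized_is(log_p_tilde, q_sample, q_logpdf, h, n=10_000, rng=None):
    """Self-normalized importance sampling estimate of E_p[h(X)]."""
    rng = np.random.default_rng(rng)
    x = q_sample(rng, n)                      # X_i ~ q
    log_w = log_p_tilde(x) - q_logpdf(x)      # log w(X_i); p may be unnormalized
    w = np.exp(log_w - log_w.max())           # stabilized unnormalized weights
    w_tilde = w / w.sum()                     # normalized weights, sum to one
    return np.sum(w_tilde * h(x))             # self-normalized estimate

# Illustrative usage: Student-t target known only up to a constant, Gaussian proposal.
nu = 8.0
log_p_tilde = lambda x: -(nu + 1.0) / 2.0 * np.log1p(x**2 / nu)  # unnormalized log-density
q = stats.norm(loc=1.0, scale=4.0)
estimate = self_normalized_is(
    log_p_tilde,
    q_sample=lambda rng, n: q.rvs(size=n, random_state=rng),
    q_logpdf=q.logpdf,
    h=lambda x: x,
    n=50_000,
)
print("estimated mean:", estimate)   # the true mean of the target is 0
```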
The quality of this estimate chiefly depends on how well the proposal distribution \(q\) matches the form of \(p\). Because \(p\) is difficult to sample from, it is typically also difficult to make a precise statement about the quality of the approximation provided by \(q\).
The effective sample size solves this issue: it can be used after or during importance sampling to provide a quantitative measure of the quality of the estimated mean. Even better, the estimate is provided on a natural scale, namely the worth of our sample measured in direct samples from \(p\): if we use \(n=1000\) samples \(X_i \sim q\) and obtain an ESS of say 350, then this indicates that the quality of our estimate is about the same as if we had used 350 direct samples \(X_i \sim p\). This justifies the name effective sample size.
Since the late 1990s the effective sample size has been widely used as a reliable diagnostic in importance sampling and sequential Monte Carlo applications. Sometimes it even informs the algorithm during sampling; for example, one can continue an importance sampling method until a certain ESS has been reached. Another example is in SMC, where the ESS is often used to decide whether operations such as resampling or rejuvenation are performed.
Definition
Two alternative but equivalent definitions exist. Assume normalized weights \(w_i \geq 0\) with \(\sum_{i=1}^n w_i = 1\). Then, the original definition of the effective sample size estimate, due to Kong and popularized by Jun Liu in this paper, is
\[\hat{n}_e = \frac{n}{1 + \textrm{Var}_q(W)},\]
where \(\textrm{Var}_q(W)\) is estimated from the standardized weights \(n w_i\) (which have mean one) as \(\frac{1}{n-1} \sum_{i=1}^n (n w_i - 1)^2\). The alternative form emerged later (I did not manage to find its first use precisely), and has the form
\[\hat{n}_e = \frac{1}{\sum_{i=1}^n w_i^2}.\]
When the weights are unnormalized, we define \(\tilde{w}_i = w_i / (\sum_{j=1}^n w_j)\) and see that
\[\hat{n}_e = \frac{1}{\sum_{i=1}^n \tilde{w}_i^2} = \frac{\left(\sum_{i=1}^n w_i\right)^2}{\sum_{i=1}^n w_i^2},\]
so the estimate can be computed directly from the unnormalized weights.
As is often the case in numerical computation with probabilistic models, these quantities are typically stored in the log-domain, i.e. we store \(\log w_i\) instead of \(w_i\) and evaluate the above expressions in log-space.
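A minimal sketch of this log-space computation for the \(1/\sum_i \tilde{w}_i^2\) form, assuming SciPy's `logsumexp`; the function name and example values are mine.

```python
import numpy as np
from scipy.special import logsumexp

def ess_from_log_weights(log_w):
    """ESS = 1 / sum_i w~_i^2, computed stably from unnormalized log-weights."""
    log_w_tilde = log_w - logsumexp(log_w)                # log of normalized weights
    return float(np.exp(-logsumexp(2.0 * log_w_tilde)))   # 1 / sum_i w~_i^2

# Example: four log-weights; the result lies between 1 and n = 4.
print(ess_from_log_weights(np.array([-1.2, -0.3, -4.0, -0.7])))
```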
Example
As a simple example we set the target distribution to be a \(\textrm{StudentT}(0,\nu)\) with \(\nu=8\) degrees of freedom, and the proposal to be a Normal \(\mathcal{N}(\mu,16)\). We then visualize the ESS as a function of the shift \(\mu\) of the Normal proposal. The effective sample size should be highest when \(\mu\) coincides with the true mean (zero) and decrease as \(\mu\) moves away from it.
This is indeed what happens in the above plot and, although not shown, the variance predicted by the ESS agrees with the empirical variance over many replicates.
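The plot itself is not reproduced here, but the experiment can be sketched as follows. I assume the \(\mathcal{N}(\mu,16)\) notation denotes a variance of 16 (standard deviation 4); the grid of shifts and the sample size are my own choices.

```python
import numpy as np
from scipy import stats

def ess(w):
    """ESS = 1 / sum_i w~_i^2 for unnormalized weights w."""
    w_tilde = w / w.sum()
    return 1.0 / np.sum(w_tilde**2)

rng = np.random.default_rng(0)
n, nu = 10_000, 8.0                       # sample size and StudentT degrees of freedom
for mu in np.linspace(-10.0, 10.0, 11):   # shift of the Normal proposal
    q = stats.norm(loc=mu, scale=4.0)     # proposal N(mu, 16)
    x = q.rvs(size=n, random_state=rng)
    w = stats.t.pdf(x, df=nu) / q.pdf(x)  # importance weights p(x) / q(x)
    print(f"mu = {mu:+6.1f}   ESS = {ess(w):8.1f}")
```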
Derivation
The following derivation is from Kong's technical report; however, to make it self-contained and accessible I have fleshed out some details and give explanations inline.
We start with an expression for \(\textrm{Var}(\bar{\mu})\). This is the variance of a ratio with positive denominator; hence we can apply the multivariate delta method for ratio expressions (see appendix below) to obtain an asymptotic approximation. Following Kong's original notation we define \(W_i = w(X_i)\) and \(W=W_1\), as well as \(Z_i = h(X_i) w(X_i)\) and \(Z = Z_1\). Then we have the asymptotic delta method approximation
\begin{equation}
\textrm{Var}(\bar{\mu}) \approx \frac{1}{n} \left[ \frac{\textrm{Var}_q(Z)}{(\mathbb{E}_q W)^2} - 2\, \frac{(\mathbb{E}_q Z)\,\textrm{Cov}_q(W,Z)}{(\mathbb{E}_q W)^3} + \frac{(\mathbb{E}_q Z)^2\,\textrm{Var}_q(W)}{(\mathbb{E}_q W)^4} \right].
\label{eqn:delta1}
\end{equation}
We can simplify this somewhat intimidating expression by realizing that
\[\mathbb{E}_q W = \int \frac{p(x)}{q(x)}\, q(x) \,\textrm{d}x = \int p(x) \,\textrm{d}x = 1.\]
(For the unnormalized case the result of the derivation is the same because the ratio \(\bar{\mu}\) does not depend on the normalization constant.) Then we can simplify \((\ref{eqn:delta1})\) to
\begin{equation}
\textrm{Var}(\bar{\mu}) \approx \frac{1}{n} \left[ \textrm{Var}_q(Z) - 2\, (\mathbb{E}_q Z)\, \textrm{Cov}_q(W,Z) + (\mathbb{E}_q Z)^2\, \textrm{Var}_q(W) \right].
\label{eqn:delta2}
\end{equation}
The next step is to realize that \(\mathbb{E}_q Z = \int w(x) h(x) q(x) \,\textrm{d}x = \int \frac{p(x)}{q(x)} q(x) h(x) \,\textrm{d}x = \int h(x) p(x) \,\textrm{d}x = \mu.\) Thus \((\ref{eqn:delta2})\) further simplifies to
\[\textrm{Var}(\bar{\mu}) \approx \frac{1}{n} \Big[ \underbrace{\textrm{Var}_q(Z)}_{\textrm{(B)}} - 2\mu\, \underbrace{\textrm{Cov}_q(W,Z)}_{\textrm{(A)}} + \mu^2\, \textrm{Var}_q(W) \Big].\]
This is great progress, but we need to nibble on this expression some more. Let us consider the parts (A) and (B), in this order.
(A). To simplify this expression we leverage the definition of the covariance and then apply the known values of our special expectations. Writing \(H = h(X)\), this yields
\[\textrm{Cov}_q(W,Z) = \mathbb{E}_q[W Z] - (\mathbb{E}_q W)(\mathbb{E}_q Z) = \int \left(\frac{p(x)}{q(x)}\right)^2 h(x)\, q(x) \,\textrm{d}x - \mu = \int \frac{p(x)}{q(x)}\, h(x)\, p(x) \,\textrm{d}x - \mu = \mathbb{E}_p[W H] - \mu.\]
Note the change of measure from \(q\) to \(p\) in the last step. To break down the expectation of the product further we use the known rules about expectations, namely \(\textrm{Cov}(X,Y) = \mathbb{E}[XY] - (\mathbb{E}X)(\mathbb{E}Y)\), which leads us to
\[\textrm{Cov}_q(W,Z) = \textrm{Cov}_p(W,H) + (\mathbb{E}_p W)(\mathbb{E}_p H) - \mu = \textrm{Cov}_p(W,H) + \mu\, \mathbb{E}_p[W] - \mu.\]
(B). First we expand the variance by its definition, then simplify:
\[\textrm{Var}_q(Z) = \mathbb{E}_q[Z^2] - (\mathbb{E}_q Z)^2 = \int \left(\frac{p(x)}{q(x)}\right)^2 h(x)^2\, q(x) \,\textrm{d}x - \mu^2 = \mathbb{E}_p[W H^2] - \mu^2.\]
For approaching \(\mathbb{E}_p[W H^2]\) we need to leverage the second-order delta method (see appendix), which gives the following approximation,
\[\mathbb{E}_p[W H^2] \approx (\mathbb{E}_p W)\, \mu^2 + 2\mu\, \textrm{Cov}_p(W,H) + (\mathbb{E}_p W)\, \textrm{Var}_p(H).\]
Ok, almost done. We now leverage our work to harvest: substituting (A) and (B) into \((\ref{eqn:delta2})\) gives
\begin{equation}
\textrm{Var}(\bar{\mu}) \approx \frac{1}{n} \Big[ (\mathbb{E}_p W)\, \textrm{Var}_p(H) - \mu^2\, (\mathbb{E}_p[W] - 1) + \mu^2\, \textrm{Var}_q(W) \Big].
\label{eqn:H1}
\end{equation}
Finally, we can reduce \((\ref{eqn:H1})\) further by noting
\[\mathbb{E}_p[W] = \int \frac{p(x)}{q(x)}\, p(x) \,\textrm{d}x = \int \left(\frac{p(x)}{q(x)}\right)^2 q(x) \,\textrm{d}x = \mathbb{E}_q[W^2] = \textrm{Var}_q(W) + (\mathbb{E}_q W)^2 = 1 + \textrm{Var}_q(W).\]
For the other term we have
\[-\mu^2\, (\mathbb{E}_p[W] - 1) = -\mu^2\, \textrm{Var}_q(W),\]
which cancels against the \(\mu^2\, \textrm{Var}_q(W)\) term.
This simplifies \((\ref{eqn:H1})\) to the following satisfying expression,
\[\textrm{Var}(\bar{\mu}) \approx \frac{\textrm{Var}_p(h(X))}{n}\, \big(1 + \textrm{Var}_q(W)\big).\]
This reads as "the variance of the self-normalized importance sampling estimate is approximately equal to the variance of the simple Monte Carlo estimate times \(1 + \textrm{Var}_q(W)\)."
Therefore, when taking \(n\) samples to compute \(\bar{\mu}\) the effective sample size is estimated as
\[\hat{n}_e = \frac{n}{1 + \textrm{Var}_q(W)}.\]
Two comments:
- We can estimate \(\textrm{Var}_q(W)\) by the sample variance of the standardized importance weights (the normalized weights scaled by \(n\) so that they have mean one).
- This estimate does not depend on the integrand \(h\).
The simpler form of the ESS estimate can be obtained by estimating
\[1 + \textrm{Var}_q(W) = \mathbb{E}_q[W^2] \approx \frac{1}{n} \sum_{i=1}^n (n \tilde{w}_i)^2 = n \sum_{i=1}^n \tilde{w}_i^2,\]
where \(n \tilde{w}_i\) are the standardized weights, which yields
\[\hat{n}_e \approx \frac{n}{n \sum_{i=1}^n \tilde{w}_i^2} = \frac{1}{\sum_{i=1}^n \tilde{w}_i^2}.\]
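The following small check (my own, not from the original text) confirms numerically that the two forms coincide when the sample variance of the standardized weights is computed with a \(1/n\) normalization; with the \(1/(n-1)\) version they differ only slightly for large \(n\).

```python
import numpy as np

def ess_kong(w_tilde):
    """n / (1 + Var_q(W)), with Var_q(W) estimated from the standardized weights n*w~_i."""
    n = len(w_tilde)
    var_w = np.mean((n * w_tilde - 1.0) ** 2)   # 1/n-normalized sample variance (mean is 1)
    return n / (1.0 + var_w)

def ess_simple(w_tilde):
    """1 / sum_i w~_i^2."""
    return 1.0 / np.sum(w_tilde**2)

rng = np.random.default_rng(1)
w = rng.gamma(shape=0.5, size=1000)   # arbitrary positive weights for the check
w_tilde = w / w.sum()
print(ess_kong(w_tilde), ess_simple(w_tilde))   # agree up to floating point error
```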
Conclusion
Monte Carlo methods such as importance sampling and Markov chain Monte Carlo can fail if the proposal distribution is not suitably chosen. Therefore, we should always employ diagnostics, and for importance sampling the effective sample size diagnostic has become the standard due to its simplicity, intuitive interpretation, and robustness in practical applications.
However, the effective sample size can fail, for example when all proposal samples lie in a region where the target distribution has little probability mass. In that case the weights are approximately equal and the ESS close to optimal, failing to diagnose the mismatch between proposal and target distribution. This is, in a way, unavoidable: if we never get to see a high probability region of the target distribution, the low quality of our samples is hard to recognize.
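As an illustration of this failure mode, here is a small sketch of my own construction: a narrow uniform proposal far out in the tail of a standard normal target produces nearly equal weights and hence a near-optimal ESS, while the resulting mean estimate is far from the true value of zero.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 10_000

# Target: standard normal (true mean 0). Proposal: narrow uniform deep in the tail.
x = rng.uniform(8.0, 8.1, size=n)
log_w = stats.norm.logpdf(x) - stats.uniform.logpdf(x, loc=8.0, scale=0.1)

w = np.exp(log_w - log_w.max())       # stabilized weights
w_tilde = w / w.sum()
ess = 1.0 / np.sum(w_tilde**2)
estimate = np.sum(w_tilde * x)

print(f"ESS = {ess:.0f} of n = {n}")       # close to n, looks "healthy"
print(f"estimated mean = {estimate:.2f}")  # around 8, while the true mean is 0
```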
For another discussion of importance sampling diagnostics and an alternative derivation, see Section 9.3 in Art Owen's upcoming Monte Carlo book. Among many interesting things in that chapter, he proposes an effective sample size statistic specific to the particular integrand \(h\). For this, the weights are redefined to integrand-specific weights \(w_h(X_i)\) (see the referenced section for the exact definition), and then the usual \(1/\sum_i w_h(X_i)^2\) estimate is used. This variant is more accurate because it takes the integrand into account.
Addendum: This paper by Martino, Elvira, and Louzada takes a detailed look at variations of the effective sample size statistic.
Appendix: The Multivariate Delta Method
The delta method is a classic technique used in asymptotic statistics to obtain limiting expressions for the mean and variance of functions of random variables. It can be seen as the statistical analog of the Taylor approximation of a function.
The multivariate extension is also classic, and the following theorem can be found in many works; I picked the one given as Theorem 3.7 in DasGupta's book on asymptotic statistics (by the way, this book is a favorite of mine for its accessible presentation of many practical results in classical statistics). A more advanced and specialized book on expansions beyond the delta method is Christopher Small's book on the topic.
Delta Method for Distributions
Theorem (Multivariate Delta Method for Distributions). Suppose \(\{T_n\}\) is a sequence of \(k\)-dimensional random vectors such that
\[\sqrt{n}\, (T_n - \theta) \xrightarrow{d} \mathcal{N}(0, \Sigma(\theta)).\]
Let \(g:\mathbb{R}^k \to \mathbb{R}^m\) be once differentiable at \(\theta\) with the gradient vector \(\nabla g(\theta)\). Then
\[\sqrt{n}\, \big(g(T_n) - g(\theta)\big) \xrightarrow{d} \mathcal{N}\big(0,\, \nabla g(\theta)^T \Sigma(\theta) \nabla g(\theta)\big),\]
provided \(\nabla g(\theta)^T \Sigma(\theta) \nabla g(\theta)\) is positive definite.
This simply says that if we have a vector \(T\) of random variables and we know that \(T\) converges asymptotically to a Normal, then we can make a similar statement about the convergence of \(g(T)\).
For the effective sample size derivation we need to instantiate this theorem for a special case of \(g\), namely \(g: \mathbb{R}^2 \to \mathbb{R}\) with \(g(x,y) = \frac{x}{y}\). Let's quickly do that. We have
\[\nabla g(x,y) = \left( \frac{1}{y},\; -\frac{x}{y^2} \right)^T.\]
We further define \(X_i \sim P_X\), \(Y_i \sim P_Y\) iid, \(X=X_1\), \(Y=Y_1\), and take
\[T_n = \left( \frac{1}{n} \sum_{i=1}^n X_i,\; \frac{1}{n} \sum_{i=1}^n Y_i \right), \qquad \theta = (\mathbb{E}X, \mathbb{E}Y),\]
assuming our sequences \(\frac{1}{n} \sum_{i=1}^n X_i \to \mathbb{E}X\) and \(\frac{1}{n} \sum_{i=1}^n Y_i \to \mathbb{E}Y\). For the covariance matrix we know that the empirical average of \(n\) iid samples has a variance that shrinks as \(1/n\), that is
\[\textrm{Var}\!\left( \frac{1}{n} \sum_{i=1}^n X_i \right) = \frac{1}{n}\, \textrm{Var}(X),\]
and similarly for the covariance, so we have
\[\Sigma(\theta) = \begin{pmatrix} \textrm{Var}(X) & \textrm{Cov}(X,Y) \\ \textrm{Cov}(X,Y) & \textrm{Var}(Y) \end{pmatrix},\]
where the factor \(1/n\) is accounted for by the \(\sqrt{n}\) scaling in the theorem.
Applying the above theorem we have for the resulting one-dimensional transformed variance
\[\textrm{Var}\!\left( \frac{\frac{1}{n}\sum_{i} X_i}{\frac{1}{n}\sum_{i} Y_i} \right) \approx \frac{B(\theta)}{n}, \qquad B(\theta) = \nabla g(\theta)^T \Sigma(\theta) \nabla g(\theta) = \frac{\textrm{Var}(X)}{(\mathbb{E}Y)^2} - 2\, \frac{(\mathbb{E}X)\, \textrm{Cov}(X,Y)}{(\mathbb{E}Y)^3} + \frac{(\mathbb{E}X)^2\, \textrm{Var}(Y)}{(\mathbb{E}Y)^4}.\]
One way to interpret the quantity \(B(\theta)\) is that the limiting variance of the ratio \(X/Y\) depends both on the variances of \(X\) and of \(Y\), but crucially it depends most sensitively on \(\mathbb{E}Y\) because this quantity appears in the denominator: small values of \(Y\) have large effects on \(X/Y\).
This is an asymptotic expression which is based on the assumption that both \(X\) and \(Y\) are concentrated around the mean so that the linearization of \(g\) around the mean will incur a small error. As such, this approximation may deteriorate if the variance of \(X\) or \(Y\) is large so that the linear approximation of \(g\) deviates from the actual values of \(g\).
(For an exact expansion of the expectation of a ratio, see this 2009 note by Sean Rice.)
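A quick numerical sanity check of the ratio formula, with a jointly Gaussian pair of my own choosing (all values illustrative): the variance of \(\bar{X}/\bar{Y}\) over many replicates should be close to \(B(\theta)/n\).

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 200, 10_000                     # samples per replicate, number of replicates

mean = np.array([1.0, 4.0])               # E[X], E[Y], with E[Y] away from zero
cov = np.array([[1.0, 0.3],
                [0.3, 0.5]])              # Var(X), Cov(X,Y), Var(Y)

xy = rng.multivariate_normal(mean, cov, size=(reps, n))
xbar, ybar = xy[..., 0].mean(axis=1), xy[..., 1].mean(axis=1)

ex, ey = mean
vx, vy, cxy = cov[0, 0], cov[1, 1], cov[0, 1]
b_theta = vx / ey**2 - 2.0 * ex * cxy / ey**3 + ex**2 * vy / ey**4

print("empirical Var(Xbar/Ybar):", np.var(xbar / ybar))
print("delta method B(theta)/n: ", b_theta / n)
```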
Second-order Delta Method
The above delta method can be extended to higher order by a multivariate Taylor expansion. I give the following result without proof.
Theorem (Second-order Multivariate Delta Method). Let \(T\) be a \(k\)-dimensional random vector such that \(\mathbb{E} T = \theta\). Let \(g:\mathbb{R}^k \to \mathbb{R}\) be twice differentiable at \(\theta\) with Hessian \(H(\theta)\). Then
\[\mathbb{E}[g(T)] \approx g(\theta) + \frac{1}{2}\, \mathbb{E}\big[(T - \theta)^T H(\theta)\, (T - \theta)\big] = g(\theta) + \frac{1}{2}\, \textrm{tr}\big(H(\theta)\, \textrm{Cov}(T)\big).\]
For the proof of the effective sample size we need to apply this theorem to the function \(g(X,Y)=XY^2\), so that
\[H(\theta) = \begin{pmatrix} 0 & 2\,\mathbb{E}Y \\ 2\,\mathbb{E}Y & 2\,\mathbb{E}X \end{pmatrix}, \qquad \theta = (\mathbb{E}X, \mathbb{E}Y).\]
Then the above result gives
\[\mathbb{E}[X Y^2] \approx (\mathbb{E}X)(\mathbb{E}Y)^2 + 2\, (\mathbb{E}Y)\, \textrm{Cov}(X,Y) + (\mathbb{E}X)\, \textrm{Var}(Y).\]
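As a small check of this approximation (my own example), take a jointly Gaussian pair: the neglected remainder consists of third-order central moments, which vanish for the Gaussian, so the approximation should match the empirical value almost exactly.

```python
import numpy as np

rng = np.random.default_rng(4)
mean = np.array([2.0, 1.0])               # E[X], E[Y]
cov = np.array([[0.4, 0.1],
                [0.1, 0.2]])              # Var(X), Cov(X,Y), Var(Y)
x, y = rng.multivariate_normal(mean, cov, size=1_000_000).T

ex, ey = mean
vx, vy, cxy = cov[0, 0], cov[1, 1], cov[0, 1]
approx = ex * ey**2 + 2.0 * ey * cxy + ex * vy   # second-order delta approximation

print("empirical E[X Y^2]:", np.mean(x * y**2))
print("approximation:     ", approx)
```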