# Approximate Bayesian Computation: Summary Statistics and Tolerance

Last time I introduced a basic example of the Approximate Bayesian Computation methodology for estimating posterior expectations in the case of unavailable likelihoods. One of the main issues I mentioned was that it could take a long time to accept proposals. This is caused by three factors.

1. The algorithm is being overly picky about its criteria for accepting proposals. In our Lucky Dip example, we saw that proposals were accepted only when the generated play record for the players exactly matched the observed record, when all we actually require is that the frequency of each possible outcome is the same between the two records. In other words, the algorithm is worrying about the order in which the outcomes occurred when it doesn’t need to.

2. The outcome, even when stripping out the extraneous information mentioned in the point above, is so unlikely that generating a matching dataset will still take a long time. This is especially true if any element of the data is continuous, because the probability of generating an exactly matching dataset will be zero.

3. It takes a long time to generate datasets due to the complexity of the model. There is little we can do about this, outside of making sure our model is no more complicated than is necessary.

We thus need to consider stripping out superfluous information, and accepting proposals whose data is merely close to the observed data, to mitigate factors 1 and 2 respectively.

1. Summary Statistics

If we flip a coin several times to decide whether it’s fair, we obviously don’t care about the order in which the heads and tails appear. We just care about how often they each turn up. We could also just look at the proportion of flips that come up heads, without recording how many flips we observed. These are both summary statistics, which take the original data and express it in a different, usually smaller, form. Formally, we take the data in the form of the random variable $X$, and calculate the summary statistic $S=S(X)$, which is also a random variable.

Some summaries are better than others. A higher number of flips increases the accuracy of the result, but our second summary above doesn’t keep a record of this. In other words, only recording the proportion of flips that come up heads loses relevant information, so we should be using a summary such as the first one, which does not.
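To make the two coin-flip summaries concrete, here is a small sketch in Python (the post itself contains no code, and the function names are mine). Each flip is represented as 1 for heads and 0 for tails:

```python
def counts(flips):
    """Summary 1: how often heads and tails each turn up.

    Keeps the number of flips, so no relevant information is lost."""
    heads = sum(flips)
    return heads, len(flips) - heads

def proportion(flips):
    """Summary 2: the proportion of heads, forgetting the sample size."""
    return sum(flips) / len(flips)

# Two records with very different evidential weight...
short, long = [1, 0, 1], [1, 0, 1] * 100
# ...are distinguished by the first summary, but not by the second.
print(counts(short), counts(long))        # (2, 1) versus (200, 100)
print(proportion(short), proportion(long))
```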

Ideal summary statistics, which don’t lose relevant information, are referred to as sufficient statistics. Formally, the posterior of the parameters given a sufficient statistic is equal to the posterior given the original data, for any possible values of the data and statistic:

$p(\theta | S=s^*=S(x^*) ) =p(\theta | X=x^*) \textrm{ for all } \theta,x^*: p(\theta,x^*) >0.$

The original data, of course, is itself a sufficient statistic. The best-case choice would be what is called a minimal sufficient statistic, which is a sufficient statistic with the smallest possible number of dimensions.

Minimal sufficient statistics are desirable, because, as the number of dimensions increases, the chance of hitting any region inside the space will generally decrease. This is known as the curse of dimensionality, and is why we often want to decrease the dimensions as much as possible.

Here’s a lazy analogy. Consider a square containing a circle that touches the sides of the square. Now say we choose a random point inside the square, with each point equally likely. The probability of the point being inside the circle is equal to the proportion of the square’s area the circle takes up, which is $\pi/4$.

Now add another dimension, so that we are choosing a point inside a cube, and seeing if it’s inside a sphere. That sphere now takes up less of the available space: the probability of choosing a point inside it is $\pi/6$. The probability decreases as the number of dimensions increases: for $q$ dimensions the probability is $\frac{\pi^{q/2} } {2^q\Gamma(q/2+1) }$. Obviously, we’re seldom trying to hit a region so large compared to the entire space, but hopefully you get the idea that more dimensions means less chance of success.
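That formula is easy to check numerically. A quick sketch in Python (the posts themselves contain no code):

```python
import math

def ball_fraction(q):
    """Probability that a uniform random point in a q-dimensional
    hypercube lands inside the inscribed ball:
    pi^(q/2) / (2^q * Gamma(q/2 + 1))."""
    return math.pi ** (q / 2) / (2 ** q * math.gamma(q / 2 + 1))

for q in range(1, 11):
    print(q, ball_fraction(q))  # q = 2 gives pi/4, q = 3 gives pi/6
```

By ten dimensions the fraction has dropped below one percent, which is the curse of dimensionality in miniature.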

This leads us to the following ABC algorithm.

ABC2
1. Decide on the acceptance number $n$.
2. Sample a proposal $\hat{\theta}$ from the prior $p(\theta|M)$.
3. Calculate the observed statistic $s^*=S(x^*)$.
4. Generate a dataset $\hat{x}$ from the likelihood $p(X|M,\hat{\theta} )$.
5. Calculate the statistic $\hat{s} =S(\hat{x} )$ for the generated dataset.
6. Accept $\hat{\theta}$ if $\hat{s} =s^*$, else reject.
7. Repeat steps 4-6 until $n$ proposals have been accepted.
8. Estimate the posterior expectation of $\theta$ as $\mathbb{E} (\theta|M,X=x^*) \simeq\frac{1} {n} \sum_{k=1}^n \hat{\theta}_k$, the mean of the accepted proposals.
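The steps above can be sketched as a generic rejection sampler in Python (the posts contain no code). The prior sampler, simulator, and summary function are placeholders for whatever model is at hand; the coin-flip model at the bottom is a toy of my own choosing, purely to show the function running:

```python
import random

def abc2(prior_sample, simulate, summary, x_obs, n):
    """ABC rejection sampling with a summary statistic: accept a
    proposal when the summary of its simulated dataset exactly matches
    the observed summary, and return the mean of n accepted proposals."""
    s_obs = summary(x_obs)              # step 3: observed statistic s*
    accepted = []
    while len(accepted) < n:
        theta = prior_sample()          # step 2: proposal from the prior
        x_hat = simulate(theta)         # step 4: dataset from the model
        if summary(x_hat) == s_obs:     # steps 5-6: compare statistics
            accepted.append(theta)
    return sum(accepted) / n            # step 8: mean of accepted proposals

# Toy model: 5 coin flips with unknown heads probability, summary = head count.
random.seed(1)
estimate = abc2(
    prior_sample=lambda: random.choice([k / 10 for k in range(11)]),
    simulate=lambda p: [random.random() < p for _ in range(5)],
    summary=lambda flips: sum(flips),
    x_obs=[True] * 5,
    n=100,
)
```

Since the observed record is all heads, the accepted proposals skew towards high heads probabilities, as the posterior should.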

Since the proposals are only accepted when the generated statistic is equal to the observed statistic, the accepted proposals are taken exactly from the distribution $p(\theta|S=s^*)$. This is equal to the posterior distribution $p(\theta|X=x^*)$ if the statistic is sufficient.

Let us consider the Lucky Dip example again. The observed player record is $(3,0,0,2,3)$, and we can show that counting the frequency of each outcome – in this case $(2,0,1,2)$ – is sufficient. In fact, we can show it’s minimal sufficient. More on this later. For a tank of $f$ fish, the ABC method for approximating the number $r$ of red fish is the following.

ABC2FISH
1. Decide on the acceptance number $n$.
2. Sample a proposal $\hat{r} \in \{0,1,2,\ldots,f\}$, with each possibility equally likely.
3. Generate a play record of five players, where each player starts with $\hat{r}$ red fish and $f-\hat{r}$ blue fish in the tank.
4. Count the number of losers, and the number of players that win on the first, second, and third draws, to calculate the generated statistic.
5. Accept $\hat{r}$ if the generated statistic matches the observed statistic $(2,0,1,2)$, else reject.
6. Repeat steps 2-5 until $n$ proposals have been accepted.
7. Estimate the posterior expectation of $r$ as $\mathbb{E} (r|M,X=(3,0,0,2,3)) \simeq\frac{1} {n} \sum_{k=1}^n \hat{r}_k$, the mean of the accepted proposals.
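Here is a sketch of ABC2FISH in Python. No code appears in the post itself, and the tank size $f=10$ used at the bottom is my own stand-in, since the true value isn’t restated here:

```python
import random

def play(r, f):
    """One player's outcome: the draw (1-3) on which the first red fish
    is caught, or 0 for a loss.  Draws are without replacement from a
    tank of r red and f - r blue fish."""
    tank = [True] * r + [False] * (f - r)   # True marks a red fish
    caught = random.sample(tank, 3)         # the player's three draws
    for draw in (1, 2, 3):
        if caught[draw - 1]:
            return draw
    return 0

def statistic(record):
    """Count losers and wins on draws 1-3: the summary (x0, x1, x2, x3)."""
    return tuple(record.count(i) for i in range(4))

def abc2fish(n, f, s_obs=(2, 0, 1, 2), players=5):
    accepted = []
    while len(accepted) < n:
        r_hat = random.randint(0, f)        # flat prior over {0, ..., f}
        record = [play(r_hat, f) for _ in range(players)]
        if statistic(record) == s_obs:      # accept on matching statistics
            accepted.append(r_hat)
    return sum(accepted) / n

random.seed(2)
print(abc2fish(n=20, f=10))  # estimate of the posterior mean of r
```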

This is like the ABC1FISH algorithm we used before for the same problem, but uses a summary statistic with four dimensions instead of the five-dimensional data. That isn’t a great reduction, but consider that, if the play record was longer, the statistic wouldn’t increase in size. For large amounts of observed data, this statistic will thus effect a large reduction in dimensionality.

Again, I took a hundred ABC estimates for each value of $n$. The results are given below.

1. Stripping away irrelevant information should give similar results in less time. Comparing the above boxplot with the one for ABC1FISH will show the results to be roughly the same. Since we haven’t lost any relevant information, this is to be expected. However, where ABC1FISH took a day or two to run, ABC2FISH took a few hours.

2. It is simple to find a sufficient statistic for such a simple problem. In fact, if the likelihood is known we can also find the minimal sufficient statistic, since this is simply a matter of listing all the expressions in the likelihood in which the data elements appear.

For the example above, the likelihood of a single play, for a given parameter value, depends only on the outcome. The total likelihood of all the plays is then the product of the single-play likelihoods, so it is equal to the product of the likelihoods associated with each outcome, each raised to the power of the number of plays with that outcome:

$\mathbb{P}(x_0 \textrm{ losers, } x_1 \textrm{ wins on first draw, etc.} | \theta) =\prod_{i=0}^3 \mathbb{P}(\textrm{Result } i | \theta)^{x_i} .$
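For the tank model, the single-play probabilities in that product can be written down directly, by multiplying the per-draw probabilities. A sketch in exact arithmetic (the function name is mine; the posts contain no code):

```python
from fractions import Fraction

def play_probs(r, f):
    """Exact single-play probabilities (lose, win on draw 1, 2, 3) for
    a player drawing up to three fish without replacement from a tank
    of r red and b = f - r blue fish."""
    b = f - r
    win1 = Fraction(r, f)
    win2 = Fraction(b, f) * Fraction(r, f - 1)
    win3 = Fraction(b, f) * Fraction(b - 1, f - 1) * Fraction(r, f - 2)
    lose = Fraction(b, f) * Fraction(b - 1, f - 1) * Fraction(b - 2, f - 2)
    return lose, win1, win2, win3

# The four outcomes are exhaustive, so the probabilities sum to one.
print(sum(play_probs(4, 10)))  # 1
```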

Without the likelihood, this is less simple, and often impossible. There’s a growing amount of research on the optimal choice of summary statistic, but the chosen statistic will usually not be sufficient, and the ABC estimate will converge to the wrong value as the acceptance number increases. However, the increase in acceptance probability is considered worth it, as having a higher acceptance rate allows more accepts – a larger $n$ – for the same running time, which will decrease the error up to a point.

3. If any relevant element of the data is continuous, this still isn’t good enough, because our chance of getting any value for the statistic is still zero. This requires other measures, such as the concept of tolerance introduced below. Given such measures, summary statistics are still useful for the same reasons, so they are worth explaining first.

2. Tolerance

If the data is continuous, the probability of getting an exact match between datasets is zero, so at some point you have to say “close enough”. One way to do this is to decide on some way to measure the distance between two datasets, and accept the proposal if this distance is less than a certain value.

For example, say we define the distance between two datasets as the Euclidean distance, i.e. calculate the difference between each pair of respective elements, and take the square root of the sum of their squares. Then we’d accept any simulated data that lies within a ball around the observed data, with a radius equal to the tolerance. In one dimension this would be a symmetric interval, in two a circle, in three a sphere, and so on. Perhaps this would be a better place to explain the curse of dimensionality in terms of balls in boxes, but no matter.
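As a concrete sketch (in Python, with my own function names), the distance and the acceptance rule are only a few lines:

```python
import math

def euclidean(x, y):
    """Euclidean distance between two equal-length datasets."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def accept(s_hat, s_obs, delta):
    """Accept when the simulated statistic lies inside the ball of
    radius delta around the observed statistic."""
    return euclidean(s_hat, s_obs) <= delta

print(accept((3, 0, 1, 2), (2, 0, 1, 2), delta=1))  # True: distance is 1
```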

Introducing tolerance leads to the following algorithm.

ABC3
1. Decide on the acceptance number $n$, the distance metric $\|\cdot\|$ , and the tolerance $\delta$.
2. Sample a proposal $\hat{\theta}$ from the prior $p(\theta|M)$.
3. Calculate the observed statistic $s^*=S(x^*)$.
4. Generate a dataset $\hat{x}$ from the likelihood $p(X|M,\hat{\theta} )$.
5. Calculate the statistic $\hat{s} =S(\hat{x} )$ for the generated dataset.
6. Accept $\hat{\theta}$ if $\left\|\hat{s} -s^*\right\| \leq \delta$, else reject.
7. Repeat steps 4-6 until $n$ proposals have been accepted.
8. Estimate the posterior expectation of $\theta$ as $\mathbb{E} (\theta|M,X=x^*) \simeq\frac{1} {n} \sum_{k=1}^n \hat{\theta}_k$, the mean of the accepted proposals.
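To see the tolerance doing real work, here is a sketch on a toy continuous model of my own (not from the posts): a flat prior on $[-5,5]$ for the mean of a unit-variance normal, with the single observation itself as the statistic. With a zero tolerance nothing would ever be accepted:

```python
import random

def abc3(n, delta, x_obs=1.0):
    """ABC rejection sampling with tolerance delta on a continuous toy
    model: theta ~ Uniform(-5, 5), data is one draw from N(theta, 1)."""
    accepted = []
    while len(accepted) < n:
        theta = random.uniform(-5, 5)       # proposal from the flat prior
        x_hat = random.gauss(theta, 1)      # simulated dataset (one point)
        if abs(x_hat - x_obs) <= delta:     # accept within the tolerance
            accepted.append(theta)
    return sum(accepted) / n

random.seed(3)
estimate = abc3(n=200, delta=0.1)
print(estimate)  # near the posterior mean, which is close to x_obs = 1.0
```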

The accepted proposals are now taken from the distribution $p(\theta|\|S-s^*\|\leq\delta)$, and we hope this is close enough to the true posterior to not introduce too much error.

Let us consider the Lucky Dip example again. We choose the Euclidean distance metric, and the ABC method for approximating the number $r$ of red fish is now the following.

ABC3FISH
1. Decide on the acceptance number $n$ and the tolerance $\delta$.
2. Sample a proposal $\hat{r} \in \{0,1,2,\ldots,f\}$, with each possibility equally likely.
3. Generate a play record $\hat{x}$ of five players, where each player starts with $\hat{r}$ red fish and $f-\hat{r}$ blue fish in the tank.
4. Count the number of losers, and the number of players that win on the first, second, and third draws, to calculate the generated statistic $\hat{s}$.
5. Accept $\hat{r}$ if $\|\hat{s} -(2,0,1,2) \|\leq\delta$, else reject.
6. Repeat steps 2-5 until $n$ proposals have been accepted.
7. Estimate the posterior expectation of $r$ as $\mathbb{E} (r|M,X=(3,0,0,2,3)) \simeq\frac{1} {n} \sum_{k=1}^n \hat{r}_k$, the mean of the accepted proposals.

Here are some results, with the tolerance set to one. This means a proposal is accepted if the simulated data is equal to the observed data, or one element is off by one. The acceptance region is technically a ball, but since the data is discrete on a uniform grid, a tolerance of one results in acceptable datasets being in the shape of a cross.

Here’s another with the tolerance set to 12, which I’ll call ABC3aFISH.

1. Since the data in the example is discrete, and the space of possible statistics is small, using a non-zero tolerance is probably not justified. Still, it’s good enough to illustrate the idea.

2. ABC3FISH took an hour and three quarters, not much less than the zero-tolerance ABC2FISH algorithm. ABC3aFISH, on the other hand, took a minute and a half.

3. While ABC3FISH converges roughly to the same answer, ABC3aFISH converges to the prior estimate of $50\%$. This is because the tolerance is so high that every proposal is accepted. Since this makes the condition $\|S-s^*\|\leq\delta$ tautological, ABC3aFISH is effectively sampling from the prior. Indeed, the leftmost box shows individual proposals to be roughly evenly distributed across the entire range, as expected from our flat prior.

On the other hand, ABC3FISH and ABC2FISH are slower, but their sampling is closer to the posterior. Setting the tolerance is thus a balance between the number and the accuracy of accepted proposals, or between the known prior and the unknown posterior.

3. Choices of Tolerance

Now that we can reduce the computation time if need be, let’s think about how to keep our estimates accurate while doing so. Specifically, what value should we choose for the tolerance?

I mentioned at the beginning of the post that a higher acceptance rate made up for non-sufficient statistics, up to a certain point. The same is true for non-zero tolerances. As the number of proposals increases, the estimate will tend towards the incorrect posterior expectation. Decreasing the tolerance would result in the estimate tending towards a more-correct expectation, but the lower rate of acceptance would mean the convergence would be slower.

Say we draw the expected error against the number of proposals – computation time – for both choices of tolerance. The curve for each would decrease as computation time increases, but flatten out at a certain level. The lower tolerance’s curve would fall more slowly at first, but eventually overtake the higher tolerance’s curve as the latter flattens out, before itself flattening out at a lower level.

The lower tolerance will thus give a better estimate once the number of proposals is high enough. How large is this critical number? Who knows? Outside of simple examples – where the real answer is known, and so is the error of estimates – this is infeasible to find. Still, we can at least say that the optimal choice of tolerance decreases as the number of proposals increases.

What if we want something more specific? How about how quickly the optimal tolerance drops, or how quickly the error drops? It depends on how the error is defined, which may vary between individual problems. I’ll introduce one definition next time. The introductory overview’s done, unless I write about more complex variants of ABC some time. Things become more specific, and mathematical, from here.

# Introduction to Approximate Bayesian Computation

A paper called The Rate of Convergence of Approximate Bayesian Computation, of which I’m a co-author, went online recently. As such, I’m going to break from beginner-friendly posts for a while, and outline the subject that the paper’s about. My goal is to work up to talking about the results proved in the paper.

We’re going to assume we want to make inference about some process, using some observations $x^*$. An example would be the observed outcomes for previous players in a game of Pólya’s Lucky Dip. To understand the process, we have at least one model $M$ of how the process works, which takes some model parameters $\theta$ and generates some observations $X$.

We obviously need to work out whether the model is correct, but we’d also like to work out what the model parameters are if it is correct. This would then allow us to have a more complete model of how the process works. More importantly, knowledge of the parameters would allow us to make predictions about data observed in the future. We can thus forecast future results, and also use results we observe later to decide whether the model is adequately describing reality. Compare this to just predicting the values of the model parameters, which aren’t observable, and usually don’t exist in reality.

Let’s discuss the inference of the model parameters first. Under the Bayesian framework, we say that our belief in the values of the parameters, before making observations, has the prior probability density $p(\theta|M)$. We then want to update our belief after making observations, giving the posterior density $p(\theta|M,X=x^*)$. The usual way to find this is to use Bayes’s Theorem:

$p(\theta|M,X=x^*) =p(\theta|M) \frac{\mathbb{P} (X=x^*|M,\theta) } {\mathbb{P} (X=x^*|M) } .$

One possible problem is that the scaling factor $\mathbb{P} (X=x^*|M)$ can’t be calculated exactly. There are various ways to deal with this, such as numerically integrating $\int p(\theta,X=x^*|M) \,\mathrm{d}\theta$. I won’t discuss these any further. Another, more complicated problem is when we cannot calculate the likelihood $\mathbb{P} (X=x^*|M,\theta)$ – a measure of how likely the observed data was to occur in comparison to different possible data – or when doing so is computationally infeasible.

This occurs rather often, because, if we wish to make inferences about a process that isn’t easily simplified, the way in which data is generated by the model can be exceedingly complicated. Typical problems we’d be looking at in this context are: trying to work out when different species of animal diverged from each other by examining fossilised DNA samples; trying to work out how modern humans spread out of Africa; analysing how the SARS virus spread in Hong Kong. Messy problems, complicated problems, problems that can’t be reduced to a model with simple likelihoods without losing important properties of how the process works.

So, what can we do instead? We can try some sort of Monte Carlo method. Approximate Bayesian Computation, or ABC, is one of the more naïve methods, because it doesn’t assume anything outside of what I mentioned above: the model $M$, the prior $p(\theta|M)$, and a method to generate data from model parameters, which is described by the incalculable likelihood $p(X|M,\theta)$ for any data value $X$.

There are about as many variants of ABC as there are people using it, if not more. I’ll give a few basic versions as we go. First, I’ll outline the basic philosophy: we generate samples of the model parameters $\theta$ from the prior distribution, then put these into the model to generate a sample dataset. We then compare this sample dataset to the observed dataset to decide how to use the proposed parameter values. We do this for a bunch of generated parameter proposals. Informed by the data comparisons, we then use these proposals to make an estimate of whatever we want to know about the model parameters.

There’s a lot of leeway in that description, so here’s a very basic example:

ABC1
1. Decide on the acceptance number $n$.
2. Sample a proposal $\hat{\theta}$ from the prior $p(\theta|M)$.
3. Generate a dataset $\hat{X}$ from the likelihood $p(X|M,\hat{\theta} )$.
4. Accept $\hat{\theta}$ if $\hat{X} =x^*$, else reject.
5. Repeat steps $2-4$ until $n$ proposals have been accepted.
6. Estimate the posterior expectation of $\theta$ as $\mathbb{E} (\theta|M,X=x^*) \simeq\frac{1} {n} \sum_{k=1}^n \hat{\theta}_k$, the mean of the accepted proposals.

Since the proposals are only accepted when the generated data is equal to the observed data, the accepted proposals are taken exactly from the posterior distribution $p(\theta|X=x^*) .$

For example, say we’re again trying to get red fish out of a tank of red and blue fish. There are $f$ fish, and our prior on the number of red fish in the tank is the flat prior $p(r)=1/(f+1)$ for $r\in\{0,1,2,\ldots,f\} .$ Each player has three attempts to draw out a red fish, before he replaces his fish and the next player has a go. We observe several players, with a play record such as, say,

$(3,1,2,1,0,1,\ldots) ,$

where $0$ indicates a player losing, and a number in $\{1,2,3\}$ indicates a player winning on that draw. Our model $M$ will be the assumption that each remaining fish in the tank has an equal chance of being drawn each time.

Let’s say our play record is $(3,0,0,2,3) .$ Then our simple ABC method for approximating the number $r$ of red fish becomes the following.

ABC1FISH
1. Decide on the acceptance number $n$.
2. Sample a proposal $\hat{r} \in \{0,1,2,\ldots,f\} ,$ with each possibility equally likely.
3. Generate a play record of five players, where each player starts with $\hat{r}$ red fish and $f-\hat{r}$ blue fish in the tank.
4. Accept $\hat{r}$ if the generated play record matches the observed record $(3,0,0,2,3)$, else reject.
5. Repeat steps $2-4$ until $n$ proposals have been accepted.
6. Estimate the posterior expectation of $r$ as $\mathbb{E} (r|M,X=(3,0,0,2,3)) \simeq\frac{1} {n} \sum_{k=1}^n \hat{r}_k$, the mean of the accepted proposals.

I took a hundred such estimates for several values of $n$. The results are given as boxplots below. As expected, the boxplots appear to shrink in proportion to $1/\sqrt{n}$.

1. It can take a long time to accept any proposals if the play record is large, or is a record we would consider an extremely rare occurrence under the prior. For example, even for such a seemingly simple problem as this one, the above picture took a day or two to produce, so one estimate with $n=1000$ would have taken a few hours.

This is partly because the algorithm is being overly picky about whether to accept a proposal. It wants the record to be exactly the same, with the same order, when all we want is for the frequency of each player outcome to be the same. In this case, that means two losers, no winning first draws, one winning second draw, and two winning third draws. This can be mitigated by introducing summary statistics, which I’ll talk about next time.

2. Our estimate is in the form of a point estimate rather than an interval. That means it doesn’t give us any idea of how uncertain we are about the resulting estimate. This can be mitigated by using the accepted proposals to make an estimate of the posterior density rather than the expectation. Obviously we shouldn’t just assume there’s no uncertainty at all.

3. Both of the above can lead to making an estimate from a small number of accepted proposals. This can lead to some stupid reasoning if we assume there’s no uncertainty in the estimate. For example, say our record is $(1,1,1,1,1,1) ,$ i.e. we had six players, and they all happened to win immediately. We’d expect the resulting estimate to tend towards a high number of red fish. However, if our only accepted proposal happens to be $\hat{r} =1,$ our resulting estimate would guess the number of red fish to be as small as practically possible. A result this heinous won’t happen too often, and the chance of it happening will go down as we accept more proposals, but with a point estimate we have no idea how likely events like this are for a particular acceptance number.

4. Since the number of proposals generated before accepting $n$ of them is random, the computation time of the ABC method above is random. This is often inconvenient for practical purposes, since we might be under time constraints for coming up with an estimate. A common alteration is to set the number $N$ of generated proposals instead, in which case $n$ is random. I won’t talk about this much, partly to avoid discussing what happens when no proposals are accepted.

5. I’ve said nothing at all about making forecasts about future observables, which is unsatisfactory for the reasons I gave in the third paragraph. This doesn’t really get done with ABC, and it’s not clear whether there’s an efficient way to do so, since the usual way to make forecasts requires knowledge of the likelihood. You can’t use all the records generated, since they’re distributed based on the prior, and you can’t use the ones associated with the accepted proposals, since they’re all equal to our observed record. That would lead us to conclude that future play records are certain to be the same as the one we observed, which is absurd. My first guess would be to take each of the accepted proposals, and generate a set number of new records for each one, but this is infeasible for the more complex problems ABC is usually used for.

To sum up, ABC is simple to use, because it doesn’t use any assumptions outside of the prior and how to generate data from parameters under the model assumptions. However, we pay for the naïvety of this method by getting a point estimate with no sense of uncertainty, and, more importantly, with its taking a long time to calculate anything. The latter is particularly heinous – if the simple problem above would take a few hours to get a decently accurate estimate, imagine how long it takes to estimate how modern humans spread out of Africa. I’ll talk about ways of cutting the computation time in the next post.

# Monte Carlo Example: Pólya’s Lucky Dip

Edit (2013/10/04): Under the first picture below, I mention a line which should be on all the pictures, but isn’t. This line should be at around 25 for all of them.

Probability has a few standard analogies. Let’s get to grips with one of them.

Let’s say we sit Ronald down in front of a bucket of red and blue magnetic toy fish, and the red fish have a prize written on them. He then catches one of the fish with a magnetic rod. It turns out to be blue, so he stores the fish by clipping it to his beard, and tries again. Assume the rather unlikely case where he knows that there are ten red fish and a hundred blue ones. What’s the chance of winning if he can catch up to three fish?

The standard mathematical way to solve this is to compare the picking-out of fish to one of the classic examples of picking balls out of an urn. But, frankly, it’s early in the morning, and I don’t want to deal with binomial coefficients before I’ve had a few drinks. What I could do with is convincing Ronald to sit around playing lucky dips all day, and see how often he wins. Since it’s a bit late to ask Ronald to fish, I’ll use a computer instead.
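The post shows only the results, so here is a sketch of what the program might look like (in Python, which the post doesn’t use itself; the counts of ten red and a hundred blue fish come from the setup above):

```python
import random

def ronald_wins(red=10, blue=100, draws=3):
    """One lucky dip: up to three catches without replacement,
    winning if any catch is a red fish."""
    tank = [True] * red + [False] * blue    # True marks a red fish
    return any(random.sample(tank, draws))  # draws without replacement

def estimate_win_probability(plays):
    """Guess the win probability as the fraction of simulated wins."""
    return sum(ronald_wins() for _ in range(plays)) / plays

random.seed(4)
for plays in (10, 100, 1000, 10000):
    print(plays, estimate_win_probability(plays))
```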

If we’d had Ronald to do this, I’d have to choose how many times he got to have a go. Since we’re using a computer, I’m just going to pick a few different numbers, and see what difference it makes. After a break for a drink, this is what I came back to.

The dots are the guesses, the line is what I know the true probability to be. However, if I run this again, the result can be very different.

Our guesses have a random element to them, so that shouldn’t be surprising. If I let Ronald play ten times, and then ten times again, the two sets of results needn’t have the same number of winners. What this means is that, if I make several guesses with the same number of plays, they’re going to be spread out. Since, in practice, we’re only going to make one guess, we’d like the spread to be pretty small. Hopefully, we can achieve this by increasing the number of plays.

I reworked the program to take a hundred guesses for each number of plays, and then use them to draw boxplots. After a bigger drink, I came back to this.

If you’re not used to boxplots, half of the estimates are inside the box, and the other half are inside the dotted lines. Dots are outliers that I’m just going to ignore here. The boxes get smaller as the number of plays increases, and the box for 10000 plays is tiny. In other words, increasing the number of plays decreases how spread out the estimates will be.

What we can say is that, if you wanted your guess to be precise to within a percentage point or two, you’d need to simulate about ten thousand goes. Maybe it’s just as well we didn’t ask Ronald; I’m out of drinks to bribe him with for that long.

Does the box shrink towards the correct value? We don’t know, unless we work it out. As it happens, my throat is now wet enough that I feel up to doing so on paper. The true probability of winning is $\frac{82}{327}$, or about 25%, so it looks like the guesses tend towards the correct value as we increase the number of plays. It also looks like this would be a terrible lucky dip from the point of view of the person paying for the prizes, but never mind.

This is mainly because we know how to make direct guesses. By that, I mean the new data we’re generating is a direct statement about what we think the probability is. What we didn’t have to do, for example, was to be given two sets of Ronald’s win frequencies, and have to guess at whether he was fishing from the same bucket each time. We could generate more win frequencies, but those aren’t something we can directly use as a statement of whether or not the bucket is the same.

This requires more clever methods, and these more clever methods don’t necessarily tend to the correct answer if we run them for longer. That might be because the method we decide to use is not very good. It might be because the data we can generate is so uninformative about the answer that deriving one is going to introduce errors, regardless of what we do. But that deserves a separate post.

Why didn’t I just have a few drinks first, and go straight to getting the right answer? Well, I could have done, but sometimes you don’t have that option. More on that another time.