# Principal Components Analysis, Eigenfaces, and Karhunen-Loève Expansions: Part One

Last time, we were talking about interpolation and orthogonal polynomials. For this series, we’re also going to end up talking about orthogonal vectors and orthogonal functions. But first, we’ll look at some old photos from the Seventies.

## Principal Components Analysis
The Budapest National Gallery has its floors arranged in chronological order, so the first floor is given over to an overwhelming sea of medieval paintings of Bible stories and prophets. Well-painted as they are, when the founder of a country is a King Saint, you know you’re in for the long haul on this floor. However, the visitor who hasn’t run out of patience before they reach the floors for the 20th Century will find a cute pair of photographs by Péter Türk.

One is a collation of shots of Türk’s class. For the second, he chopped each shot into a grid of squares. The top-left squares from all the shots were then placed together in one large square; the squares from the next grid position were placed alongside, and so on. The result is something that, from a distance, looks like a new face.

Class Average I, Péter Türk, 1979.

Class Average II, Péter Türk, 1979.

Today, this would be done by computer — a mechanical Türk, so to speak — and the squares would be blended together instead of being placed in blocks, but what we have here is some idea of a “mean face”, the average of all the individual photos. It’s not exactly like any of the individual photos, but clearly shows a lot of their shared features. Especially the hair. If we let the computer take the mean over each pixel position, rather than placing them next to each other, we obtain something that looks surprisingly human.

The true mean face, or as close as we can get with low-quality online scans of the originals.

Class Average II rescaled, for comparison.
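The pixel-wise mean described above is a one-liner once the photos are loaded as an array. Here is a minimal sketch; the image stack, its shape, and its values are all made up for illustration (real photos would be loaded and aligned first):

```python
import numpy as np

# Hypothetical stack of aligned grayscale photos:
# 25 images, each 64x64 pixels, with values in [0, 1].
rng = np.random.default_rng(0)
photos = rng.random((25, 64, 64))

# The "mean face" is the pixel-wise average across all photos:
# for each (row, col) position, average that pixel over the whole class.
mean_face = photos.mean(axis=0)

print(mean_face.shape)  # (64, 64)
```

The key point is that the averaging happens over the image axis only, so each pixel position is averaged across the class, exactly as in Türk’s collage, just blended instead of tiled.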

It’s worth mentioning — not for the last time — how good a job Türk did at keeping his photographs consistent with regard to face positioning. This is a small group of people, yes, but if we take the eyes as an example, the eye positions are so similar that the mean face still has pupils!

This tinkering is well and good, but consider what we might want to do with a large collection of such photos.
1. Say we wanted to make a digital database with thousands of these photos. Maybe we want to set up some sort of photographic identification system, where we compare a new photo, of someone requesting access to something, against the photos we have on file. How would we decide if the new photo was similar enough to one on the database to grant them access?
2. Along similar lines, suppose we’re not interested specifically in the people in the photos, but we are interested in finding some rules of thumb for telling different people in the same population apart. How would we do so?
3. Now suppose we’d also like to compress the photos somehow, to save on memory. How should we do so while keeping the photos as distinguishable as possible?

One method we can use for this is Principal Components Analysis, which I’ll be talking about over the next few posts. However, here’s a brief taste of what it allows us to do, statistically rather than by guesswork:
1. PCA gives us a way to take a new photo, and make some measure of its “distance” from our originals. We can then decide that it’s a photo of the person in the closest original, or that it’s a new person if all the “distances” are too large.
2. The most important features for distinguishing between people in the above set of photos are the side of their face the light source is on, and how far down their fringe comes.
3. We can compress the photos in such a way that we know how much of the original variance (the distinctiveness between the photos) we keep. If we don’t mind compressing by different amounts for different photos, we could also keep a set amount of distinctiveness for each photo, rather than across the whole group.
4. We can try — with variable levels of success — to generate new faces from scratch.
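To give a concrete flavour of points 1–3, here is a minimal PCA sketch in NumPy. Everything here is illustrative: the photos are random stand-ins for flattened grayscale images, the component count `k` is arbitrary, and a real system would align and normalise the photos first. The principal components are obtained from an SVD of the centred data, which avoids forming the full covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 25 photos, each flattened to a 4096-long vector.
photos = rng.random((25, 64 * 64))

# 1. Centre the data at the mean face.
mean_face = photos.mean(axis=0)
centred = photos - mean_face

# 2. SVD of the centred data: rows of Vt are the principal components
#    (eigenvectors of the sample covariance matrix).
U, s, Vt = np.linalg.svd(centred, full_matrices=False)

# Variance carried by each component, as a fraction of the total.
variances = s**2 / (len(photos) - 1)
explained = variances / variances.sum()

# 3. Compress: represent each photo by its first k component coordinates.
k = 10
coords = centred @ Vt[:k].T               # each photo as k numbers
reconstructed = coords @ Vt[:k] + mean_face

# 4. "Distance" from a new photo to each original, measured in the
#    k-dimensional component space.
new_photo = rng.random(64 * 64)
new_coords = (new_photo - mean_face) @ Vt[:k].T
distances = np.linalg.norm(coords - new_coords, axis=1)
closest = distances.argmin()              # index of the nearest original
```

The fraction `explained[:k].sum()` is exactly the “how much distinctiveness do we keep” figure from point 3, and thresholding `distances.min()` gives the accept/reject decision from point 1.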

Also worth noting is that, apart from 4., all of this can be done with only a covariance estimate: we make no assumptions about the distribution the photos are drawn from.

We’ll come back to these photos, and these applications, later in the series. Next time, we’ll look at something a bit simpler first.