Let’s try this blog thing again

I haven’t posted anything for quite a while — the thesis wasn’t going to write itself — so I’m going to take another stab at this. In the meantime, some short observations:

1. Looking back at old posts, I probably stopped writing because I was worrying too much about writing rigorously enough. That’s not at all what I do in person, so I’ll try to loosen the writing up a bit. I might then finish the next PCA post sometime in the next two years.

2. There’s a decent backlog of mathematical ideas I can write about. I went on an input-output economics and central place theory kick for a little while. The short postdoc I’ve been doing has introduced me to some aspects of applied statistics, such as sensitivity analysis, and to how irritating MCMC is to tune. The thesis is done and will soon be freely available online, so I can try to write about that a bit too.


In Draft This Week

It’s been a while since I’ve posted anything, so here are a few quick notes about things I’ve been learning and working on. I’ll probably write about these fairly soon.

1. Orthogonal polynomials are helpful when looking at Gaussian quadrature, an elegant extension of the standard quadrature rules I was taught. Not quite statistics, but approximation theory is close enough. (There’s a small code sketch of the idea after this list.)

2. Karhunen-Loève expansions, which can be thought of as the equivalent of Principal Component Analysis for continuous stochastic processes. Instead of finding the eigenvectors of a covariance matrix, to put the data along uncorrelated axes ordered by the amount of variance they account for, you find the eigenfunctions of an autocovariance function, with independent random weights. This leads to slightly strange formulations: Brownian motion, a process that is nowhere differentiable, can be written as an infinite sum of sine functions, each of which is infinitely differentiable, and the sum almost surely converges to the Brownian motion everywhere. (There’s a quick simulation of this after the list.)

3. I’ve tried adding another distribution to the R script for plotting ABC posteriors. Specifically, I’ve added the case with a Poisson likelihood, since this is the only other one that can be done without adding distributions from other packages. The sliders I’m currently using make discrete distributions rather odd to move around, though, so a new version’s on hold until I get around to adding options for other continuous cases. (A bare-bones example of the Poisson case is sketched after this list.)

4. The series on our paper from last year hit a roadblock when I started fussing over how much detail to give concerning asymptotics and Taylor’s theorem. The answer for the latter is probably a lot less than I’ve been trying to include, so once I’ve done a few quick posts on the topics above, I’ll get back to finishing it off.
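
Since the link in item 1 between orthogonal polynomials and quadrature fits in a few lines of R, here’s a minimal sketch of the Golub-Welsch construction, which reads Gauss-Legendre nodes and weights off the eigendecomposition of the Jacobi matrix built from the Legendre three-term recurrence. The function name is just mine for illustration.

```r
# Gauss-Legendre nodes and weights on [-1, 1] via Golub-Welsch:
# eigendecompose the (symmetric, tridiagonal) Jacobi matrix of the
# Legendre three-term recurrence.
gauss_legendre <- function(n) {
  k <- seq_len(n - 1)
  off <- k / sqrt(4 * k^2 - 1)           # off-diagonal entries of the Jacobi matrix
  J <- matrix(0, n, n)
  J[cbind(k, k + 1)] <- off
  J[cbind(k + 1, k)] <- off
  e <- eigen(J, symmetric = TRUE)
  list(nodes = e$values,                 # nodes are the eigenvalues
       weights = 2 * e$vectors[1, ]^2)   # weights from the first eigenvector entries
}

# Five nodes already integrate exp(x) over [-1, 1] essentially exactly:
q <- gauss_legendre(5)
sum(q$weights * exp(q$nodes))            # about 2.350402, versus exp(1) - exp(-1)
```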
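
To go with item 2, here’s a small simulation of Brownian motion on [0, 1] built from a truncated Karhunen-Loève expansion. Every term in the sum is a smooth sine function with an independent Gaussian weight, yet with a few hundred terms the result already looks like a rough Brownian path. The function name and truncation level are just illustrative choices.

```r
# Truncated Karhunen-Loeve expansion of Brownian motion on [0, 1]:
#   W(t) = sum_k Z_k * sqrt(2) * sin((k - 1/2) * pi * t) / ((k - 1/2) * pi),
# with the Z_k independent standard normals.
kl_brownian <- function(t, K = 500) {
  Z <- rnorm(K)                            # independent random weights
  freq <- (seq_len(K) - 0.5) * pi          # frequencies (k - 1/2) * pi
  phi <- sqrt(2) * sin(outer(t, freq))     # eigenfunctions, one column per term
  drop(phi %*% (Z / freq))                 # term k gets weight Z_k / ((k - 1/2) * pi)
}

t <- seq(0, 1, length.out = 1000)
plot(t, kl_brownian(t), type = "l", ylab = "W(t)")
```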
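
Finally, for item 3: this is not the slider script, just a bare-bones illustration of an ABC posterior with a Poisson likelihood using only base R distributions. It puts a Gamma prior on the rate and keeps the prior draws whose simulated sample mean lands close to the observed one; the hyperparameters and tolerance below are made up, and the Gamma prior is chosen so the ABC histogram can be checked against the exact conjugate posterior.

```r
set.seed(1)
y <- rpois(20, lambda = 3)          # "observed" data
a <- 2; b <- 1                      # Gamma(shape, rate) prior hyperparameters
N <- 1e5                            # number of prior draws to try
eps <- 0.1                          # tolerance on the summary statistic

lambda_prior <- rgamma(N, a, b)
sim_means <- sapply(lambda_prior, function(l) mean(rpois(length(y), l)))
accepted <- lambda_prior[abs(sim_means - mean(y)) <= eps]

# Compare against the exact conjugate posterior, Gamma(a + sum(y), b + n):
hist(accepted, freq = FALSE, breaks = 40,
     main = "ABC posterior for lambda", xlab = "lambda")
curve(dgamma(x, a + sum(y), b + length(y)), add = TRUE, lwd = 2)
```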

Introduction

During my maths undergrad degree, I did no Statistics, only Probability. This led to a few conversations with medics who were horrified that I’d made it through my degree without knowing what a p-value was. I’m now a year into a Statistics research postgrad. I know what a p-value is, even if I’d rather never use one.

This blog will contain occasional ramblings on Statistics. I’ll try to make it as jargon-free as I can. That might fall down once I have to explain what a density function is. For those familiar with Statistics, be aware that my lack of undergrad training in it means I’ve come to the subject from the Bayesian viewpoint, so my knowledge of classical methodology is extremely limited.

The first post will be a simple overview of what my research area, Monte Carlo methods, is all about.