Gaussian Quadratures

Last time I talked about quadratures and orthogonal polynomials, so now it’s time to combine the two.

Theorem: A quadrature scheme with n interpolation points will be exact for polynomials of order up to 2n-1 if the quadrature points are chosen to be the roots of the associated nth-order orthogonal polynomial.

Note that, for an arbitrary choice of sample points, we would generally expect n points to give a quadrature that is exact for polynomials of order n-1, so 2n-1 is a substantial improvement. For example, a single interpolation point can exactly integrate polynomials of order one rather than just order zero (as we had in our original example), and two interpolation points can handle polynomials of order three rather than order one, better still than our original example. Three points can integrate exactly up to order five!
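As a quick sanity check, here is a minimal sketch in Python (the test cubic is my own choice, and the two-point nodes ±1/√3 are derived below): a two-point rule placed at the interval endpoints gets a cubic badly wrong, while the same weights at the optimal nodes reproduce the integral exactly.

```python
import numpy as np

# Test cubic and its exact integral over [-1, 1].
p = lambda x: x**3 + x**2          # the x^3 term integrates to zero by symmetry
exact = 2.0 / 3.0                  # integral of x^2 over [-1, 1]

# Two-point rule at the endpoints (trapezoidal rule): weights 1, 1.
endpoints = 1.0 * p(-1.0) + 1.0 * p(1.0)

# Two-point rule at the Gauss nodes +-1/sqrt(3): weights 1, 1.
gauss = 1.0 * p(-1.0 / np.sqrt(3)) + 1.0 * p(1.0 / np.sqrt(3))

print(f"exact = {exact:.6f}, endpoints = {endpoints:.6f}, Gauss = {gauss:.6f}")
# exact = 0.666667, endpoints = 2.000000, Gauss = 0.666667
```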

Proof: We can split any (2n-1)-order polynomial P_{2n-1}(x) according to the orthogonal polynomial f_n(x). In particular, we can use the decomposition

P_{2n-1}(x) =P_{n-1}(x)f_n(x)+Q_{n-1}(x),

where P_{n-1} and Q_{n-1} are polynomials of order n-1. Since f_n is predetermined, specifying a (2n-1)-order polynomial, which has 2n coefficients, amounts to choosing the polynomials P_{n-1} and Q_{n-1}, which have 2n coefficients between them. We have exactly as many free coefficients as we need, so this decomposition is always possible; equivalently, it is simply polynomial division of P_{2n-1} by f_n, with quotient P_{n-1} and remainder Q_{n-1}.
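To make the decomposition concrete, here is a small sketch (my own example, using NumPy and hard-coding the Legendre polynomial P_3(x) = (5x^3 - 3x)/2 as f_3) that divides a fifth-order polynomial by f_3 and recovers a quotient and remainder of order two each:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Coefficients are listed from lowest to highest order.
p5 = np.array([1.0, 0.0, 2.0, 0.0, -1.0, 3.0])   # 3x^5 - x^4 + 2x^2 + 1
f3 = np.array([0.0, -1.5, 0.0, 2.5])             # Legendre P_3(x) = (5x^3 - 3x)/2

# Polynomial division: p5 = quotient * f3 + remainder,
# with both quotient and remainder of order n-1 = 2 here.
quotient, remainder = P.polydiv(p5, f3)

# Verify the decomposition by multiplying back and adding the remainder.
reconstructed = P.polyadd(P.polymul(quotient, f3), remainder)
print(np.allclose(reconstructed, p5))   # True
```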

When we integrate over this expression, we see that

\int_a^b P_{2n-1}(x)\, \textrm{d}x =\int_a^b (P_{n-1}(x) f_n(x)+Q_{n-1}(x) ) \,\textrm{d}x =\int_a^b Q_{n-1}(x) \,\textrm{d}x ,

since f_n(x) is orthogonal to any polynomial of order n-1, such as P_{n-1}; this holds exactly, regardless of our choice of integration points. We therefore have the equality

\int_a^b P_{2n-1}(x)\, \textrm{d}x =\int_a^b Q_{n-1}(x) \,\textrm{d}x = \sum_{k=1}^n \omega_k Q_{n-1}(x_k) ,

where the second equality uses the fact, noted above, that an n-point quadrature integrates polynomials of order n-1, such as Q_{n-1}, exactly. It remains to show that \sum_{k=1}^n \omega_k P_{2n-1}(x_k) is equal to these expressions. But, if we take the x_k to be the roots of f_n(x), then this follows immediately, since the first term in the decomposition of P_{2n-1}(x_k) vanishes at every x_k, and so P_{2n-1}(x_k) = Q_{n-1}(x_k). \, \square

We can, therefore, choose our interpolation points to be the roots of an orthogonal polynomial to optimise our quadrature method, with respect to the order of polynomials whose integrals are computed exactly. For our case over the interval [-1,1], this gives the first optimal sets of interpolation points as

\left\{0\right\}, \, \left\{-\frac{1}{\sqrt{3}} ,\frac{1}{\sqrt{3}}\right\}, \, \left\{-\sqrt{\frac{3}{5}} ,0,\sqrt{\frac{3}{5}} \right\}, \ldots
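These are the roots of the first few Legendre polynomials, and NumPy will compute them, together with the corresponding weights, via numpy.polynomial.legendre.leggauss. As a quick sketch (the fifth-order test polynomial is my own choice), a three-point rule reproduces its integral over [-1,1] to machine precision:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from numpy.polynomial import polynomial as P

# Fifth-order test polynomial: 3x^5 - x^4 + 2x^2 + 1 (coefficients low to high).
coeffs = np.array([1.0, 0.0, 2.0, 0.0, -1.0, 3.0])
exact = 2.0 - 2.0 / 5.0 + 4.0 / 3.0       # odd terms vanish on [-1, 1]

# Three Gauss-Legendre nodes (the roots of P_3) and their weights.
nodes, weights = leggauss(3)
estimate = np.dot(weights, P.polyval(nodes, coeffs))

print(nodes)                               # approximately [-0.7746, 0, 0.7746]
print(exact, estimate)                     # both 2.9333...
```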

Weighted Quadratures

As a final extension, suppose the function we are integrating is not polynomial, but it is a polynomial times some other function — w(x), say — that is explicitly known. In this case, we wish to estimate

\int_a^b w(x)f(x)\, \textrm{d}x.

For this case, we can proceed as before, except that now we choose polynomials f_i(x) that are orthogonal, in the sense that

\langle f_i,f_j\rangle = \int_a^b w(x)f_i(x)f_j(x)\, \textrm{d}x = 0.

In other words, the known function w(x) becomes a weight function inside the integral, changing our choice of orthogonal polynomials, and hence of interpolation points. Common examples on the interval [-1,1] are the Jacobi polynomials for w(x) = (1-x)^{\alpha}(1+x)^{\beta}, and the Chebyshev polynomials of the first kind for w(x) = 1/\sqrt{1-x^2}.
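As a brief illustration of the Chebyshev case (a sketch assuming NumPy's numpy.polynomial.chebyshev.chebgauss, which returns the nodes and weights for the weight 1/\sqrt{1-x^2} on [-1,1]), two points are enough to reproduce \int_{-1}^{1} x^2/\sqrt{1-x^2}\, \textrm{d}x = \pi/2 exactly:

```python
import numpy as np
from numpy.polynomial.chebyshev import chebgauss

# Gauss-Chebyshev quadrature: n nodes, exact for w(x) f(x)
# whenever f is a polynomial of order up to 2n-1.
nodes, weights = chebgauss(2)

# The weight 1/sqrt(1 - x^2) is built into the weights,
# so we only evaluate the polynomial part f(x) = x^2 at the nodes.
estimate = np.dot(weights, nodes**2)

print(estimate, np.pi / 2)    # both approximately 1.5707963
```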

That’s about all I have to say. Gaussian quadratures are a reasonably straightforward step up from basic quadratures, but they give a sizeable increase in efficiency when we’re dealing with a known weight function multiplying an unknown polynomial term.
