# Chaitanya's Random Pages

## October 14, 2010

### My Six Favourite Formulas – #5

Filed under: mathematics — ckrao @ 10:08 am

Returning to my favourite formulas, we now look at the one I use more frequently than the others: the Gaussian integral.

$\displaystyle \int_{-\infty}^{\infty} e^{-x^2}\ dx = \sqrt{\pi}$

This integral of a bell-shaped function was first evaluated by Laplace in 1782. Early the next century, Gauss used the normal distribution (with density function of the form $\displaystyle ke^{-x^2}$) in the following context: if measurement errors obey this distribution, then the arithmetic mean of the measurements is the optimal estimator in the least squares sense.
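As a quick sanity check of the formula, one can approximate the integral numerically. The sketch below (my own illustration, not from the original post) truncates the domain to $[-10, 10]$, where the tails contribute less than $e^{-100}$, and applies a simple midpoint rule:

```python
import math

def gaussian_integral(a=-10.0, b=10.0, n=100_000):
    """Midpoint-rule estimate of the integral of exp(-x^2) over [a, b].

    The tails beyond |x| = 10 contribute less than exp(-100), so the
    truncated integral is an excellent proxy for the improper one.
    """
    h = (b - a) / n
    return h * sum(math.exp(-(a + (i + 0.5) * h) ** 2) for i in range(n))

print(gaussian_integral())   # close to 1.772453850905...
print(math.sqrt(math.pi))    # 1.7724538509055159
```

The two printed values agree to many decimal places, consistent with $\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}$.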

The most popular proof of the Gaussian integral is to express its square as a double integral (a method due to Poisson) and then change to polar coordinates. If $I$ denotes the integral, we have

$\displaystyle I^2 = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-(x^2 + y^2)}\ dxdy.$

Now introduce the change of variables $(x,y) = (r \cos \theta, r \sin \theta)$, where $r > 0$ and $0 < \theta < 2\pi$, so that

$\begin{array}{lcl} dx dy & = & d(r \cos \theta) \wedge d(r \sin \theta)\\&=& (dr \cos \theta - r \sin \theta \ d\theta) \wedge (dr \sin \theta + r \cos \theta \ d \theta)\\&=&r \cos^2 \theta \ dr \wedge d\theta - r \sin^2 \theta \ d\theta \wedge dr\\&=&r(\cos^2 \theta + \sin^2 \theta)dr \wedge d\theta\\&=& r dr d\theta.\end{array}$
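The computation above says the Jacobian determinant of the polar map equals $r$ at every point. This can be spot-checked numerically with central finite differences (an illustrative sketch I am adding, with an arbitrarily chosen test point):

```python
import math

def polar(r, theta):
    """The polar-coordinates map (r, theta) -> (x, y)."""
    return (r * math.cos(theta), r * math.sin(theta))

def jacobian_det(r, theta, h=1e-6):
    """Jacobian determinant of the polar map via central differences."""
    x_rp, y_rp = polar(r + h, theta)
    x_rm, y_rm = polar(r - h, theta)
    x_tp, y_tp = polar(r, theta + h)
    x_tm, y_tm = polar(r, theta - h)
    dxdr, dydr = (x_rp - x_rm) / (2 * h), (y_rp - y_rm) / (2 * h)
    dxdt, dydt = (x_tp - x_tm) / (2 * h), (y_tp - y_tm) / (2 * h)
    return dxdr * dydt - dxdt * dydr

# The determinant should equal r for any (r, theta), e.g. at (2.5, 1.0):
print(jacobian_det(2.5, 1.0))  # close to 2.5
```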

(Here we are using the language of differential forms to evaluate the Jacobian determinant when changing variables, also used in a previous post here and alluded to here when first explaining differential forms in the generalised Stokes Theorem.)

Then

$\begin{array}{lcl}I^2 &=& \int_0^{2\pi} \int_0^{\infty} r e^{-r^2}\ dr d\theta\\&=& \int_0^{2\pi}\left[-\frac{1}{2}e^{-r^2} \right]_0^{\infty}\ d\theta\\&=&\int_0^{2\pi}\frac{1}{2}\ d\theta\\&=&\pi,\end{array}$

leading to the desired result. I have skipped the details arising from the integrals being improper, but these are easily handled with limits.
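The polar-coordinates evaluation can also be checked numerically. In the sketch below (my own check, not part of the original argument), the inner integral $\int_0^\infty r e^{-r^2}\,dr$ is approximated by a midpoint rule on a truncated domain, and the outer $\theta$ integral simply contributes a factor of $2\pi$:

```python
import math

def radial_integral(b=10.0, n=100_000):
    """Midpoint-rule estimate of the integral of r*exp(-r^2) over [0, b].

    The exact antiderivative is -exp(-r^2)/2, so the true value of the
    improper integral is 1/2; truncating at b = 10 loses less than exp(-100).
    """
    h = b / n
    return h * sum((i + 0.5) * h * math.exp(-((i + 0.5) * h) ** 2)
                   for i in range(n))

I_squared = 2 * math.pi * radial_integral()  # the theta integral gives 2*pi
print(I_squared)  # close to pi
print(math.pi)    # 3.141592653589793
```

This agrees with the derivation: $I^2 = 2\pi \cdot \tfrac{1}{2} = \pi$, hence $I = \sqrt{\pi}$.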

To this day I find this result and derivation remarkable. The $\pi$ appears in the answer because the two-dimensional function $e^{-(x^2+y^2)}$ is rotationally invariant. The Gaussian distribution pops up in many applications, ranging from modelling errors and noise (due to the central limit theorem) to the solution of the heat equation in physics.