If the random variable $X$ has a normal distribution with mean $0$ and variance $1$, its tail probability is given by the integral
$$Q(x) = \mathbb{P}(X > x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-t^2/2}\,dt$$
(that is, the area under the standard normal bell-shaped curve from $x$ up to $\infty$). This integral does not have a closed form in terms of elementary functions (unless $x = 0$, where $Q(0) = \tfrac{1}{2}$). However, we can find a good lower bound:
$$Q(x) \ge \frac{\sqrt{x^2+4} - x}{2}\,\varphi(x), \qquad x \ge 0, \tag{1}$$
where $\varphi(x) = \frac{1}{\sqrt{2\pi}}\,e^{-x^2/2}$ denotes the standard normal density.
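As a quick numerical illustration (my own addition, not part of the argument), the bound (1) can be checked against the exact tail probability using only the Python standard library, via $Q(x) = \tfrac{1}{2}\operatorname{erfc}(x/\sqrt{2})$:

```python
import math

def Q(x):
    # Exact standard normal tail probability: Q(x) = erfc(x / sqrt(2)) / 2.
    return 0.5 * math.erfc(x / math.sqrt(2))

def phi(x):
    # Standard normal density.
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def lower_bound(x):
    # The bound (1): Q(x) >= phi(x) * (sqrt(x^2 + 4) - x) / 2 for x >= 0.
    return phi(x) * (math.sqrt(x * x + 4) - x) / 2

for x in [0.0, 0.5, 1.0, 2.0, 5.0]:
    assert lower_bound(x) <= Q(x)
    print(f"x = {x}: lower bound {lower_bound(x):.6g} <= Q(x) = {Q(x):.6g}")
```

The bound is remarkably tight for large $x$, where both sides decay like $\varphi(x)/x$.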
We give a proof of this result based on [1]. Interestingly, it involves Jensen's inequality: recall that this states that if $\psi$ is a convex function and $\mathbb{E}$ denotes expectation with respect to some probability distribution, then $\mathbb{E}[\psi(Z)] \ge \psi(\mathbb{E}[Z])$ for any random variable $Z$. Applied to the convex function $\psi(t) = t^2$ this becomes
$$\mathbb{E}[Z^2] \ge \left(\mathbb{E}[Z]\right)^2. \tag{2}$$
For fixed $x \ge 0$ we now let $Z$ have the distribution of a standard normal conditioned to exceed $x$, i.e. the distribution with density
$$f(t) = \frac{\varphi(t)}{Q(x)} \quad \text{for } t \ge x, \qquad f(t) = 0 \text{ otherwise.} \tag{3}$$
Applying this in (2) gives
$$\frac{1}{Q(x)} \int_x^{\infty} t^2\,\varphi(t)\,dt \ge \left(\frac{1}{Q(x)} \int_x^{\infty} t\,\varphi(t)\,dt\right)^2. \tag{4}$$
Evaluating these terms, using $\varphi'(t) = -t\,\varphi(t)$ and, for the second integral, integration by parts,
$$\int_x^{\infty} t\,\varphi(t)\,dt = \varphi(x), \qquad \int_x^{\infty} t^2\,\varphi(t)\,dt = x\,\varphi(x) + Q(x),$$
so (4) becomes
$$\frac{x\,\varphi(x) + Q(x)}{Q(x)} \ge \frac{\varphi(x)^2}{Q(x)^2},$$
or, multiplying through by $Q(x)^2$,
$$Q(x)^2 + x\,\varphi(x)\,Q(x) - \varphi(x)^2 \ge 0.$$
This is a quadratic inequality in $\mu = Q(x)$, being of the form
$$\mu^2 + a\mu + b \ge 0,$$
where $a = x\,\varphi(x)$ and $b = -\varphi(x)^2$.
This is equivalent to
$$\left(\mu + \frac{a}{2}\right)^2 \ge \frac{a^2}{4} - b,$$
or
$$\mu \ge \sqrt{\frac{a^2}{4} - b} - \frac{a}{2},$$
since the quantities involved are non-negative (here $\mu = Q(x) \ge 0$ and $a \ge 0$, so only the larger root matters). In other words we have
$$Q(x) \ge \sqrt{\frac{x^2\,\varphi(x)^2}{4} + \varphi(x)^2} - \frac{x\,\varphi(x)}{2} = \frac{\sqrt{x^2+4} - x}{2}\,\varphi(x),$$
which is equivalent to (1), as desired.
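As a sanity check on the two computational steps above (not part of the original proof), one can verify the moment identities and the quadratic inequality $Q(x)^2 + x\,\varphi(x)\,Q(x) - \varphi(x)^2 \ge 0$ numerically; the midpoint-rule integrator and the cutoff at $t = 12$ are ad-hoc choices of mine:

```python
import math

def phi(t):
    # Standard normal density.
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def Q(x):
    # Exact tail probability via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2))

def integrate(f, a, b, n=100000):
    # Plain composite midpoint rule; the tail beyond b = 12 is negligible here.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

x = 1.0
# Moment identities used to evaluate (4):
assert abs(integrate(lambda t: t * phi(t), x, 12.0) - phi(x)) < 1e-8
assert abs(integrate(lambda t: t * t * phi(t), x, 12.0) - (x * phi(x) + Q(x))) < 1e-8

# The resulting quadratic inequality in Q(x):
for x in [0.0, 0.5, 1.0, 3.0]:
    assert Q(x) ** 2 + x * phi(x) * Q(x) - phi(x) ** 2 >= 0
```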
Another classical pair of lower and upper bounds for the tail probability of the normal distribution is
$$\frac{x}{1+x^2}\,\varphi(x) \le Q(x) \le \frac{\varphi(x)}{x}, \qquad x > 0,$$
a proof of which can be found, e.g., in [2].
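These classical bounds $\frac{x}{1+x^2}\varphi(x) \le Q(x) \le \varphi(x)/x$ are also easy to verify numerically (again a small stdlib-only sketch of my own; note the upper bound only applies for $x > 0$):

```python
import math

def phi(x):
    # Standard normal density.
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Q(x):
    # Exact standard normal tail probability.
    return 0.5 * math.erfc(x / math.sqrt(2))

for x in [0.25, 1.0, 2.0, 4.0]:
    # x/(1+x^2) * phi(x) <= Q(x) <= phi(x)/x for x > 0.
    assert x / (1 + x * x) * phi(x) <= Q(x) <= phi(x) / x
```

Both sides behave like $\varphi(x)/x$ as $x \to \infty$, so this pair is also asymptotically tight, though the upper bound blows up as $x \to 0$.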
The bound we have looked at can be combined with the following upper bound,
$$Q(x) \le \frac{2\,\varphi(x)}{x + \sqrt{x^2 + 8/\pi}}, \qquad x \ge 0,$$
to arrive at the tighter two-sided bounds
$$\frac{2\,\varphi(x)}{x + \sqrt{x^2 + 4}} \le Q(x) \le \frac{2\,\varphi(x)}{x + \sqrt{x^2 + 8/\pi}}$$
(note that $\frac{\sqrt{x^2+4}-x}{2} = \frac{2}{x+\sqrt{x^2+4}}$, so the lower bound is just (1) rationalized). See [2] and [3] for a derivation of the upper bound as well as similar bounds.
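A small numerical check of the combined two-sided bounds $\frac{2\varphi(x)}{x+\sqrt{x^2+4}} \le Q(x) \le \frac{2\varphi(x)}{x+\sqrt{x^2+8/\pi}}$ (the helper names `lower` and `upper` are mine):

```python
import math

def phi(x):
    # Standard normal density.
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Q(x):
    # Exact standard normal tail probability.
    return 0.5 * math.erfc(x / math.sqrt(2))

def lower(x):
    # 2*phi(x) / (x + sqrt(x^2 + 4)): the bound (1) after rationalizing.
    return 2 * phi(x) / (x + math.sqrt(x * x + 4))

def upper(x):
    # Companion upper bound with constant 8/pi in place of 4.
    return 2 * phi(x) / (x + math.sqrt(x * x + 8 / math.pi))

for x in [0.5, 1.0, 2.0, 5.0]:
    assert lower(x) <= Q(x) <= upper(x)

# At x = 0 the upper bound is attained: both sides equal 1/2.
assert abs(upper(0.0) - 0.5) < 1e-12
```

The constant $8/\pi$ makes the upper bound exact at $x = 0$, which is why it cannot be improved within this family.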
[1] Z. W. Birnbaum, An Inequality for Mill's Ratio, Ann. Math. Statist., Vol. 13, No. 2 (1942), 245-246.
[2] J. D. Cook, Upper and lower bounds for the normal distribution function, 2009.
[3] L. Duembgen, Bounding Standard Gaussian Tail Probabilities, University of Bern Technical Report 76, 2010.