Chaitanya's Random Pages

July 8, 2020

The largest parallelogram in a triangle

Filed under: mathematics — ckrao @ 11:59 pm

In this post we find the largest parallelogram, rhombus, rectangle and square that can be contained in a given triangle. We will see that in the first three cases we can achieve half the area of the triangle but no more, while it is generally less than this for a square.

Inscribing a parallelogram, rhombus, rectangle and square of maximum area in a triangle

1. The largest parallelogram inside a triangle

It can be readily seen that one can obtain a parallelogram having half the area of a triangle by connecting a vertex with the three midpoints of the sides. (This has half the base length of the triangle and half its height.)

Is it possible to obtain a larger parallelogram? As outlined in [1], if two or fewer vertices of the parallelogram are on sides of the triangle, a smaller similar triangle can be created by drawing a line parallel to the triangle’s side through a vertex of the parallelogram that is interior to the triangle. (This is done three times in the figure below.) This reduces the problem to the next case.

A smaller similar triangle containing the parallelogram can be created by lines through its vertices parallel to the sides of the triangle

We are left to consider the case where three or more vertices of the parallelogram are on the triangle. We can draw a line from a vertex to the opposite side parallel to a pair of sides (in the figure below AH is drawn parallel to DG), thus dissecting the triangle into two. Each of the smaller triangles then has an inscribed parallelogram where two of the vertices are on a side of each triangle. Then by drawing lines parallel to sides if required, we create two sub-problems each having four vertices of the parallelogram on the sides of the triangle.

Dissecting a triangle so that each smaller triangle has a parallelogram with two vertices on a side. The right parallelogram is contained in a larger one HIEJ formed by lines parallel to the sides.

Finally, if all four vertices of the parallelogram are on sides of the triangle, by the pigeonhole principle, two of them are on the same side (say BC as shown in the figure below). In this figure, if we let AD/AB = k and the height of \triangle ABC from BC be h_a, then by the similarity of triangles ADE and ABC, DE = k \cdot BC and \triangle ADE has height kh_a. Then the area of the parallelogram DEFG is DE \times (1-k)h_a = k(1-k)\, BC \cdot h_a, which is 2k(1-k) times the area of \triangle ABC. This quantity has maximum value 1/2 when k=1/2, so we conclude that the parallelogram’s area does not exceed half the triangle’s area.

A parallelogram with all four vertices on sides of a triangle must have two of them on one side.
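The optimisation at the end of the argument is easy to confirm numerically. Here is a minimal sketch (my own addition, not part of the original argument) sweeping k over a fine grid and locating the maximum of the area ratio 2k(1-k):

```python
# Numerically confirm that the parallelogram-to-triangle area ratio
# 2k(1-k) is maximised at k = 1/2, with maximum value 1/2.
ratios = [(k / 1000, 2 * (k / 1000) * (1 - k / 1000)) for k in range(1001)]
best_k, best_ratio = max(ratios, key=lambda t: t[1])

print(best_k, best_ratio)  # 0.5 0.5
```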

2. The largest rhombus in a triangle

Constraining sides of the parallelogram to be equal (forming a rhombus), we claim that the largest rhombus that can be inscribed in a triangle is also half its area. This can be formed with two of the vertices on the second longest side of the triangle. Suppose a \geq b \geq c are the sides of the triangle with b=AC the second longest side length. Then let the segment MN joining the midpoints of AB and BC form one side of the rhombus of length b/2. It remains to be shown that there exist parallel segments of this same length from M, N to AC. The longest possible such segment has length NC = a/2 and the shortest has length h_b/2, half the length of the altitude of \triangle ABC from B. We wish to show that h_b/2 \leq b/2 \leq a/2. This follows from

\displaystyle h_b = c \sin A \leq c \leq b \leq a.

Inscribing a rhombus of maximal area in a triangle

The area of this rhombus is clearly half the area of the triangle as it has half the length of its base and half the height.

3. The largest rectangle in a triangle

Here if BC is the longest side of the triangle we form the rectangle from midpoints D, E of AB and AC respectively, dropping perpendiculars onto BC forming rectangle DEFG:

The largest rectangle inscribed in a triangle, where D and E are midpoints

The area of this rectangle is half the area of the triangle as it has half the length of its base and half the height.

Interestingly the reflections of the vertices of the triangle in the sides of the rectangle coincide, showing a paper folding interpretation of this result [2].

Reflecting A, B and C in DE, DG and EF respectively yields the same image, yielding a paper folding interpretation

Since rhombuses and rectangles are special cases of parallelograms and we found that inscribed parallelograms in a triangle occupy no more than half its area, the rhombus and rectangle constructions here are optimal.

4. The largest square in a triangle

Here we shall see that the best we can do may not be half the area of the triangle. As before, if two or fewer vertices of the square are on the sides of the triangle, it is possible to scale up the square (or scale down the triangle) so that three of the square’s vertices are on the sides. We claim that the largest square must have two of its vertices on a side of the triangle. Suppose this is not the case and we have the figure below.

Squares with a vertex on each side of \triangle ABC pivot about the point P

Consider squares LMNO inscribed in \triangle ABC so that one vertex L is on AB, M is on BC and O is on CA. We claim that the largest such square is either DEFG (two vertices on BC) or HIJK (two vertices on AC). Suppose on the contrary that neither of these squares is the largest. Then we make use of the fact that all 90-45-45 triangles LMO inscribed in \triangle ABC have a common pivot point P. This is the point at the intersection of the circumcircles of triangles OAL, LBM and MCO. To show these circles intersect at a single point, we can prove that if the circumcircles of triangles OAL and LBM intersect at P then the points O, P, M, C are concyclic by the following equality:

\displaystyle \begin{aligned} \angle MPO &= 360^{\circ} - \angle LPM - \angle OPL\\ &= (180^{\circ} - \angle LPM) + (180^{\circ} - \angle OPL)\\ &= \angle B + \angle A \\ &= 180^{\circ} - \angle C,\end{aligned}

where the second last equality makes use of quadrilaterals BMPL, ALPO being cyclic.

Additionally we have

\displaystyle \begin{aligned}\angle BPC &= \angle BPM + \angle MPC\\ &= \angle BLM + \angle MOC\\ &= (\angle LAM + \angle AML) + (\angle MAO + \angle OMA)\\&=(\angle LAM + \angle MAO ) + (\angle AML + \angle OMA )\\&= \angle BAC + \angle OML\\&=\angle A + 45^{\circ}, \end{aligned}

with similar relations for \angle APB and \angle CPA. Hence P is the unique point satisfying

\displaystyle \begin{aligned}\angle APB &= \angle ACB + 90^{\circ}\\\angle BPC &= \angle BAC + 45^{\circ}\\\angle CPA &= \angle CBA +45^{\circ}.\end{aligned}

(Each equation defines a circular arc; the first two arcs meet in a single point, which by the angle sum must also lie on the third. Note that P may be outside triangle ABC.) This point is the centre of spiral similarity of the 90-45-45 triangles LMO with L, M, O respectively on the sides AB, BC, CA of the triangle. Consider the locus of the points of the square as L, M, O vary on straight line segments pivoting about P. It follows that the fourth vertex N of the square also traces a line segment, between the points F and J, so as to be contained within the triangle.

As the side length of the square is proportional to the distance from a vertex to the pivot point, the largest square will be the one for which NP is maximised. We have seen that the point N varies along a line segment, so NP will be maximised at one of the extreme points – either when N=F or N=J. We therefore conclude that the largest square inside a triangle will have two vertices on a side.

If the triangle is acute-angled, by calculating double the area of the triangle in two ways, the side length x of a square on the side of length a with altitude h_a is derived as

\displaystyle \begin{aligned}ah_a &= 2x^2 + (a-x)x + x(h_a-x)\\&=2x^2 + ax - x^2 + xh_a - x^2\\&= ax + xh_a\\ \Rightarrow \quad x &= \frac{ah_a}{a + h_a}.\end{aligned}

If the triangle is obtuse-angled, the square erected on a side may not touch both of the other two sides. In the figure below the side length of square BDEF is the same as if A were moved to G, where \triangle GBC is right-angled. In this case the square’s side length is BC \cdot BG/(BC + BG).

The square erected on side AC when \angle ABC is obtuse

The largest square erected on a side may be constructed using the following beautiful construction [2]: simply erect a square CBDE external to the side BC and find the intersection points F = AD \cap BC, G = AE \cap BC.

These points define the base of the square to be inscribed since by similar triangles

\displaystyle \frac{IF}{BD} = \frac{AF}{AD} = \frac{FG}{DE} = \frac{h_a}{a + h_a}

so that

\displaystyle IF = FG = \frac{ah_a}{a + h_a}.

One can use the interactive demo of [3] to view the largest square in any given triangle. One needs to compare three candidates: the largest square erected on each of the three sides. In the acute-angled case, the largest square sits on the side that minimises the sum of that side length and its corresponding perpendicular height – as their product is fixed at twice the triangle’s area, this occurs when the side and height have minimal difference. For a right-angled triangle with legs a, b and hypotenuse c, we wish to compare the quantities (a+b) and (c+h), the two possible sums of the base and height of the triangle. We always have a + b - c = 2(s-c) = 2r < h because the diameter of the incircle of the triangle is shorter than the altitude from the hypotenuse (i.e. the incircle is inside the triangle). We conclude that the largest square in a right-angled triangle is constructed in the corner between its two legs rather than on its hypotenuse.
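To make the comparison concrete, here is a small sketch (my own addition) that evaluates the inscribed-square side x = a h_a/(a + h_a) on each side of the 3-4-5 right triangle and confirms that the square in the corner between the legs beats the one on the hypotenuse:

```python
import math

def inscribed_square_sides(a, b, c):
    """Side length base*h/(base + h) of the square erected on each side of a
    triangle with sides a, b, c (valid when the square reaches both other
    sides, e.g. for acute or right triangles)."""
    s = (a + b + c) / 2
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))  # Heron's formula
    return {base: base * (2 * area / base) / (base + 2 * area / base)
            for base in (a, b, c)}

# Right triangle with legs 3, 4 and hypotenuse 5:
# squares on the legs have side 3*4/(3+4) = 12/7, on the hypotenuse 60/37.
x = inscribed_square_sides(3, 4, 5)
print(x[3], x[4], x[5])
```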


[1] I. Niven, Maxima and Minima without Calculus, The Mathematical Association of America, 1981.

[2] M. Gardner, Some Surprising Theorems About Rectangles in Triangles, Math Horizons, Vol. 5, No. 1 (September 1997), pp. 18-22.

[3] Jaime Rangel-Mondragon “Largest Square inside a Triangle” Wolfram Demonstrations Project Published: March 7 2011

December 7, 2019

Coordinates of special points of the 3-4-5 triangle

Filed under: mathematics — ckrao @ 3:40 am

One thing I observed is that the 3-4-5 triangle is rather attractive for solving problems using coordinates. If the vertices are placed at A = (0,3), B = (0,0) and C = (4,0), the following are the coordinates of points and equations of some lines of interest.

Line AC: x/4 + y/3 = 1

Incentre: (1, 1)

Centroid: (4/3, 1)

Circumcentre: (2, 1.5)

Orthocentre: (0, 0)

Nine-point centre: (1, 3/4) (midpoint of the midpoints of AB and BC)

Angle bisectors: y = x, y=-2x +3, y=4/3-x/3

Ex-centres (intersection of internal and external bisectors): (3,- 3), (6, 6), (-2, 2)


Lines joining the excentres (in red above): y=-x, y=x/2 +3, y = 3(x-4)

Altitude to the hypotenuse: y = 4x/3

Euler line: y=3x/4

Foot of altitude to the hypotenuse: (36/25, 48/25) (where x/4 + y/3 = 1 intersects y=4x/3)

Symmedian point (midpoint of the altitude to the hypotenuse [1]): (18/25, 24/25)

Contact points of incircle and triangle: (1,0), (0,1), (8/5, 9/5)

Gergonne point (intersection of Cevians that pass through the contact points of the incircle and triangle = the intersection of y=3-3x and y=1-x/4): (8/11, 9/11)

Nagel point (intersection of Cevians that pass through the contact points of the ex-circles and triangle = the intersection of y=3-x and y=2-x/2): (2,1)
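Several of these values can be verified with exact rational arithmetic. The sketch below (my own check, using standard formulas such as the side-length weights for the incentre) confirms a few of the points listed above:

```python
from fractions import Fraction as F

A, B, C = (F(0), F(3)), (F(0), F(0)), (F(4), F(0))
a, b, c = F(4), F(5), F(3)  # side lengths opposite A, B, C

# Incentre as the weighted average (a*A + b*B + c*C)/(a + b + c)
incentre = tuple((a * A[i] + b * B[i] + c * C[i]) / (a + b + c) for i in range(2))

# Centroid (A + B + C)/3
centroid = tuple((A[i] + B[i] + C[i]) / 3 for i in range(2))

# Circumcentre of a right triangle: midpoint of the hypotenuse AC
circumcentre = tuple((A[i] + C[i]) / 2 for i in range(2))

# Foot of the altitude from B: intersection of x/4 + y/3 = 1 and y = 4x/3
x0 = F(36, 25)
foot = (x0, 4 * x0 / 3)

print(incentre, centroid, circumcentre, foot)
```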


[1] Weisstein, Eric W. “Symmedian Point.” From MathWorld–A Wolfram Web Resource.

April 1, 2018

A collection of binary grid counting problems

Filed under: mathematics — ckrao @ 3:52 am

The number of ways of colouring an m by n grid one of two colours without restriction is 2^{mn}. The following examples show what happens when varying restrictions are placed on the colouring.

Example 1: The number of ways of colouring an m by n grid black or white so that there is an even number of black squares in each row and column is

\displaystyle 2^{(m-1)(n-1)}.

Proof: The first m-1 rows and n-1 columns may be coloured arbitrarily. This then uniquely determines how the bottom row and rightmost column are coloured (restoring even parity). The bottom right square will be black if and only if the number of black squares in the first m-1 rows and n-1 columns is odd, hence it too is uniquely determined. Details are also given here.
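For small grids the count is easy to confirm by brute force. The sketch below (my own check, not part of the original argument) enumerates all colourings and tests the parity condition:

```python
from itertools import product

def count_even_parity(m, n):
    """Count m-by-n black/white grids (1 = black) with an even number of
    black squares in every row and every column."""
    count = 0
    for cells in product((0, 1), repeat=m * n):
        grid = [cells[i * n:(i + 1) * n] for i in range(m)]
        if all(sum(row) % 2 == 0 for row in grid) and \
           all(sum(grid[i][j] for i in range(m)) % 2 == 0 for j in range(n)):
            count += 1
    return count

# Matches 2^((m-1)(n-1)) for small cases
print(count_even_parity(3, 3), 2 ** ((3 - 1) * (3 - 1)))  # 16 16
```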

Example 2: The number of ways of colouring an m by n grid black or white so that every 2 by 2 square has an odd number (1 or 3) of black squares is

\displaystyle 2^{m+n-1}.

Proof: First colour the first row and first column arbitrarily (there are m+n-1 such squares, each with 2 possibilities). This uniquely determines how the rest of the grid must be coloured, by considering the colouring of the adjacent squares above and to the left.

By the same argument, the above is also the number of ways of colouring an m by n grid black or white so that every 2 by 2 square has an even number (0, 2 or 4) of black squares.

Example 3: The number of ways of colouring an m by n grid black or white so that every 2 by 2 square has two of each type is

\displaystyle 2^m + 2^n - 2.

Proof: Suppose some square has the same colour as the square below it. Since every 2 by 2 square must contain two of each colour, the remaining squares of those two rows are then uniquely determined: the two rows are identical and their colours alternate along their length. In this case no two horizontally adjacent squares anywhere in the grid can share a colour, and the remainder of the grid is determined by the colouring of the first column (2^m - 2 possibilities, omitting the two alternating colourings of the first column). By a similar argument, a colouring containing two horizontally adjacent squares of the same colour can be chosen in 2^n-2 ways. Finally we have the two additional configurations with no adjacent squares of the same colour (chessboard colourings), each uniquely determined by the colour of the top left square. Hence in total we have (2^m-2) + (2^n-2) + 2 = 2^m + 2^n - 2 possible colourings.

This question for m = n = 8 was in the 2017 Australian Mathematics Competition and the general solution is also discussed here.
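Again, a brute-force enumeration for small grids (my own sketch) confirms the count 2^m + 2^n - 2:

```python
from itertools import product

def count_balanced_2x2(m, n):
    """Count m-by-n grids in which every 2-by-2 square contains exactly
    two black and two white cells."""
    count = 0
    for cells in product((0, 1), repeat=m * n):
        g = [cells[i * n:(i + 1) * n] for i in range(m)]
        ok = all(g[i][j] + g[i][j + 1] + g[i + 1][j] + g[i + 1][j + 1] == 2
                 for i in range(m - 1) for j in range(n - 1))
        count += ok
    return count

print(count_balanced_2x2(3, 3))  # 14 = 2^3 + 2^3 - 2
```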

Example 4: The number of ways of colouring an m by n grid black or white so that each row and each column contain at least one black square is (OEIS A183109)

\displaystyle \sum_{j=0}^m (-1)^j \binom{m}{j} (2^{m-j}-1)^n.

Proof: Here regard the grid as having m columns and n rows. First we count the colourings in which a fixed subset of j columns is entirely white and each row has at least one black square: each of the n rows may be coloured within the remaining m-j columns in 2^{m-j}-1 nonzero ways, giving (2^{m-j}-1)^n colourings. To count colourings where each column also has at least one black square we apply the principle of inclusion-exclusion over the set of white columns and arrive at the above result.
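The inclusion-exclusion formula can be checked against direct enumeration for small grids (a sketch I have added, treating the grid as having m columns and n rows):

```python
from itertools import product
from math import comb

def by_formula(m, n):
    """Inclusion-exclusion count of grids (m columns, n rows) in which
    every row and every column contains a black square."""
    return sum((-1) ** j * comb(m, j) * (2 ** (m - j) - 1) ** n
               for j in range(m + 1))

def by_brute_force(m, n):
    count = 0
    for cells in product((0, 1), repeat=m * n):
        g = [cells[i * m:(i + 1) * m] for i in range(n)]  # n rows of m cells
        if all(any(row) for row in g) and \
           all(any(g[i][j] for i in range(n)) for j in range(m)):
            count += 1
    return count

print(by_formula(3, 3), by_brute_force(3, 3))  # 265 265
```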

Another inclusion-exclusion example shown here counts the number of 3 by 3 black/white grids in which there is no 2 by 2 black square. The answer is 417 with more terms for n by n grids in OEIS A139810.

Example 5: Suppose we wish to count the number of colourings of an m by n grid in which row i has k_i black squares and column j has l_j black squares (i = 1, 2, \ldots m, j = 1, 2, \ldots, n). Following [1], the number of ways this can be done is the coefficient of x_1^{k_1}x_2^{k_2} \ldots x_m^{k_m}y_1^{l_1}y_2^{l_2}\ldots y_n^{l_n} in the polynomial

\displaystyle \prod_{i=1}^m \prod_{j=1}^n (1 + x_i y_j).

To see this, note that expanding the product gives products of terms of the form (x_i y_j), where the inclusion of such a term corresponds to the square in the i-th row and j-th column being coloured black. Hence the coefficient of x_1^{k_1}x_2^{k_2} \ldots x_m^{k_m}y_1^{l_1}y_2^{l_2}\ldots y_n^{l_n} is the number of 0-1 arrays (a_{ij}) satisfying \sum_{j=1}^n a_{ij} = k_i, \sum_{i=1}^m a_{ij} = l_j (i = 1, 2, \ldots, m, j = 1, 2, \ldots, n), where a_{ij} equals 1 precisely when the square in row i and column j is coloured black.

Let us evaluate this in the special case of 2 black squares in every row and every column of an n by n grid (i.e. k_i = l_j = 2 and m = n). Viewing the expansion as a polynomial in y_1, \ldots, y_n, the coefficient of y_1^2y_2^2\ldots y_n^2 is a sum of products of n terms of the form x_ix_j, one factor per column corresponding to choosing rows i and j of that column to colour black. Then using [\cdot] notation to denote the coefficient of an expression, we have

\begin{aligned} \left[x_1^2x_2^2 \ldots x_n^2y_1^2y_2^2\ldots y_n^2 \right]  \prod_{i=1}^n \prod_{j=1}^n (1 + x_i y_j) &= \left[x_1^2x_2^2 \ldots x_n^2 \right] \left( \sum_{i=1}^n\sum_{j=i+1}^n x_i x_j \right)^n\\&= \left[x_1^2x_2^2 \ldots x_n^2 \right] 2^{-n} \left( \left( \sum_{i=1}^n x_i\right)^2 - \sum_{i=1}^n x_i^2 \right)^n\\ &= \left[x_1^2x_2^2 \ldots x_n^2 \right] 2^{-n} \sum_{k=0}^n (-1)^k \binom{n}{k} \left( \sum_{i=1}^n x_i^2 \right)^k\left(\sum_{i=1}^n x_i\right)^{2(n-k)}\\ &=  2^{-n}  \sum_{k=0}^n (-1)^k \binom{n}{k} \frac{n!}{(n-k)!} \frac{(2n-2k)!}{2^{n-k}}\\ &= 4^{-n}  \sum_{k=0}^n (-1)^k k! \binom{n}{k}^2 2^k  (2n-2k)!. \end{aligned}

Here the second last line follows from considering the number of ways that products of k terms of the form x_i^2 arise in the product \left( \sum_{i=1}^n x_i^2 \right)^k (which is \frac{n!}{(n-k)!}) and products of (n-k) terms of the form x_i^2 can be formed in the product \left(\sum_{i=1}^n x_i\right)^{2(n-k)} (which is \frac{(2n-2k)!}{2^{n-k}}).

For example, when n=4 this is equivalent to finding the coefficient of a^2b^2c^2d^2 in (ab + ac + ad + bc + bd + cd)^4. The four chosen pairs either consist of two complementary pairs each used twice, such as ab \cdot ab \cdot cd \cdot cd (3 \times \binom{4}{2} = 18 ways), or are four distinct pairs forming a 4-cycle, such as ab \cdot bc \cdot cd \cdot da (3 \times 4! = 72 ways). This gives us a total of 90 (this question appeared in the 1992 Australian Mathematics Competition). More terms of the sequence are found in OEIS A001499 and the 6 by 4 case (colouring two shaded squares in each row and three in each column in 1860 ways) appeared in the 2007 AIME I (see Solution 7 here).
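The value 90 (and the smaller case 6 for n = 3) can be confirmed by brute force over the \binom{n}{2}^n ways of choosing two cells in each row (my own sketch):

```python
from itertools import combinations, product

def count_two_per_line(n):
    """Count n-by-n 0-1 grids with exactly two 1s in each row and column."""
    row_choices = list(combinations(range(n), 2))
    count = 0
    for rows in product(row_choices, repeat=n):
        col_sums = [0] * n
        for pair in rows:
            for j in pair:
                col_sums[j] += 1
        count += all(s == 2 for s in col_sums)
    return count

print(count_two_per_line(4))  # 90, as in OEIS A001499
```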

Example 6: If we wish to count the number of grid configurations in which reflections or rotations are considered equivalent, we may make use of Burnside’s lemma: the number of orbits of a group action is the average number of points fixed by an element of the group. For example, to find the number of configurations of 4 by 4 grids up to rotational symmetry, we consider the cyclic group C_4. For each quarter turn there are 2^4 configurations fixed (a quadrant determines the colouring of the remainder of the grid) while for the half turn there are 2^8 configurations fixed, as one half determines the colouring of the other half. This gives us an answer of

\displaystyle \frac{2^{16} + 2 \cdot 2^4 + 2^8}{4} = 16456,

which is part of OEIS A047937. If reflections are also considered equivalent we need to consider the dihedral group D_4 and we arrive at the sequence in OEIS A054247.
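Burnside’s count can be double-checked by explicitly grouping all 2^{16} grids into rotation orbits (my own sketch):

```python
from itertools import product

def rotate(g, n):
    """Quarter-turn rotation of an n-by-n grid given as a tuple of rows."""
    return tuple(tuple(g[n - 1 - j][i] for j in range(n)) for i in range(n))

def orbit_count(n):
    """Number of n-by-n black/white grids up to rotation."""
    seen, orbits = set(), 0
    for cells in product((0, 1), repeat=n * n):
        g = tuple(tuple(cells[i * n:(i + 1) * n]) for i in range(n))
        if g in seen:
            continue
        orbits += 1
        for _ in range(4):  # mark the whole rotation orbit as seen
            seen.add(g)
            g = rotate(g, n)
    return orbits

print(orbit_count(4))  # 16456
```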

If we want to count the number of 3 by 3 grids with four black squares up to equivalence, this is equivalent to the number of full noughts and crosses configurations. A nice video by James Grime explaining this is here (the answer is 23).

Example 7: The number of ways of tiling an m by n grid with 2 by 1 dominoes has the amazing closed form

\displaystyle \prod_{j=1}^{\lceil m/2 \rceil} \prod_{k=1}^{\lceil n/2 \rceil} \left(4 \cos^2 \frac{\pi j}{m+1} + 4 \cos^2 \frac{\pi k}{n+1}\right).

For example, the 36 ways of tiling a 4 by 4 grid are given here. A proof of the above formula using the Pfaffian of the adjacency matrix of the corresponding grid graph is given in chapter 10 of [2].
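As a sanity check (my own sketch), the product of the 4\cos^2 terms over 1 \leq j \leq \lceil m/2 \rceil, 1 \leq k \leq \lceil n/2 \rceil can be compared with a direct recursive count of domino tilings for small grids:

```python
import math

def tilings_formula(m, n):
    """Product of (4cos^2(pi j/(m+1)) + 4cos^2(pi k/(n+1))) over
    1 <= j <= ceil(m/2), 1 <= k <= ceil(n/2)."""
    p = 1.0
    for j in range(1, math.ceil(m / 2) + 1):
        for k in range(1, math.ceil(n / 2) + 1):
            p *= (4 * math.cos(math.pi * j / (m + 1)) ** 2
                  + 4 * math.cos(math.pi * k / (n + 1)) ** 2)
    return round(p)

def tilings_brute(m, n):
    """Count domino tilings by always covering the first empty cell."""
    def go(occ):
        pos = 0
        while pos < m * n and (occ >> pos) & 1:
            pos += 1
        if pos == m * n:
            return 1
        r, c = divmod(pos, n)
        total = 0
        if c + 1 < n and not (occ >> (pos + 1)) & 1:  # horizontal domino
            total += go(occ | (1 << pos) | (1 << (pos + 1)))
        if r + 1 < m:                                 # vertical domino
            total += go(occ | (1 << pos) | (1 << (pos + n)))
        return total
    return go(0)

print(tilings_formula(4, 4), tilings_brute(4, 4))  # 36 36
```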



[1] L. Comtet, Advanced Combinatorics: The Art of Finite and Infinite Expansions (pp 235-6), D. Reidel Publishing Company, 1974.

[2] M. Aigner, A Course in Enumeration, Springer, 2007.

September 8, 2017

Notes on von Neumann’s algebra formulation of Quantum Mechanics

Filed under: mathematics,science — ckrao @ 9:49 pm

The Hilbert space formulation of (non-relativistic) quantum mechanics is one of the great achievements of mathematical physics. Typically in undergraduate physics courses it is introduced as a set of postulates (e.g. the Dirac-von Neumann axioms) that are hard to motivate without some knowledge of functional analysis or at least probability theory. Some of that motivation and the connection with probability theory is summarised in the notes here – in fact it can be said that quantum mechanics is essentially non-commutative probability theory [2]. Furthermore, an algebraic point of view seems to provide a unified picture of classical and quantum mechanics.

The important difference between classical and quantum mechanics is that in the latter, the order in which measurements are taken sometimes matters. This is because obtaining the value of one measurement can disturb the system of interest to the extent that a consistently precise value of the other cannot be found. A famous example is the position and momentum of a quantum particle – the Heisenberg uncertainty relation states that the product of their measurement uncertainties (standard deviations) is bounded below by the positive constant \hbar/2.

If measurements are treated as real-valued functions on the state space of the system, we will not be able to capture the fact that measurements need not commute. Since linear operators (e.g. matrices) do not commute in general, we use algebras of operators instead. We make use of the spectral theory of a special class of algebras equipped with a norm and an adjoint, known as von Neumann algebras, which are in turn a special case of C*-algebras. The spectrum of an operator A is the set of numbers \lambda for which (A-\lambda I) does not have an inverse. Self-adjoint operators have a real spectrum, which will represent the set of values that an observable (a physical variable that can be measured) can take. Hence we have this correspondence between self-adjoint operators and observables.

By the Gelfand-Naimark theorem C*-algebras can be represented as bounded operators on a Hilbert space {\cal H}. See Section II.6.4 of [3] for proof details. If the C*-algebra is commutative the representation is as continuous functions on a locally compact Hausdorff space that vanish at infinity. Furthermore we assume the C*-algebra and corresponding Hilbert space are separable, meaning the space contains a countable dense subset (analogous to how the rationals are dense in the reals). This ensures that the Stone-von Neumann theorem holds, which was used to show that the Heisenberg and Schrödinger pictures of quantum physics are equivalent [see pp7-8 here].

The link between C*-algebras and Hilbert spaces is made via the notion of a state, which is a positive linear functional of norm 1 on the algebra. A state evaluated on a self-adjoint operator outputs a real number that will represent the expected value of the observable corresponding to that operator. Note that it is impossible for two different states to have the same expected values on all observables. A state \omega is called pure if it is an extreme point of the (convex) space of states. In other words, we cannot write a pure state \omega as \omega = \lambda \omega_1 + (1-\lambda) \omega_2 where \omega_1 \neq \omega_2 are states and 0 < \lambda < 1. A state that is not pure is called mixed.

Now referring to a Hilbert space {\cal H}, for any mapping \Phi of bounded operators B({\cal H}) to expectation values such that

  1. \Phi(I) = 1 (it makes sense that the identity should have expectation value 1),
  2. self-adjoint operators are mapped to real numbers with positive operators (those with positive spectrum) mapped to positive numbers and
  3. \Phi is continuous with respect to the strong convergence in B({\cal H}) – i.e. if \lVert A_n \psi - A \psi \rVert \rightarrow 0 for all \psi \in H, then \Phi (A_n) \rightarrow \Phi (A),

then there is a unique self-adjoint non-negative trace-one operator \rho (known as a density matrix) such that \Phi (A) = \text{trace}(\rho A) for all A \in B({\cal H}) (see [1] Proposition 19.9). (The trace of an operator A is defined as \sum_k \langle e_k, Ae_k \rangle where \{e_k \} is an orthonormal basis of the separable Hilbert space – in the finite dimensional case it is the sum of the operator’s eigenvalues.) Hence states are represented by positive self-adjoint operators with trace 1. Such operators are compact and so have a countable orthonormal basis of eigenvectors.

When \rho corresponds to a projection operator onto a one-dimensional subspace it has the form \rho = vv^* where v \in {\cal H} and \lVert v \rVert = 1. In this case we can show \text{trace}(\rho A) = \langle v, Av \rangle = v^*Av, which recovers the alternative view that unit vectors of {\cal H} correspond to states (known as vector states) so that the expected value of an observable corresponding to the operator A is \langle v, Av \rangle. This is done by choosing the orthonormal basis \{e_k \} where e_1 = v and computing

\begin{aligned} \text{trace}(\rho A) &= \sum_k \langle e_k, vv^*Ae_k \rangle\\ &= \sum_k e_k^* v v^* Ae_k\\ &= e_1^* e_1 e_1^*Ae_1 \quad \text{ (as }e_k^*v = \langle e_k, v \rangle = 0\text{ for } k > 1\text{)}\\ &= e_1^*Ae_1\\ &= \langle v, Av \rangle. \end{aligned}
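This identity is easy to verify numerically. The sketch below (my own, using numpy) draws a random Hermitian operator and a random pure state on \mathbb{C}^4 and checks \text{trace}(\rho A) = \langle v, Av \rangle for \rho = vv^*:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Hermitian operator on C^4 and a random unit vector (pure state)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v = v / np.linalg.norm(v)

rho = np.outer(v, v.conj())   # density matrix of the vector state
lhs = np.trace(rho @ A)
rhs = v.conj() @ A @ v        # <v, Av>

print(np.allclose(lhs, rhs), np.isclose(np.trace(rho), 1.0))  # True True
```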

Positive trace-one operators \rho can be written as a convex combination of rank one projection operators: \rho = \sum_k \lambda_k v_k v_k^*. From this it can be shown that those density operators which cannot be written as a convex combination of other states (the pure states) are precisely those of the form \rho = vv^*. Hence vector states and pure states are equivalent notions. Mixed states can be interpreted as a probabilistic mixture (convex combination) of pure states.

Let us now look at the similarity with probability theory. A measure space is a triple (X, {\cal S}, \mu) where X is a set, {\cal S} is a collection of measurable subsets of X called a \sigma-algebra and \mu:{\cal S} \rightarrow \mathbb{R} \cup \infty is a \sigma-additive measure. If g is a non-negative integrable function with \int g \ d\mu = 1 it is called a density function and then we can define a probability measure p_g:{\cal S} \rightarrow [0,1] by

\displaystyle p_g(S) = \int_S  g\ d\mu \in [0,1], S \in {\cal S}.

A random variable f:X\rightarrow \mathbb{R} maps elements of a set to real numbers in such a way that f^{-1}(B) \in {\cal S} for any Borel subset B of \mathbb{R}. This enables us to compute its expectation with respect to the density function g as

\displaystyle \int_X f \ dp_g = \int_X fg\ d\mu.

This is like the quantum formula \text{Tr}(\rho A) with our density operator \rho playing the role of g and operator A playing the role of random variable f. Hence a probability density function is the commutative probability analogue of a quantum state (density operator).

While Borel sets are the events from which we define simple functions and then random variables, in the non-commutative case we define operators in terms of projections (equivalently closed subspaces) of a Hilbert space {\cal H}. A projection operator P is self-adjoint, satisfies P^2 = P and has the discrete spectrum \{0,1\}. Hence they are analogous to 0-1 indicator random variables, the answers to yes/no events. For any unit vector v \in {\cal H} the expected value

\displaystyle \langle v, Pv \rangle = \langle v, P^2v \rangle = \langle Pv, Pv \rangle = \lVert Pv \rVert^2

is interpreted as the probability the observable corresponding to P will have value 1 when measured in the state corresponding to v. In particular this probability will be 1 if and only if v is in the invariant subspace of P. We define meet and join operations \vee, \wedge on these closed subspaces to create a Hilbert lattice ({\cal P}({\cal H}), \vee, \wedge, \perp):

  • A \wedge B = A \cap B
  • A \vee B = \text{closure of } A + B
  • A^{\perp} = \{u: \langle u,v \rangle = 0\ \forall v \in A\}

Borel sets form a \sigma-algebra in which the distributive law A \cap (B \cup C) = (A \cap B) \cup (A \cap C) holds for any elements of {\cal S}. However in the Hilbert lattice the corresponding rule A \wedge (B \vee C) = (A \wedge B) \vee (A \wedge C) (where A, B, C are projection operators) only holds some of the time (see here for an example). This failure of the distributive law is equivalent to the general non-commutativity of projections.
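A minimal concrete instance (my own example, in \mathbb{R}^2 rather than a full Hilbert space) shows the failure: take A = span(e_1), B = span(e_2) and C = span(e_1 + e_2). Then B \vee C = \mathbb{R}^2 so A \wedge (B \vee C) = A, while A \wedge B = A \wedge C = \{0\}:

```python
import numpy as np

# Subspaces represented by columns of generators
A = np.array([[1.0], [0.0]])   # span(e1)
B = np.array([[0.0], [1.0]])   # span(e2)
C = np.array([[1.0], [1.0]])   # span(e1 + e2)

def join_dim(U, V):
    """dim(U v V): rank of the stacked generators."""
    return np.linalg.matrix_rank(np.hstack([U, V]))

def meet_dim(U, V):
    """dim(U ^ V) = dim U + dim V - dim(U v V) in finite dimensions."""
    return (np.linalg.matrix_rank(U) + np.linalg.matrix_rank(V)
            - join_dim(U, V))

lhs = meet_dim(A, np.hstack([B, C]))   # dim(A ^ (B v C)) = 1
rhs = meet_dim(A, B) + meet_dim(A, C)  # both meets are {0}, so their join is 0
print(lhs, rhs)  # 1 0
```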

A quantum probability measure \phi:{\cal P} \rightarrow [0,1] can be defined by combining projections in a \sigma-additive way, namely \phi(0) = 0, \phi(I) = 1 and \phi(\vee_i P_i) = \sum_i \phi(P_i) where P_i are mutually orthogonal projections (P_i \leq P_j^{\perp}, i \neq j). Gleason’s theorem says that for Hilbert space dimension at least 3 a state is uniquely determined by the values it takes on the orthogonal projections – a quantum probability measure can be extended from projections to bounded operators to obtain \phi(A) = \text{Tr}(\rho_{\phi} A), similar to how characteristic functions are extended to integrable functions. Hence this is a key result for non-commutative integration (note: the continuity conditions defining \Phi in 1-3 above are stronger). We choose von Neumann algebras over C*-algebras since the former contain all spectral projections of their self-adjoint elements while the latter may not [ref].

So far we have seen that expected values of observables A are derived via the formula \text{Tr}(\rho A). To derive the distribution itself, we make use of the spectral theorem, and for self-adjoint operators with continuous spectrum this requires projection-valued measures. A self-adjoint operator A has a corresponding function E_A mapping Borel subsets of \mathbb{R} to projections in {\cal P}({\cal H}), so that E_A(S) represents the event that the outcome of measuring observable A is in the set S: we require that E_A(\mathbb{R}) = I and that S \mapsto \langle u,E_A(S)v \rangle is a complex additive function (measure) for all u, v \in {\cal H}. We use E_A(\lambda) as shorthand for E_A(\{x:x\leq \lambda\}). Similar to the way a finite dimensional self-adjoint matrix M may be eigen-decomposed in terms of its eigenvalues \lambda_i and normalised eigenvectors u_i as

\begin{aligned} M &= \sum_i \lambda_i u_i u_i^T \\ &= \sum_i \lambda_i P_i \quad \text{(where }P_i := u_i u_i^T \text{ is a projection)}\\ &= \sum_i \lambda_i (E_i - E_{i-1}), \quad \text{(where } E_i := \sum_{k \leq i} P_k\text{),} \end{aligned}

the spectral theorem for more general self-adjoint operators allows us to write

A = \int_{\sigma(A)} \lambda dE_A(\lambda)

which means that for every u, v \in {\cal H},

\langle u, Av \rangle = \int_{\sigma(A)} \lambda d\langle u,E_A v \rangle.

Here, the integrals are over the spectrum of A. Through this formula we can work with functions of operators and in particular the distribution of the random variable X corresponding to operator A in state \rho will be

\text{Pr}(X \leq x) = E\left[ 1_{\{X \leq x\} }\right] = \text{Tr} \left( \rho\int_{-\infty}^x dE_A(\lambda) \right) = \text{Tr} \left( \rho E_A(x) \right).
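In finite dimensions E_A(x) is simply the sum of the eigenprojections with eigenvalue at most x, so the distribution function can be computed directly. The following sketch (my own, with an arbitrarily chosen observable and state) illustrates \text{Pr}(X \leq x) = \text{Tr}(\rho E_A(x)):

```python
import numpy as np

# Observable: a 3x3 Hermitian matrix with eigenvalues 1, 2, 2
A = np.diag([1.0, 2.0, 2.0])
# Mixed state: equal mixture of two pure states
rho = 0.5 * np.diag([1.0, 0.0, 0.0]) + 0.5 * np.diag([0.0, 0.0, 1.0])

evals, U = np.linalg.eigh(A)

def E(x):
    """Spectral projection E_A(x) onto eigenspaces with eigenvalue <= x."""
    idx = evals <= x
    return U[:, idx] @ U[:, idx].conj().T

def cdf(x):
    """Pr(X <= x) = Tr(rho E_A(x)) for the observable A in state rho."""
    return np.trace(rho @ E(x)).real

print(cdf(1.5), cdf(2.5))  # 0.5 1.0
```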

The similarities we have seen here between classical probability and quantum mechanics are summarised in the table below, largely taken from [2] which greatly aided my understanding. Note how the pairing between trace class and bounded operators is analogous to the duality of L^1 and L^{\infty} functions.

Classical Probability ↔ Quantum Mechanics (non-commutative probability)

  • (X,{\cal S}, \mu) – measure space ↔ ({\cal H}, {\cal P}({\cal H}), \text{Tr}) – Hilbert space model of QM
  • X – set ↔ {\cal H} – Hilbert space
  • {\cal S} – Boolean algebra of Borel subsets of X called events ↔ {\cal P}({\cal H}) – orthomodular lattice of projections (equivalently closed subspaces) of {\cal H}
  • disjoint events ↔ orthogonal projections
  • \mu:{\cal S} \rightarrow {\mathbb R}^{+} \cup \{\infty\} – \sigma-additive positive measure ↔ \text{Tr} – trace functional
  • g \in L^1(X,\mu), g \geq 0, \int g \ d\mu = 1 – integrable functions (probability density functions) ↔ \rho \in {\cal T}({\cal H}), \rho \geq 0, \text{Tr}(\rho) = 1 – trace class operators (density operators)
  • p_g(S) = \int \chi_S g\ d\mu \in [0,1], S \in {\cal S} – probability measure mapping Borel sets to numbers in [0,1] in a \sigma-additive way ↔ \phi(P) = \text{Tr}(\rho_{\phi } P) \in [0,1], \rho_{\phi } \in {\cal T}({\cal H}), P \in {\cal P}({\cal H}) – quantum state mapping projections to numbers in [0,1] in a \sigma-additive way
  • f \in L^{\infty}(X,\mu) – essentially bounded measurable functions (bounded random variables) ↔ A \in {\cal B}({\cal H}) – von Neumann algebra of bounded operators (bounded observables)
  • \int fg\ d\mu, g \in L^1(X,\mu) – expectation value of f \in L^{\infty}(X,\mu) with respect to p_g ↔ \text{Tr}(\rho A), \rho \in {\cal T}({\cal H}) – expectation value of A \in {\cal B}({\cal H}) in state \rho

In summary, the fact that measurements don’t always commute leads us to consider non-commutative operator algebras, and hence the Hilbert space representation of quantum mechanics, in which a quantum state is a trace-one density operator and an observable is a bounded (self-adjoint) linear operator. We also saw that projections can be viewed as 0-1 events. The spectral theorem is used to decompose operators into a sum or integral of projections.

The richer mathematical setting for quantum mechanics allows us to model non-classical phenomena such as quantum interference and entanglement. We have not mentioned the time evolution of states, but in short, state vectors evolve unitarily according to the Schrödinger equation, generated by an operator known as the Hamiltonian.

References and Further Reading

[1] Hall, B.C., Quantum Theory for Mathematicians, Springer, Graduate Texts in Mathematics #267, June 2013 (relevant section)

[2] Redei, M., Von Neumann’s work on Hilbert space quantum mechanics

[3] Blackadar, B., Operator Algebras: Theory of C*-Algebras and von Neumann Algebras

[4] Wilce, Alexander, “Quantum Logic and Probability Theory”, The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), Edward N. Zalta (ed.).

[5] Wikipedia – Quantum logic

[6] Lattice of Projections

[7] Spectral Measure

[8] quantum mechanics – Intuitive meaning of Hilbert Space formalism – Physics Stack Exchange

[9] This answer to: mathematical physics – Quantum mechanics in a metric space rather than in a vector space, possible? – Physics Stack Exchange

[10] functional analysis – Resolution of the identity (basic questions) – Mathematics Stack Exchange

March 19, 2017

Two similar geometry problems based on perpendiculars to cevians

Filed under: mathematics — ckrao @ 7:18 am

In this post I wanted to show a couple of similar problems that can be proved using some ideas from projective geometry.

The first problem I found via the Romantics of Geometry Facebook group: let M be the point of tangency of the incircle of \triangle ABC with BC and let E be the foot of the perpendicular from the incentre X of \triangle ABC to AM. Then show EM bisects \angle BEC.



The second problem is motivated by the above and problem 2 of the 2008 USAMO: this time let AM be a symmedian of ABC and E be the foot of the perpendicular from the circumcentre X of \triangle ABC to AM. Then show that EM bisects \angle BEC.


Here is a solution to the first problem inspired by that of Vaggelis Stamatiadis. Let the line through the other two points of tangency P, Q of the incircle with ABC intersect line BC at the point N as shown below. Note that since AP and AQ are tangents to the circle, line NPQ is the polar of A with respect to the incircle.


Since N is on the polar of A, by La Hire’s theorem, A is on the polar of N. The polar of N also passes through M (as NM is a tangent to the circle at M). We conclude that the polar of N is the line through A and M.

Next, let MN intersect PQ at R. By theorem 5(a) at this link, the points (N, R, P, Q) form a harmonic range. Since the cross ratio of collinear points does not change under central projection, considering the projection from A, (N,M,B,C) also form a harmonic range. (Alternatively, this follows from the theorems of Ceva and Menelaus using the cevians intersecting at the Gergonne point and the transversal NPQ.) Also, NE \perp EM: the polar AM of N is perpendicular to NX, and XE \perp AM by the construction of E, so N, X, E are collinear.

Considering a central projection from E of the line NMBC onto the line through M parallel to NE, the images (N', M, B', C') also form a harmonic range. Since N' is a point at infinity, this implies M is the midpoint of B'C', and so triangles B'EM and C'EM are congruent (equality of two pairs of sides and included angle of 90^{\circ}). Hence EM bisects \angle BEC, as was to be shown.
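The result can also be sanity-checked numerically. The sketch below uses hypothetical coordinates (my choice, not from the post) together with two standard facts: the incentre is the side-length-weighted average of the vertices, and the tangency point M satisfies BM = s - b where s is the semi-perimeter:

```python
import numpy as np

# Hypothetical triangle: check that EM bisects angle BEC.
B, C, A = np.array([0.0, 0.0]), np.array([7.0, 0.0]), np.array([1.0, 5.0])

a = np.linalg.norm(C - B)                   # side lengths opposite A, B, C
b = np.linalg.norm(A - C)
c = np.linalg.norm(B - A)
s = (a + b + c) / 2

X = (a * A + b * B + c * C) / (a + b + c)   # incentre
M = B + (s - b) / a * (C - B)               # incircle tangency point on BC (BM = s - b)

d = (M - A) / np.linalg.norm(M - A)         # unit direction of line AM
E = A + np.dot(X - A, d) * d                # foot of perpendicular from X to AM

def cos_angle(P, Q, R):
    """Cosine of the angle PQR at vertex Q."""
    u, v = P - Q, R - Q
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
```

Equality of cos_angle(B, E, M) and cos_angle(C, E, M) confirms the bisection for this triangle.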


For the second problem, we use the following characterisation of a symmedian: AM extended concurs with the lines of tangency of the circumcircle at B and C. (For three proofs of this see here.)


Define N as the intersection of XE with BC and D as the intersection of AM with the tangents at B, C. Note that line NBMC is the polar of D with respect to the circumcircle. By La Hire’s theorem, D must be on the polar of N. This polar is perpendicular to NX (the line joining N to the centre of the circle) and as ED \perp EX by construction of E, it follows that line AEMD is the polar of N. Again by theorem 5(a) in reference (2), (N, M, B, C) form a harmonic range. Following the same argument as the previous proof, this together with NE \perp EM imply EM bisects \angle BEC as required.
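The second result can be checked the same way. The coordinates below are hypothetical; the sketch uses the standard fact that the symmedian from A meets BC at the point M with BM : MC = AB^2 : AC^2, and finds the circumcentre by solving the two equidistance equations:

```python
import numpy as np

# Hypothetical triangle: symmedian foot M on BC with BM : MC = AB^2 : AC^2.
B, C, A = np.array([0.0, 0.0]), np.array([7.0, 0.0]), np.array([1.0, 5.0])

b2 = np.dot(A - C, A - C)                   # AC^2
c2 = np.dot(A - B, A - B)                   # AB^2
M = B + c2 / (b2 + c2) * (C - B)            # foot of the A-symmedian on BC

# circumcentre X: |X-A| = |X-B| = |X-C| reduces to two linear equations
lhs = 2 * np.array([B - A, C - A])
rhs = np.array([np.dot(B, B) - np.dot(A, A), np.dot(C, C) - np.dot(A, A)])
X = np.linalg.solve(lhs, rhs)

d = (M - A) / np.linalg.norm(M - A)
E = A + np.dot(X - A, d) * d                # foot of perpendicular from X to AM

def cos_angle(P, Q, R):
    """Cosine of the angle PQR at vertex Q."""
    u, v = P - Q, R - Q
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
```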

By similar arguments, one can prove the following, left to the interested reader. If X is the A-excentre of \triangle ABC, M the excircle’s point of tangency with BC, and E the foot of the perpendicular from X to line AM, then EM bisects \angle BEC.



(1) Alexander Bogomolny, Poles and Polars from Interactive Mathematics Miscellany and Puzzles, Accessed 19 March 2017

(2) Poles and Polars – Another Useful Tool! | The Problem Solver’s Paradise

(3) Yufei Zhao, Lemmas in Euclidean Geometry

December 19, 2016

Some special functions and their applications

Filed under: mathematics — ckrao @ 9:55 am

Here are some notes on special functions and where they may arise. We consider functions used in applied mathematics beyond those obtained from the power and exponential functions via the field operations (the four arithmetic operations), composition and inversion.

1. Bessel and related functions

Bessel functions of the first (J_{\alpha}(x)) and second (Y_{\alpha}(x)) kind of order \alpha satisfy:

\displaystyle x^2 \frac{d^2 y}{dx^2} + x \frac{dy}{dx} + (x^2 - \alpha^2)y = 0.

Solutions for integer \alpha arise in solving Laplace’s equation in cylindrical coordinates while solutions for half-integer \alpha arise in solving the Helmholtz equation in spherical coordinates. Hence they come about in wave propagation, heat diffusion and electrostatic potential problems. The functions oscillate roughly periodically with amplitude decaying proportional to 1/\sqrt{x}. Note that Y_{\alpha}(x) is the second linearly independent solution when \alpha is an integer (for integer n, J_{-n}(x) = (-1)^n J_n(x)). Also, for integer n, J_n has the generating function

\displaystyle  \sum_{n=-\infty}^\infty J_n(x) t^n = e^{(\frac{x}{2})(t-1/t)},

the integral representations

\displaystyle J_n(x) = \frac{1}{\pi} \int_0^\pi \cos (n \tau - x \sin(\tau)) \,d\tau = \frac{1}{2 \pi} \int_{-\pi}^\pi e^{i(n \tau - x \sin(\tau))} \,d\tau

and satisfies the orthogonality relation

\displaystyle \int_0^1 x J_\alpha(x u_{\alpha,m}) J_\alpha(x u_{\alpha,n}) \,dx = \frac{\delta_{m,n}}{2} [J_{\alpha+1}(u_{\alpha,m})]^2 = \frac{\delta_{m,n}}{2} [J_{\alpha}'(u_{\alpha,m})]^2,

where \alpha > -1, \delta_{m,n} is the Kronecker delta, and u_{\alpha, m} is the m-th zero of J_{\alpha}(x).
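These identities are easy to cross-check numerically. The sketch below compares the standard power series for J_n (for integer n the Gamma factor in the series reduces to a factorial) against the integral representation above, using a simple trapezoidal rule:

```python
import math
import numpy as np

def J_series(n, x, terms=40):
    # J_n(x) = sum_m (-1)^m / (m! (m+n)!) (x/2)^(2m+n), valid for integer n >= 0
    return sum((-1) ** m / (math.factorial(m) * math.factorial(m + n))
               * (x / 2) ** (2 * m + n) for m in range(terms))

def J_integral(n, x, pts=20001):
    # J_n(x) = (1/pi) int_0^pi cos(n*tau - x*sin(tau)) d tau (trapezoidal rule)
    tau = np.linspace(0.0, math.pi, pts)
    f = np.cos(n * tau - x * np.sin(tau))
    h = math.pi / (pts - 1)
    return h * (f.sum() - 0.5 * (f[0] + f[-1])) / math.pi
```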

Modified Bessel functions of the first (I_{\alpha}(x)) and second (K_{\alpha}(x)) kind of order \alpha satisfy:

\displaystyle x^2 \frac{d^2 y}{dx^2} + x \frac{dy}{dx} - (x^2 + \alpha^2)y = 0

(replacing x with ix in the previous equation).

The four functions may be expressed as follows.

\displaystyle J_{\alpha}(x) = \sum_{m=0}^\infty \frac{(-1)^m}{m! \, \Gamma(m+\alpha+1)} {\left(\frac{x}{2}\right)}^{2m+\alpha}

\displaystyle I_\alpha(x) = \sum_{m=0}^\infty \frac{1}{m! \, \Gamma(m+\alpha+1)} {\left(\frac{x}{2}\right)}^{2m+\alpha}

\displaystyle Y_\alpha(x) = \frac{J_\alpha(x) \cos(\alpha\pi) - J_{-\alpha}(x)}{\sin(\alpha\pi)}

\displaystyle K_\alpha(x) = \frac{\pi}{2} \frac{I_{-\alpha} (x) - I_\alpha (x)}{\sin (\alpha \pi)}

(In the formulas for Y_\alpha and K_\alpha we need to take a limit when \alpha is an integer.)

Note that K and Y are singular at zero.

The Hankel functions H_\alpha^{(1)}(x) = J_\alpha(x)+iY_\alpha(x) and H_\alpha^{(2)}(x) = J_\alpha(x)-iY_\alpha(x) are also known as Bessel functions of the third kind.

The functions J_\alpha, Y_\alpha, H_\alpha^{(1)}, and H_\alpha^{(2)} all satisfy the recurrence relations (using Z in place of any of these four functions)

\displaystyle \frac{2\alpha}{x} Z_\alpha(x) = Z_{\alpha-1}(x) + Z_{\alpha+1}(x),
\displaystyle 2\frac{dZ_\alpha}{dx} = Z_{\alpha-1}(x) - Z_{\alpha+1}(x).

Bessel functions of higher orders/derivatives can be calculated from lower ones via:

\displaystyle \left( \frac{1}{x} \frac{d}{dx} \right)^m \left[ x^\alpha Z_{\alpha} (x) \right] = x^{\alpha - m} Z_{\alpha - m} (x),
\displaystyle \left( \frac{1}{x} \frac{d}{dx} \right)^m \left[ \frac{Z_\alpha (x)}{x^\alpha} \right] = (-1)^m \frac{Z_{\alpha + m} (x)}{x^{\alpha + m}}.

In particular, note that -J_1(x) is the derivative of J_0(x).

The Airy functions of the first (Ai(x)) and second (Bi(x)) kind satisfy

\displaystyle \frac{d^2y}{dx^2} - xy = 0.

This arises as a solution to Schrödinger’s equation for a particle in a triangular potential well and also describes interference and refraction patterns.

2. Orthogonal polynomials

Hermite polynomials (the probabilists’ definition) can be defined by:

\displaystyle \mathit{He}_n(x)=(-1)^n e^{\frac{x^2}{2}}\frac{d^n}{dx^n}e^{-\frac{x^2}{2}}=\left (x-\frac{d}{dx} \right )^n \cdot 1,

and are orthogonal with respect to weighting function w(x) = e^{-x^2/2} on (-\infty, \infty).

They satisfy the differential equation

\displaystyle \left(e^{-\frac{x^2}{2}}u'\right)' + \lambda e^{-\frac{1}{2}x^2}u = 0

(where \lambda is forced to be an integer if we insist u be polynomially bounded at \infty)

and the recurrence relation

\displaystyle {\mathit{He}}_{n+1}(x)=x{\mathit{He}}_n(x)-{\mathit{He}}_n'(x).

The first few such polynomials are 1, x, x^2-1, x^3-3x, \ldots. The physicists’ Hermite polynomials H_n(x) are related by H_n(x)=2^{\tfrac{n}{2}}{\mathit{He}}_n(\sqrt{2} \,x) and arise, for example, as the eigenstates of the quantum harmonic oscillator.
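As a sanity check, the sketch below generates He_n from the equivalent three-term recurrence He_{n+1}(x) = x\,He_n(x) - n\,He_{n-1}(x) (which follows from the recurrence above together with the standard identity He_n'(x) = n\,He_{n-1}(x)) and tests orthogonality against the weight e^{-x^2/2} by crude quadrature:

```python
import math
import numpy as np

def He(n, x):
    """Probabilists' Hermite polynomial via He_{n+1} = x He_n - n He_{n-1}."""
    p0 = np.ones_like(x)
    if n == 0:
        return p0
    p1 = np.array(x, dtype=float)
    for k in range(1, n):
        p0, p1 = p1, x * p1 - k * p0
    return p1

# crude trapezoidal quadrature on a grid wide enough for the Gaussian weight
xs = np.linspace(-12.0, 12.0, 48001)
weight = np.exp(-xs ** 2 / 2)

def integrate(values):
    h = xs[1] - xs[0]
    return h * (values.sum() - 0.5 * (values[0] + values[-1]))
```

The squared norm \int He_n(x)^2 e^{-x^2/2}\,dx = n!\sqrt{2\pi} is a standard fact used in the test below.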

Laguerre polynomials are defined by

\displaystyle L_n(x)=\frac{e^x}{n!}\frac{d^n}{dx^n}\left(e^{-x} x^n\right) =\frac{1}{n!} \left( \frac{d}{dx} -1 \right) ^n x^n = \sum_{k=0}^n \binom{n}{k}\frac{(-1)^k}{k!} x^k,

and are orthogonal with respect to e^{-x} on (0,\infty).

They satisfy the differential equation

\displaystyle  xy'' + (1 - x)y' + ny = 0,

recurrence relation

\displaystyle L_{k + 1}(x) = \frac{(2k + 1 - x)L_k(x) - k L_{k - 1}(x)}{k + 1},

and have generating function

\displaystyle \sum_{n=0}^\infty  t^n L_n(x)=  \frac{1}{1-t} e^{-\frac{tx}{1-t}}.

The first few values are 1, 1-x, (x^2-4x+2)/2. Note also that L_{-n}(x)=e^xL_{n-1}(-x).
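The closed form, the recurrence and the generating function can all be checked against one another numerically (the sample points t = 0.3, x = 0.7 below are arbitrary choices):

```python
import math

def L_sum(n, x):
    # explicit sum: L_n(x) = sum_k C(n,k) (-1)^k / k! x^k
    return sum(math.comb(n, k) * (-1) ** k / math.factorial(k) * x ** k
               for k in range(n + 1))

def L_rec(n, x):
    # three-term recurrence L_{k+1} = ((2k+1-x) L_k - k L_{k-1}) / (k+1)
    p0, p1 = 1.0, 1.0 - x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1 - x) * p1 - k * p0) / (k + 1)
    return p1
```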

The functions come up as the radial part of the solution to Schrödinger’s equation for a one-electron atom.

Legendre polynomials can be defined by

\displaystyle P_n(x) = {1 \over 2^n n!} {d^n \over dx^n } \left[ (x^2 -1)^n \right]

and are orthogonal with respect to the standard L^2 inner product on (-1,1).

They satisfy the differential equation

\displaystyle {d \over dx} \left[ (1-x^2) {d \over dx} P_n(x) \right] + n(n+1)P_n(x) = 0,

recurrence relation

\displaystyle (n+1)P_{n+1}(x) = (2n+1)xP_n(x) - nP_{n-1}(x),

and have generating function

\displaystyle \sum_{n=0}^\infty P_n(x) t^n = \frac{1}{\sqrt{1-2xt+t^2}}.

The first few values are 1, x, (3x^2-1)/2, (5x^3-3x)/2.
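Bonnet’s recurrence (n+1)P_{n+1}(x) = (2n+1)xP_n(x) - nP_{n-1}(x), a standard fact, generates these values and can be checked against the generating function (the sample values x = 0.4, t = 0.25 are arbitrary):

```python
import math

def P(n, x):
    """Legendre polynomial via Bonnet's recurrence."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1
```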

They arise in the expansion of the Newtonian potential 1/|x-x'| (multipole expansions) and Laplace’s equation where there is axial symmetry (spherical harmonics are expressed in terms of these).

Chebyshev polynomials of the 1st kind T_n(x) can be defined by

T_n(x) =\begin{cases} \cos(n\arccos(x)) & \ |x| \le 1 \\ \frac12 \left[ \left (x-\sqrt{x^2-1} \right )^n + \left (x+\sqrt{x^2-1} \right )^n \right] & \ |x| \ge 1 \\ \end{cases}

and are orthogonal with respect to weighting function w(x) = 1/\sqrt{1-x^2} in (-1,1).

They satisfy the differential equation

\displaystyle (1-x^2)\,y'' - x\,y' + n^2\,y = 0,

the relations

\displaystyle T_{n+1}(x) = 2xT_n(x) - T_{n-1}(x)

\displaystyle (1 - x^2)T_n'(x) = -nx T_n(x) + n T_{n-1}(x)

and have generating function

\displaystyle \sum_{n=0}^{\infty}T_n(x) t^n = \frac{1-tx}{1-2tx+t^2}.

The first few values are 1, x, 2x^2-1, 4x^3-3x, \ldots. These polynomials arise in approximation theory: their roots (the Chebyshev nodes) are used as nodes in polynomial interpolation. The function f(x) = \frac1{2^{n-1}}T_n(x) is the monic polynomial of degree n whose maximal absolute value on [-1,1] is minimal.
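Since T_n(\cos\theta) = \cos(n\theta) on |x| \leq 1, the recurrence T_{n+1}(x) = 2xT_n(x) - T_{n-1}(x) is easy to test directly:

```python
import math

def T(n, x):
    """Chebyshev polynomial of the first kind via T_{n+1} = 2x T_n - T_{n-1}."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for _ in range(1, n):
        p0, p1 = p1, 2 * x * p1 - p0
    return p1
```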

Chebyshev polynomials of the 2nd kind U_n(x) are defined by

\displaystyle  U_n(x)  = \frac{\left (x+\sqrt{x^2-1} \right )^{n+1} - \left (x-\sqrt{x^2-1} \right )^{n+1}}{2\sqrt{x^2-1}}

and are orthogonal with respect to weighting function w(x) = \sqrt{1-x^2} in (-1,1).

They satisfy the differential equation

\displaystyle  (1-x^2)\,y'' - 3x\,y' + n(n+2)\,y = 0,

the recurrence relation

\displaystyle U_{n+1}(x) = 2xU_n(x) - U_{n-1}(x)

and have generating function

\displaystyle \sum_{n=0}^{\infty}U_n(x) t^n = \frac{1}{1-2 t x+t^2}.

The first few values are 1, 2x, 4x^2-1, 8x^3-4x, \ldots. (There are also less well known Chebyshev polynomials of the third and fourth kind.)

Bessel polynomials y_n(x) may be defined from Bessel functions via

\displaystyle y_n(x)=\sqrt{\frac{2}{\pi x}}\,e^{1/x}K_{n+\frac 1 2}(1/x)  = \sum_{k=0}^n\frac{(n+k)!}{(n-k)!k!}\,\left(\frac{x}{2}\right)^k.

They satisfy the differential equation

\displaystyle x^2\frac{d^2y_n(x)}{dx^2}+2(x\!+\!1)\frac{dy_n(x)}{dx}-n(n+1)y_n(x)=0.

The first few values are 1, x+1, 3x^2+3x+1,\ldots.

3. Integrals

The error function has the form

\displaystyle \rm{erf}(x) = \frac{2}{\sqrt\pi}\int_0^x e^{-t^2}\,\mathrm dt.

This can be interpreted as the probability a normally distributed random variable with zero mean and variance 1/2 is in the interval (-x,x).

The cdf of the normal distribution, \Phi(x), is related to this via \Phi(x) = (1 + {\rm erf}(x/\sqrt{2}))/2. Hence the tail probability of the standard normal distribution is Q(x) = (1 - {\rm erf}(x/\sqrt{2}))/2.
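The relation \Phi(x) = (1 + {\rm erf}(x/\sqrt{2}))/2 can be verified against direct quadrature of the standard normal density (the lower cutoff and step count below are arbitrary choices):

```python
import math

def pdf(t):
    """Standard normal density."""
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def Phi(x, steps=200000):
    """Trapezoidal integral of the N(0,1) density over [-12, x]."""
    a = -12.0                      # e^{-72} makes the truncated tail negligible
    h = (x - a) / steps
    total = 0.5 * (pdf(a) + pdf(x)) + sum(pdf(a + i * h) for i in range(1, steps))
    return h * total
```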

Fresnel integrals are defined by

\displaystyle S(x) =\int_0^x \sin(t^2)\,\mathrm{d}t=\sum_{n=0}^{\infty}(-1)^n\frac{x^{4n+3}}{(2n+1)!(4n+3)}
\displaystyle C(x) =\int_0^x \cos(t^2)\,\mathrm{d}t=\sum_{n=0}^{\infty}(-1)^n\frac{x^{4n+1}}{(2n)!(4n+1)}

They have applications in optics.

The exponential integral {\rm Ei}(x) (used in heat transfer applications) is defined by

\displaystyle {\rm Ei}(x)=-\int_{-x}^{\infty}\frac{e^{-t}}t\,dt.

It is related to the logarithmic integral

\displaystyle {\rm li} (x) =   \int_0^x \frac{dt}{\ln t}

by \mathrm{li}(x) = \mathrm{Ei}(\ln x) (for positive real x).

The incomplete elliptic integral of the first, second and third kinds are defined by

\displaystyle F(\varphi,k) = \int_0^\varphi \frac {d\theta}{\sqrt{1 - k^2 \sin^2 \theta}}

\displaystyle E(\varphi,k) =  \int_0^\varphi \sqrt{1-k^2 \sin^2\theta}\, d\theta

 \displaystyle \Pi(n ; \varphi \setminus \alpha) = \int_0^\varphi  \frac{1}{1-n\sin^2 \theta} \frac {d\theta}{\sqrt{1-(\sin\theta\sin \alpha)^2}}

Setting \varphi = \pi/2 gives the complete elliptic integrals.

Any integral of the form \int_{c}^{x} R \left(t, \sqrt{P(t)} \right) \, dt, where c is a constant, R is a rational function of its arguments and P(t) is a polynomial of 3rd or 4th degree with no repeated roots, may be expressed in terms of the elliptic integrals. The circumference of an ellipse of semi-major axis a, semi-minor axis b and eccentricity e = \sqrt{1-b^2/a^2} is given by 4aE(e), where E(k) is the complete elliptic integral of the second kind.

(Some elliptic functions are obtained by inverting elliptic integrals, hence their name.)

The (upper) incomplete Gamma function is defined by

\displaystyle \Gamma(s,x) = \int_x^{\infty} t^{s-1}\,e^{-t}\,{\rm d}t.

It satisfies the recurrence relation \Gamma(s+1,x)= s\Gamma(s,x) + x^{s} e^{-x}. Setting x = 0 gives the Gamma function \Gamma(s) = \Gamma(s,0), which interpolates the factorial function.
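The recurrence can be verified by direct quadrature of the defining integral (the integration cutoff, step count and sample values of s and x below are arbitrary choices):

```python
import math

def upper_gamma(s, x, upper=60.0, steps=200000):
    """Trapezoidal approximation of Gamma(s, x) = int_x^inf t^{s-1} e^{-t} dt."""
    h = (upper - x) / steps       # truncating at t = 60 leaves an e^{-60} tail
    f = lambda t: t ** (s - 1) * math.exp(-t)
    total = 0.5 * (f(x) + f(upper)) + sum(f(x + i * h) for i in range(1, steps))
    return h * total
```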

The digamma function is the logarithmic derivative of the gamma function:

\displaystyle \psi(x)=\frac{d}{dx}\ln\Big(\Gamma(x)\Big)=\frac{\Gamma'(x)}{\Gamma(x)}.

Due to the relation \psi(x+1) = \psi(x) + 1/x, this function appears in the regularisation of divergent sums, e.g. in the regularised sense

\displaystyle \sum_{n=0}^{\infty} \frac{1}{n+a} = - \psi (a).

The incomplete Beta function is defined by

\displaystyle B(x;\,a,b) = \int_0^x t^{a-1}\,(1-t)^{b-1}\,\mathrm{d}t.

When setting x=1 this becomes the Beta function which is related to the gamma function via

\displaystyle B(x,y)=\frac{\Gamma(x)\,\Gamma(y)}{\Gamma(x+y)}.

This can be extended to the multivariate Beta function, used in defining the Dirichlet distribution:

\displaystyle B(\alpha_1,\ldots,\alpha_K) = \frac{\Gamma(\alpha_1) \cdots \Gamma(\alpha_K)}{\Gamma(\alpha_1 + \ldots + \alpha_K)}.

The polylogarithm, appearing as integrals of the Fermi–Dirac and Bose–Einstein distributions, is defined by

\displaystyle {\rm Li}_s(z) = \sum_{k=1}^\infty \frac{z^k}{k^s} = z + \frac{z^2}{2^s} + \frac{z^3}{3^s} + \cdots

Note the special case {\rm Li}_1(z) = -\ln (1-z); the case s=2 is known as the dilogarithm. We also have the recursive formula

\displaystyle {\rm Li}_{s+1}(z) = \int_0^z \frac {{\rm Li}_s(t)}{t}\,\mathrm{d}t.
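Both the special case and the recursion are easy to check from the series (the comparison against the classical closed form {\rm Li}_2(1/2) = \pi^2/12 - (\ln 2)^2/2, a standard identity not stated above, is included as an extra check):

```python
import math

def polylog(s, z, terms=2000):
    """Truncated defining series of Li_s(z), accurate for |z| < 1."""
    return sum(z ** k / k ** s for k in range(1, terms + 1))

def Li2_integral(z, steps=200000):
    """Li_2(z) = int_0^z Li_1(t)/t dt, with Li_1(t)/t = -ln(1-t)/t (limit 1 at t=0)."""
    f = lambda t: -math.log(1 - t) / t if t != 0 else 1.0
    h = z / steps
    total = 0.5 * (f(0.0) + f(z)) + sum(f(i * h) for i in range(1, steps))
    return h * total
```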

4. Generalised Hypergeometric functions

All of the above functions can be written in terms of generalised hypergeometric functions, defined by

\displaystyle {}_pF_q(a_1,\ldots,a_p;b_1,\ldots,b_q;z) = \sum_{n=0}^\infty \frac{(a_1)_n\dots(a_p)_n}{(b_1)_n\dots(b_q)_n} \, \frac {z^n} {n!}

where (a)_n = \Gamma(a+n)/\Gamma(a) = a(a+1)(a+2)\cdots(a+n-1) for n > 0, with (a)_0 = 1, is the Pochhammer symbol (rising factorial).
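A generic truncated series is a few lines of code; it can be exercised on two classical special cases (standard identities, not stated in the post): {}_0F_0(;;z) = e^z and {}_2F_1(1,1;2;z) = -\ln(1-z)/z.

```python
import math

def poch(a, n):
    """Pochhammer symbol (a)_n = a (a+1) ... (a+n-1), with (a)_0 = 1."""
    r = 1.0
    for i in range(n):
        r *= a + i
    return r

def pFq(a_list, b_list, z, terms=60):
    """Truncated generalised hypergeometric series, adequate for |z| < 1."""
    total = 0.0
    for n in range(terms):
        num = math.prod(poch(a, n) for a in a_list)
        den = math.prod(poch(b, n) for b in b_list)
        total += num / den * z ** n / math.factorial(n)
    return total
```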

The special case p=q=1 is called a confluent hypergeometric function of the first kind, also written M(a;b;z).

This satisfies the differential equation (Kummer’s equation)

\displaystyle \left (z\frac{d}{dz}+a \right )w = \left (z\frac{d}{dz}+b \right )\frac{dw}{dz}.

The Bessel, Hankel, Airy, Laguerre, error, exponential and logarithmic integral functions can be expressed in terms of this.

The case p=2, q=1 is called Gauss’s hypergeometric function, or simply the hypergeometric function. It satisfies the differential equation

\displaystyle \left (z\frac{d}{dz}+a \right ) \left (z\frac{d}{dz}+b \right )w =\left  (z\frac{d}{dz}+c \right )\frac{dw}{dz}.

The Legendre, Hermite, Chebyshev, Beta and Gamma functions can be expressed in terms of this.

Further reading

The Wolfram Functions Site

Wikipedia: List of mathematical functions

Wikipedia: List of special functions and eponyms

Wikipedia: List of q-analogs

Wikipedia Category: Orthogonal polynomials

Weisstein, Eric W. “Laplace’s Equation.” From MathWorld–A Wolfram Web Resource.

June 26, 2016

2016 has many factors

Filed under: mathematics — ckrao @ 11:16 am

The number 2016 has at least as many factors (36) as any positive integer below it except 1680 = 2^4\times 3\times 5\times 7 (which has 40 factors). The next time a year will have more factors is 2160 = 2^4\times 3^3\times 5, also with 40 factors.

Here are the numbers below 2160 also with 36 factors:

  • 1260 = 2^2 \times 3^2 \times 5 \times 7
  • 1440 = 2^5 \times 3^2 \times 5
  • 1800 = 2^3 \times 3^2 \times 5^2
  • 1980 = 2^2 \times 3^2 \times 5 \times 11
  • 2016 = 2^5 \times 3^2 \times 7
  • 2100 = 2^2 \times 3 \times 5^2 \times 7

The first integer with more than 40 factors is 2520 = 2^3 \times 3^2 \times 5 \times 7 (48 factors).
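These divisor counts are easy to verify by brute force:

```python
def num_divisors(n):
    """Count divisors of n by trial division up to sqrt(n)."""
    count = 0
    i = 1
    while i * i <= n:
        if n % i == 0:
            count += 2 if i * i != n else 1   # pair i with n // i
        i += 1
    return count
```

Scanning all n below 2160 confirms the list above, with 1680 the only number in that range exceeding 36 factors.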


[1] N. J. A. Sloane and Daniel Forgues, Table of n, a(n) for n = 1..100000 (first 10000 terms from N. J. A. Sloane), A000005 – OEIS.

[2] Highly composite number – Wikipedia, the free encyclopedia

March 26, 2016

Applying AM-GM in the denominator after flipping the sign

Filed under: mathematics — ckrao @ 8:44 pm

There are times when solving inequalities that one has a sum of fractions in which applying the AM-GM inequality to each denominator results in the wrong sign for the resulting expression.

For example (from [1], p18), suppose we wish to show that for positive real numbers x_1, x_2, \ldots, x_n with sum n,

\displaystyle \sum_{i = 1}^n \frac{1}{x_i^2 + 1}\geq \frac{n}{2},

we may write x_i^2 + 1 \geq 2x_i (equivalent to (x_i-1)^2 \geq 0), but this implies \frac{1}{x_i^2 + 1} \leq \frac{1}{2x_i} and so the sign goes the wrong way.

A way around this is to write

\begin{aligned}  \frac{1}{x_i^2 + 1} &= 1 - \frac{x_i^2}{x_i^2 + 1}\\  &\geq 1 - \frac{x_i^2}{2x_i}\\  &= 1 - \frac{x_i}{2}.  \end{aligned}

Summing this over i then gives \sum_{i=1}^n \frac{1}{x_i^2 + 1} \geq n - \sum_{i=1}^n (x_i/2) = n/2 as desired.

Here are a few more examples demonstrating this technique.

2. (p9 of [2]) If a,b,c are positive real numbers with a + b + c = 3, then

\dfrac{a}{1 +b^2} + \dfrac{b}{1 +c^2} + \dfrac{c}{1 +a^2} \geq \dfrac{3}{2}.

To prove this we write

\begin{aligned}  \frac{a}{1 + b^2} &= a\left(1 - \frac{b^2}{1 + b^2}\right)\\  &\geq a\left(1 - \frac{b}{2}\right) \quad \text{(using the same argument as before)}\\  &=a - \frac{ab}{2}.  \end{aligned}

Next we have 3(ab + bc + ca) \leq (a + b + c)^2 = 9 as this is equivalent to (a-b)^2 + (b-c)^2 + (c-a)^2 \geq 0. This means ab + bc + ca \leq 3. Putting everything together,

\begin{aligned}  \frac{a}{1 + b^2} + \frac{b}{1 + c^2} + \frac{c}{1 + a^2}&\geq \left( a - \frac{ab}{2} \right) + \left( b - \frac{bc}{2} \right) + \left( c - \frac{ca}{2} \right)\\  &= (a + b + c) - (ab + bc + ca)/2\\  &\geq 3 - 3/2\\  &=\frac{3}{2},  \end{aligned}

as required.

3. (based on p8 of [2]) If x_i > 0 for i= 1, 2, \ldots, n and \sum_{i = 1}^n x_i^2 = n then

\displaystyle \sum_{i=1}^n \frac{1}{x_i^3 + 2} \geq \frac{n}{3}.

By the AM-GM inequality, x_i^3 + 2 = x_i^3 + 1 + 1 \geq 3x_i, so

\begin{aligned}  \frac{1}{x_i^3 + 2} &= \frac{1}{2}\left( 1 - \frac{x_i^3}{x_i^3 + 2} \right)\\  &\geq \frac{1}{2}\left( 1 - \frac{x_i^3}{3x_i} \right)\\  &= \frac{1}{2}\left( 1 - \frac{x_i^2}{3} \right).  \end{aligned}

Summing this over i gives

\begin{aligned}  \sum_{i=1}^n \frac{1}{x_i^3 + 2} &\geq \frac{1}{2} \sum_{i=1}^n \left( 1 - \frac{x_i^2}{3} \right)\\  &= \frac{1}{2}\left( n - \frac{n}{3} \right)\\  &= \frac{n}{3}.  \end{aligned}

4. (from [3]) If x, y, z are positive, then

\dfrac {x ^ 3}{x ^ 2 + y ^ 2} + \dfrac {y ^ 3}{y ^ 2 + z ^ 2} + \dfrac {z ^ 3}{z ^ 2 + x ^ 2 } \geq \dfrac {x + y + z}{2}.

Once again, focusing on the denominator,

\begin{aligned}  \dfrac {x ^ 3}{x ^ 2 + y ^ 2} &= x\left(1 - \dfrac {y ^ 2} {x ^ 2 + y ^ 2} \right)\\  &\geq x \left(1 -\dfrac{y^2}{2xy} \right)\\  &= x-\dfrac{y}{2}.  \end{aligned}

Adding this to the two analogous inequalities,
\begin{aligned}  \dfrac {x ^ 3}{x ^ 2 + y ^ 2} + \dfrac {y ^ 3}{y ^ 2 + z ^ 2} + \dfrac {z ^ 3}{z ^ 2 + x ^ 2 } &\geq x-\dfrac{y}{2} + y-\dfrac{z}{2} + z-\dfrac{x}{2}\\  &= \dfrac {x + y + z}{2},  \end{aligned}

as desired.

5. (from the 1991 Asian Pacific Maths Olympiad, see [4] for other solutions) Let a_1, a_2, \ldots, a_n, b_1, b_2, \ldots, b_n be positive numbers with \sum_{i = 1}^n a_i = \sum_{i = 1}^n b_i. Then

\displaystyle\sum_{i=1}^n\frac{a_i^2}{a_i + b_i} \geq \frac{1}{2}\sum_{i=1}^n a_i.

Here we write

\begin{aligned}  \sum_{i=1}^n\frac{a_i^2}{a_i + b_i} &= \sum_{i=1}^n a_i \left(1 - \frac{b_i}{a_i + b_i} \right)\\  &\geq \sum_{i=1}^n a_i \left(1 - \frac{b_i}{2\sqrt{a_i b_i}} \right) \\  &= \frac{1}{2} \sum_{i=1}^n \left( 2a_i - \sqrt{a_i b_i} \right) \\  &= \frac{1}{4} \sum_{i=1}^n \left( 4a_i - 2\sqrt{a_i b_i} \right)\\  &= \frac{1}{4} \sum_{i=1}^n \left( 2a_i +a_i - 2\sqrt{a_i b_i} + b_i \right) \quad \text{(as } \sum_{i=1}^n a_i = \sum_{i=1}^n b_i\text{)}\\  &= \frac{1}{4} \sum_{i=1}^n \left( 2a_i + \left(\sqrt{a_i} - \sqrt{b_i}\right)^2 \right)\\  &\geq \frac{1}{4} \sum_{i=1}^n 2a_i\\  &= \frac{1}{2} \sum_{i=1}^n a_i,  \end{aligned}

as required.


[1] Zdravko Cvetkovski, Inequalities: Theorems, Techniques and Selected Problems, Springer, 2012.

[2] Wang and Kadaveru, Advanced Topics in Inequalities, available from

[3] Cauchy Reverse Technique:

[4] algebra precalculus – Prove that \frac{a_1^2}{a_1+b_1}+\cdots+\frac{a_n^2}{a_n+b_n} \geq \frac{1}{2}(a_1+\cdots+a_n). – Mathematics StackExchange

February 27, 2016

Cutting a triangle in half

Filed under: mathematics — ckrao @ 9:40 pm

Here is a cute triangle result that I’m surprised I had not known previously. If we are given a point on one of the sides of a triangle, how do we find a line through the triangle that cuts its area in half?

Clearly if that point is either the midpoint of a side or one of the vertices, the answer is a median of the triangle. A median cuts a triangle in half since the two pieces have equal-length bases and the same height.


So what if the point is not a midpoint or a vertex? Referring to the diagram below, if P is our desired point closer to A than B, the end point Q of the area-bisecting segment would need to be on side BC so that area(BPQ) = area(ABC)/2.


In other words, letting D be the midpoint of AB (so that the median CD bisects the area), we require area(BPQ) = area(BDC), or, subtracting the area of triangle BDQ from both sides,

\displaystyle |DPQ| = |DCQ|.

Since these two triangles share the common base DQ, this tells us that we require them to have the same height. In other words, we require CP to be parallel to DQ. This tells us how to construct the point Q given P on AB:

  1. Construct the midpoint D of AB.
  2. Draw the line through D parallel to CP; it meets BC at the required point Q.

See [1] for an animation of this construction.
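The construction is easy to verify numerically; the coordinates and the position of P below are arbitrary choices:

```python
import numpy as np

# D is the midpoint of AB; Q is on BC with DQ parallel to CP.
A, B, C = np.array([1.0, 5.0]), np.array([0.0, 0.0]), np.array([7.0, 0.0])
P = A + 0.3 * (B - A)            # a point on AB closer to A than to B
D = (A + B) / 2

# intersect the line through D with direction CP with line BC:
# D + t*(C - P) = B + s*(C - B)
d1, d2 = C - P, C - B
t = np.linalg.solve(np.column_stack([d1, -d2]), B - D)[0]
Q = D + t * d1

def area(P1, P2, P3):
    """Triangle area from the 2D cross product."""
    u, v = P2 - P1, P3 - P1
    return 0.5 * abs(u[0] * v[1] - u[1] * v[0])
```

For these coordinates PQ indeed cuts off half the area, with Q landing between B and C.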

It turns out that the area-bisecting lines of a triangle are all tangent to three hyperbolas, and they envelope a deltoid whose area is (3/4)\ln(2) - 1/2 \approx 0.01986 times that of the original triangle. [2,3,4]


[1] Jaime Rangel-Mondragon, “Bisecting a Triangle” from the Wolfram Demonstrations Project Published: July 10, 2013

[2] Ismail Hammoudeh, “Triangle Area Bisectors” from the Wolfram Demonstrations Project

[4] Henry Bottomley, Area bisectors of a triangle, January 2002

January 30, 2016

Patterns early in the digits of pi

Filed under: mathematics — ckrao @ 8:59 pm

Recently when taking a look at the early decimal digits of \pi I made the following observations:


  • The first run of seven distinct digits (8327950, shown underlined) appears in the 26th to 32nd decimal place. Curiously the third such run (5923078, also underlined) in decimal places 61 to 67 contains the same seven digits. (There is also a run of seven distinct digits in places 51 to 57 with 5820974.)
  • Decimal digits 60 to 69 (shown in bold) are distinct (i.e. all digits are represented once in this streak). The same is true for digits 61 to 70 as both digits 60 and 70 are ‘4’.

Assume the digits of \pi are generated independently from a uniform distribution. Firstly, how often would we expect to see a run of 7 distinct digits? Places k to k+6 are distinct with probability

\displaystyle \frac{9}{10} \times \frac{8}{10} \times \frac{7}{10} \times \frac{6}{10} \times \frac{5}{10} \times \frac{4}{10} = \frac{9!}{3! \cdot 10^6} = \frac{189}{3125}.

Hence we expect runs of 7 distinct digits to appear 3125/189 \approx 16.5 places apart. This includes the possibility of runs such as 12345678 which contain two runs of 7 distinct digits that are only 1 place apart.

How often would we expect to see the same 7 digits appearing in a run as we did in places 26-32 and 61-67? Furthermore let’s assume the two runs have no overlap, so we discount possibilities such as 12345678 which have a six-digit overlap. We expect a given sequence (e.g. 1234567, in that order) to appear 1/10^7 of the time. There are 7! = 5040 permutations of such a sequence, but of these 1 has overlap 6 (2345671), 2! has overlap 5 (3456712 or 3456721), 3! has overlap 4, …, 6! has overlap 1 with the original sequence. This leaves us with 5040 - (1 + 2 + 6 + 24 + 120 + 720) = 4167 possible choices of the next run to have the same 7 digits but non-overlapping (or to appear in precisely the same order – overlap 7). Hence we expect the same 7 digits to recur (no overlap with the original run) after approximately 10^7/4167 \approx 2400 places apart so what we saw in the first 100 places was remarkable.

Now let us turn to runs of all ten distinct digits. Repeating the argument above, such runs occur every 10^{10}/10! \approx 2756 places. According to [1] the next time we see ten distinct digits is in decimal places 5470 to 5479.

To answer the question of when we would expect to see the first occurrence of ten distinct digits, we adopt an argument from renewal-reward theory based on Sec 7.9.2 of [2] (there also exist approaches based on setting up recurrence relations, or martingale theory (modelling a fair casino), see [3]-[4]). Firstly we let T be the first time we get 10 consecutive distinct values – we wish to find E[T] where E denotes the expected value operator. Note that this will be more than the 2756 answer we obtained above since we make no assumption of starting with a run of ten distinct digits – there is no way T could be 1 for example, but we could have two runs of ten distinct digits that are 1 apart.

From a sequence of digits we first define a renewal process in which after we get 10 consecutive distinct values (at time T) we start over and wait for the next run of 10 consecutive distinct values without using any of the values  up to time T. Such a process will then have an expected length of cycle of E[T].

Next, suppose we earn a reward of $1 every time the last 10 digits of the sequence are distinct (so we would have obtained $1 at each of decimal places 69 and 70 in the \pi example). By an important result in renewal-reward theory, the long run average reward is equal to the expected reward in a cycle divided by the expected length of a cycle.

In a cycle we will obtain

  • $1 at the end
  • $1 at time 1 in the cycle with probability 1/10 (if that digit is the same as ten digits before it)
  • $1 at time 2 in the cycle with probability 2/100 (if the last two digits match those ten places before it)
  • …
  • $1 at time 9 in the cycle with probability 9!/10^9 (if the last nine digits match those ten places before it)

Hence the expected reward in a cycle is given by

\displaystyle 1 + \sum_{i=1}^9 \frac{i!}{10^i} = \sum_{i=0}^9 \frac{i!}{10^i}.

We have already seen that the long run average reward is 10!/10^{10} at each decimal place. Hence the expected length of a cycle E[T] (i.e. the expected number of digits before we expect the first run of ten consecutive digits) is given by

\frac{10^{10}}{10!}\sum_{i=0}^9 \frac{i!}{10^i} \approx 3118.
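The arithmetic above can be reproduced exactly with rational arithmetic:

```python
from fractions import Fraction
from math import factorial

# expected reward per cycle: sum_{i=0}^{9} i!/10^i
reward_per_cycle = sum(Fraction(factorial(i), 10 ** i) for i in range(10))

# E[T] = (10^10 / 10!) * sum_{i=0}^{9} i!/10^i
expected_T = Fraction(10 ** 10, factorial(10)) * reward_per_cycle

# the spacing estimate for runs of 7 distinct digits quoted earlier in the post
seven_distinct = Fraction(factorial(9), factorial(3) * 10 ** 6)
```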

Hence it is pretty cool that we see it so early in the decimal digits of \pi. 🙂


[1] A258157 – Online Encyclopedia of Integer Sequences

[2] S. Ross, Introduction to Probability Models, Academic Press, 2014.

[3] Combinatorics Problem on Expected Value – Problem Solving: Dice Rolls – Daniel Wang | Brilliant

[4] A Collection of Dice

