Chaitanya's Random Pages

April 1, 2018

A collection of binary grid counting problems

Filed under: mathematics — ckrao @ 3:52 am

The number of ways of colouring each square of an m by n grid one of two colours without restriction is 2^{mn}. The following examples show what happens when varying restrictions are placed on the colouring.

Example 1: The number of ways of colouring an m by n grid black or white so that there is an even number of black squares in each row and each column is

\displaystyle 2^{(m-1)(n-1)}.

Proof: The squares in the first m-1 rows and first n-1 columns (an (m-1) by (n-1) subgrid) may be coloured arbitrarily. This then uniquely determines how the bottom row and rightmost column are coloured (restoring even parity). The bottom right square will be black if and only if the number of black squares in the remainder of the grid is odd, hence this is also uniquely determined by the first m-1 rows and n-1 columns. Details are also given here.
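
For small grids this count is easy to confirm by brute force. Below is a quick Python sketch (the function name is mine) that enumerates all colourings and checks the formula; it is only meant as a sanity check, not an efficient method.

```python
from itertools import product

def count_even_parity(m, n):
    """Brute-force count of m x n black/white grids with an even number
    of black squares in every row and every column."""
    count = 0
    for cells in product((0, 1), repeat=m * n):
        grid = [cells[i * n:(i + 1) * n] for i in range(m)]
        rows_ok = all(sum(row) % 2 == 0 for row in grid)
        cols_ok = all(sum(grid[i][j] for i in range(m)) % 2 == 0 for j in range(n))
        if rows_ok and cols_ok:
            count += 1
    return count

for m, n in [(2, 2), (2, 3), (3, 3), (3, 4)]:
    assert count_even_parity(m, n) == 2 ** ((m - 1) * (n - 1))
print("2^((m-1)(n-1)) verified for small grids")
```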

Example 2: The number of ways of colouring an m by n grid black or white so that every 2 by 2 square has an odd number (1 or 3) of black squares is

\displaystyle 2^{m+n-1}.

Proof: First colour the first row and first column arbitrarily (there are m+n-1 such squares, each with 2 possibilities). This uniquely determines how the rest of the grid must be coloured, since each remaining square is fixed by the adjacent squares above it and to its left.

By the same argument, the above is also the number of ways of colouring an m by n grid black or white so that every 2 by 2 square has an even number (0, 2 or 4) of black squares.

Example 3: The number of ways of colouring an m by n grid black or white so that every 2 by 2 square has two of each type is

\displaystyle 2^m + 2^n - 2.

Proof: Suppose first that there are two vertically adjacent squares of the same colour. Then in the two rows containing them, each column must contain a matching pair and adjacent columns must have opposite colours, so both rows alternate between black and white identically. Each remaining row is then forced to alternate as well, so the whole grid is determined by its first column; this case gives 2^m - 2 possibilities (we omit the two cases of alternating colours down the first column, which produce no vertically matching pair). Such a grid cannot have two horizontally adjacent squares of the same colour. By a similar argument, the colourings containing two horizontally adjacent squares of the same colour number 2^n - 2. Finally we have the two additional chessboard configurations with no adjacent squares of the same colour, each uniquely determined by the colour of the top left square. Hence in total we have (2^m-2) + (2^n-2) + 2 = 2^m + 2^n - 2 possible colourings.

This question for m = n = 8 was in the 2017 Australian Mathematics Competition and the general solution is also discussed here.
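
The 2 by 2 conditions of Examples 2 and 3 can also be checked directly for small grids; here is a short Python sketch (the helper name is mine) that enumerates colourings and compares against 2^{m+n-1} and 2^m + 2^n - 2.

```python
from itertools import product

def count_2x2_condition(m, n, ok):
    """Count m x n black/white grids in which every 2 by 2 window's number of
    black squares satisfies the predicate `ok`."""
    count = 0
    for cells in product((0, 1), repeat=m * n):
        g = [cells[i * n:(i + 1) * n] for i in range(m)]
        if all(ok(g[i][j] + g[i][j + 1] + g[i + 1][j] + g[i + 1][j + 1])
               for i in range(m - 1) for j in range(n - 1)):
            count += 1
    return count

for m, n in [(2, 2), (2, 3), (3, 3), (3, 4)]:
    assert count_2x2_condition(m, n, lambda s: s % 2 == 1) == 2 ** (m + n - 1)  # Example 2
    assert count_2x2_condition(m, n, lambda s: s == 2) == 2 ** m + 2 ** n - 2   # Example 3
print("Examples 2 and 3 verified for small grids")
```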

Example 4: The number of ways of colouring an m by n grid black or white so that each row and each column contain at least one black square is (OEIS A183109)

\displaystyle \sum_{j=0}^m (-1)^j \binom{m}{j} (2^{m-j}-1)^n.

Proof: First we count the number of colourings in which a fixed subset of j columns is entirely white and each row has at least one black square: each of the n rows can be coloured in 2^{m-j}-1 ways within the remaining m-j columns, giving (2^{m-j}-1)^n colourings. To enforce that each column also contains at least one black square we apply the principle of inclusion-exclusion over the set of all-white columns and arrive at the above result.

Another inclusion-exclusion example shown here counts the number of 3 by 3 black/white grids in which there is no 2 by 2 black square. The answer is 417 with more terms for n by n grids in OEIS A139810.
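
The inclusion-exclusion formula of Example 4 can also be verified numerically. The sketch below (function names are mine) compares it with direct enumeration for small grids, with m playing the role of the number of columns as in the formula, and prints the first few square-grid terms of OEIS A183109.

```python
from itertools import product
from math import comb

def brute(m, n):
    """Count grids with n rows and m columns having a black square in every row and column."""
    total = 0
    for cells in product((0, 1), repeat=m * n):
        g = [cells[i * m:(i + 1) * m] for i in range(n)]   # n rows of m columns
        rows_ok = all(any(row) for row in g)
        cols_ok = all(any(g[i][j] for i in range(n)) for j in range(m))
        if rows_ok and cols_ok:
            total += 1
    return total

def formula(m, n):
    return sum((-1) ** j * comb(m, j) * (2 ** (m - j) - 1) ** n for j in range(m + 1))

for m, n in [(2, 2), (2, 3), (3, 3)]:
    assert brute(m, n) == formula(m, n)
print([formula(k, k) for k in range(1, 5)])   # 1, 7, 265, 41503 (OEIS A183109)
```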

Example 5: Suppose we wish to count the number of colourings of an m by n grid in which row i has k_i black squares and column j has l_j black squares (i = 1, 2, \ldots m, j = 1, 2, \ldots, n). Following [1], the number of ways this can be done is the coefficient of x_1^{k_1}x_2^{k_2} \ldots x_m^{k_m}y_1^{l_1}y_2^{l_2}\ldots y_n^{l_n} in the polynomial

\displaystyle \prod_{i=1}^m \prod_{j=1}^n (1 + x_i y_j).

To see this, note that expanding the product gives a sum of products of terms of the form (x_i y_j), where including such a term corresponds to the square in row i and column j being coloured black. Hence the coefficient of x_1^{k_1}x_2^{k_2} \ldots x_m^{k_m}y_1^{l_1}y_2^{l_2}\ldots y_n^{l_n} is the number of 0-1 solutions (a_{ij}) of the system \sum_{j=1}^n a_{ij} = k_i, \sum_{i=1}^m a_{ij} = l_j (i = 1, 2, \ldots m, j = 1, 2, \ldots, n), where a_{ij} = 1 if and only if the square in row i and column j is coloured black.

Let us evaluate this in the special case of 2 black squares in every row and every column of an n by n grid (i.e. k_i = l_j = 2 and m = n). Picking two squares in each column to colour black means that, viewing the expansion as a polynomial in y_1, \ldots, y_n, the coefficient of y_1^2y_2^2\ldots y_n^2 is a sum of products of n terms of the form x_ix_j. Then using [\,\cdot\,] notation to denote the coefficient of a monomial, we have

\begin{aligned} \left[x_1^2x_2^2 \ldots x_n^2y_1^2y_2^2\ldots y_n^2 \right]  \prod_{i=1}^n \prod_{j=1}^n (1 + x_i y_j) &= \left[x_1^2x_2^2 \ldots x_n^2 \right] \left( \sum_{i=1}^n\sum_{j=i+1}^n x_i x_j \right)^n\\&= \left[x_1^2x_2^2 \ldots x_n^2 \right] 2^{-n} \left( \left( \sum_{i=1}^n x_i\right)^2 - \sum_{i=1}^n x_i^2 \right)^n\\ &= \left[x_1^2x_2^2 \ldots x_n^2 \right] 2^{-n} \sum_{k=0}^n (-1)^k \binom{n}{k} \left( \sum_{i=1}^n x_i^2 \right)^k\left(\sum_{i=1}^n x_i\right)^{2(n-k)}\\ &=  2^{-n}  \sum_{k=0}^n (-1)^k \binom{n}{k} \frac{n!}{(n-k)!} \frac{(2n-2k)!}{2^{n-k}}\\ &= 4^{-n}  \sum_{k=0}^n (-1)^k \binom{n}{k}^2 k!\, 2^k  (2n-2k)!. \end{aligned}

Here the second last line follows from counting the number of ways a product of k distinct squares x_i^2 arises in \left( \sum_{i=1}^n x_i^2 \right)^k (which is \frac{n!}{(n-k)!}, an ordered choice of k distinct indices) and the number of ways the product of the remaining (n-k) squares x_i^2 arises in \left(\sum_{i=1}^n x_i\right)^{2(n-k)} (which is \frac{(2n-2k)!}{2^{n-k}}).

For example, when n=4 this is equivalent to finding the coefficient of a^2b^2c^2d^2 in (ab + ac + ad + bc + bd + cd)^4. Products are either paired up in complementary ways such as in ab.ab.cd.cd (3 \times \binom{4}{2} = 18 ways) or we have the three products ab.bc.cd.ad, ab.bd.cd.ac, ac.bc.bd.ad (3 \times 4! = 72 ways). This gives us a total of 90 (this question appeared in the 1992 Australian Mathematics Competition). More terms of the sequence are found in OEIS A001499 and the 6 by 4 case (colouring two shaded squares in each row and three in each column in 1860 ways) appeared in the 2007 AIME I (see Solution 7 here).
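
A quick way to gain confidence in the closed form above (including the k! factor) is to compare it with direct enumeration; the Python sketch below (function names are mine) does this for small n, reproducing the terms of OEIS A001499.

```python
from itertools import combinations, product
from math import comb, factorial

def brute(n):
    """Count n x n grids with exactly two black squares in every row and every column."""
    rows = list(combinations(range(n), 2))        # possible positions of the two blacks in a row
    count = 0
    for choice in product(rows, repeat=n):
        col_sums = [0] * n
        for r in choice:
            for j in r:
                col_sums[j] += 1
        if all(s == 2 for s in col_sums):
            count += 1
    return count

def closed_form(n):
    return sum((-1) ** k * comb(n, k) ** 2 * factorial(k) * 2 ** k * factorial(2 * n - 2 * k)
               for k in range(n + 1)) // 4 ** n

for n in range(2, 6):
    assert brute(n) == closed_form(n)
print([closed_form(n) for n in range(2, 7)])   # 1, 6, 90, 2040, 67950 (OEIS A001499)
```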

Example 6: If we wish to count the number of grid configurations in which reflections or rotations are considered equivalent, we may make use of Burnside’s lemma: the number of orbits of a group action is the average number of points fixed by an element of the group. For example, to find the number of configurations of 4 by 4 grids up to rotational symmetry, we consider the cyclic group C_4. For quarter turns there are 2^4 configurations fixed (a quadrant determines the colouring of the remainder of the grid) while for half turns there are 2^8 fixed configurations, as one half determines the colouring of the other half. This gives us an answer of

\displaystyle \frac{2^{16} + 2 \cdot 2^4 + 2^8}{4} = 16456,

which is part of OEIS A047937. If reflections are also considered equivalent we need to consider the dihedral group D_4 and we arrive at the sequence in OEIS A054247.
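
The same Burnside computation is easy to carry out by enumeration; a small Python sketch (names are mine) that reproduces 16456 for the 4 by 4 grid under C_4 is below.

```python
from itertools import product

def rotate(grid):
    """Rotate a tuple-of-tuples square grid by 90 degrees."""
    n = len(grid)
    return tuple(tuple(grid[n - 1 - j][i] for j in range(n)) for i in range(n))

def count_up_to_rotation(n):
    """Burnside count of n x n two-colourings up to the rotation group C_4."""
    fixed = [0, 0, 0, 0]                          # identity, 90, 180, 270 degrees
    for cells in product((0, 1), repeat=n * n):
        g = tuple(tuple(cells[i * n:(i + 1) * n]) for i in range(n))
        r1 = rotate(g); r2 = rotate(r1); r3 = rotate(r2)
        fixed[0] += 1
        fixed[1] += (g == r1)
        fixed[2] += (g == r2)
        fixed[3] += (g == r3)
    return sum(fixed) // 4

print(count_up_to_rotation(4))   # 16456, matching (2^16 + 2*2^4 + 2^8)/4
```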

If we want to count the number of 3 by 3 grids with four black squares up to equivalence, this is equivalent to the number of full noughts and crosses configurations. A nice video by James Grime explaining this is here (the answer is 23).

Example 7: The number of ways of tiling an m by n grid with 2 by 1 dominoes has the amazing form

\displaystyle \prod_{j=1}^{\lceil m/2 \rceil} \prod_{k=1}^{\lceil n/2 \rceil} \left(4 \cos^2 \frac{\pi j}{m+1} + 4 \cos^2 \frac{\pi k}{n+1}\right).

For example, the 36 ways of tiling a 4 by 4 grid are given here. A proof of the above formula using the Pfaffian of the adjacency matrix of the corresponding grid graph is given in chapter 10 of [2].
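
The product formula can be evaluated numerically and compared with a direct tiling count; the sketch below (function names are mine) rounds the floating point product to the nearest integer and checks it against a simple recursive enumeration.

```python
from functools import lru_cache
from math import cos, pi

def kasteleyn(m, n):
    """Product formula for the number of domino tilings of an m x n grid."""
    prod = 1.0
    for j in range(1, (m + 1) // 2 + 1):      # j = 1..ceil(m/2)
        for k in range(1, (n + 1) // 2 + 1):  # k = 1..ceil(n/2)
            prod *= 4 * cos(pi * j / (m + 1)) ** 2 + 4 * cos(pi * k / (n + 1)) ** 2
    return round(prod)

def tilings(m, n):
    """Brute-force domino tiling count by filling cells in reading order."""
    @lru_cache(maxsize=None)
    def go(filled):
        if all(filled):
            return 1
        i = filled.index(False)
        r, c = divmod(i, n)
        total = 0
        if c + 1 < n and not filled[i + 1]:       # place a horizontal domino
            f = list(filled); f[i] = f[i + 1] = True
            total += go(tuple(f))
        if r + 1 < m and not filled[i + n]:       # place a vertical domino
            f = list(filled); f[i] = f[i + n] = True
            total += go(tuple(f))
        return total
    return go(tuple([False] * (m * n)))

for m, n in [(2, 2), (2, 3), (3, 4), (4, 4), (4, 5)]:
    assert kasteleyn(m, n) == tilings(m, n)
print(kasteleyn(4, 4))   # 36
```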

 

References

[1] L. Comtet, Advanced Combinatorics: The Art of Finite and Infinite Expansions (pp 235-6), D. Reidel Publishing Company, 1974.

[2] M. Aigner, A Course in Enumeration, Springer, 2007.


December 26, 2017

The evolution of ODI team totals

Filed under: cricket,sport — ckrao @ 11:46 am

Over the years one day international cricket scores have been on the rise and this post intends to look into this in some detail. We shall restrict ourselves to first innings scores where the team batting first lasted exactly 50 overs. Hence games scheduled for more than 50 overs per side, or where the team batting first was bowled out prematurely, are omitted. There are 2349 (out of 3945) such matches according to this query on Cricinfo Statsguru, and on average 7 wickets fall over the 50 overs. The plot below shows a scatter plot of the scores over time. The red curve shows that mean scores were steady around 225 during the 1980s and have been on the rise since 1990, so that now the mean score is approaching 300.

[Figure: scorevstime]

Note that the first data point in 1974 corresponds to a game that was reduced to 50 overs per side after originally intended to be a 55 over game.

If we slice the data into eras marked by calendar years containing roughly equal numbers of games, the rate of increase in the mean score slowed slightly over 2008-2012, then picked up again in the past five years.

Era Number of matches Mean score batting first
1974-1994 427 229
1995-1999 383 247
2000-2003 368 257
2004-2007 380 267
2008-2012 393 272
2013-2017 398 288
1974-2017 2349 260

The histograms below show how rarely teams score less than 200 runs in recent times when using the full quota of 50 overs. In fact these days a team is more likely to score over 400 than below 200 if using the full quota of 50 overs!

[Figure: runtotals]

Comparing the distribution of first innings winning versus losing scores we find that the mean scores are 275 vs 236 respectively with sample sizes 1392 vs 901 (34 games had no result and 22 were tied). Restricting to the past five years, the median score batting first for the full 50 overs in winning matches is exactly 300.

[Figure: winvsloss]

Interestingly if we break down the runs scatter plot by team, the trends are not the same across the board. In particular England and South Africa have had more dramatic increases in recent times than the other teams, especially compared with India, Pakistan, Sri Lanka and West Indies.

[Figure: runtotalsbycountry]

Restricting to the last five years (2013-2017), here are the mean first innings scores for each team based on the match result (assuming they bat the full 50 overs).

Team Result mean score # matches
Afghanistan lost 249 6
Afghanistan won 260 12
Australia lost 295 13
Australia n/r 253 3
Australia won 310 31
Bangladesh lost 263 16
Bangladesh won 275 15
Canada lost 230 3
England lost 282 13
England won 329 22
Hong Kong won 283 4
India lost 282 12
India won 310 27
Ireland lost 244 6
Ireland tied 268 1
Ireland won 289 3
Kenya lost 260 1
Netherlands lost 265 1
New Zealand lost 277 12
New Zealand tied 314 1
New Zealand won 308 27
P.N.G. lost 218 2
P.N.G. won 232 1
Pakistan lost 266 9
Pakistan n/r 296 1
Pakistan tied 229 1
Pakistan won 290 20
Scotland lost 238 6
Scotland won 284 8
South Africa lost 258 7
South Africa n/r 301 1
South Africa won 321 36
Sri Lanka lost 249 15
Sri Lanka n/r 268 2
Sri Lanka tied 286 1
Sri Lanka won 305 22
U.A.E. lost 279 3
U.A.E. won 267 3
West Indies lost 265 10
West Indies won 298 10
Zimbabwe lost 247 9
Zimbabwe tied 257 1
Zimbabwe won 276 1

The England and South Africa numbers stand out the most here in winning causes. Also Australia has a particularly high average score of 294 in losing causes. Sri Lanka has the largest difference (56 runs) between average winning and losing scores.

Edit: The following shows the mean scores in the 100 matches prior to and after key rule changes (still focusing on first innings 50-over scores). Note that in two of the three cases, the average scores reduced.

  1. Restriction of two fielders outside the 30-yard circle in the first 15 overs (’92 World Cup)
    03 Jan 88 to 20 Jan 92: 231
    12 Feb 92 to 16 Feb 94: 222
  2. Introduction of Powerplay overs
    13 Mar 04 to 30 Jun 05: 267
    07 Jul 05 to 08 Sep 06: 267
  3. Removal of the batting powerplay, fifth fielder allowed outside the circle in the last ten overs
    17 Aug 14 to 24 Jun 15: 301
    10 Jul 15 to 19 Jan 17: 289

September 8, 2017

Notes on von Neumann’s algebra formulation of Quantum Mechanics

Filed under: mathematics,science — ckrao @ 9:49 pm

The Hilbert space formulation of (non-relativistic) quantum mechanics is one of the great achievements of mathematical physics. Typically in undergraduate physics courses it is introduced as a set of postulates (e.g. the Dirac-von Neumann axioms) and is hard to motivate without some knowledge of functional analysis or at least probability theory. Some of that motivation and the connection with probability theory is summarised in the notes here – in fact it can be said that quantum mechanics is essentially non-commutative probability theory [2]. Furthermore, having an algebraic point of view seems to provide a unified picture of classical and quantum mechanics.

The important difference between classical and quantum mechanics is that in the latter, the order in which measurements are taken sometimes matters. This is because obtaining the value of one measurement can disturb the system of interest to the extent that a consistently precise value of the other cannot be found. A famous example is position and momentum of a quantum particle – the Heisenberg uncertainty relation states that the product of their measurement uncertainties (standard deviations) is at least \hbar/2.

If measurements are treated as real-valued functions on the state space of the system, we will not be able to capture the fact that the measurements do not commute. Since linear operators (e.g. matrices) do not commute in general, we use algebras of operators instead. We make use of the spectral theory of a special class of algebras with norm and adjoint, known as von Neumann algebras, which in turn are a special case of C*-algebras. The spectrum of an operator A is the set of numbers \lambda for which (A-\lambda I) does not have an inverse. Self-adjoint operators have a real spectrum and will represent the set of values that an observable (a physical variable that can be measured) can take. Hence we have this correspondence between self-adjoint operators and observables.

By the Gelfand-Naimark theorem C*-algebras can be represented as algebras of bounded operators on a Hilbert space {\cal H}. See Section II.6.4 of [3] for proof details. If the C*-algebra is commutative the representation is as continuous functions on a locally compact Hausdorff space that vanish at infinity. Furthermore we assume the C*-algebra and corresponding Hilbert space are separable, meaning the space contains a countable dense subset (analogous to how the rationals are dense in the reals). This ensures that the Stone-von Neumann theorem holds, which was used to show that the Heisenberg and Schrödinger pictures of quantum physics are equivalent [see pp7-8 here].

The link between C*-algebras and Hilbert spaces is made via the notion of a state, which is a positive linear functional of norm 1 on the algebra. A state evaluated on a self-adjoint operator outputs a real number that will represent the expected value of the observable corresponding to that operator. Note that it is impossible to have two different states that have the same expected values on all observables. A state \omega is called pure if it is an extreme point of the (convex) space of states. In other words, we cannot write a pure state \omega as \omega = \lambda \omega_1 + (1-\lambda) \omega_2 where \omega_1 \neq \omega_2 are states and 0 < \lambda < 1. A state that is not pure is called mixed.

Now referring to a Hilbert space {\cal H}, for any mapping \Phi of bounded operators B({\cal H}) to expectation values such that

  1. \Phi(I) = 1 (it makes sense that the identity should have expectation value 1),
  2. self-adjoint operators are mapped to real numbers with positive operators (those with positive spectrum) mapped to positive numbers and
  3. \Phi is continuous with respect to the strong convergence in B({\cal H}) – i.e. if \lVert A_n \psi - A \psi \rVert \rightarrow 0 for all \psi \in H, then \Phi (A_n) \rightarrow \Phi (A),

then there is a unique self-adjoint non-negative trace-one operator \rho (known as a density matrix) such that \Phi (A) = \text{trace}(\rho A) for all A \in B({\cal H}) (see [1] Proposition 19.9). (The trace of an operator A is defined as \sum_k \langle e_k, Ae_k \rangle where \{e_k \} is an orthonormal basis in the separable Hilbert space – in the finite dimensional case it is the sum of the operator’s eigenvalues.) Hence states are represented by positive self-adjoint operators with trace 1. Such operators are compact and so have a countable orthonormal basis of eigenvectors.

When \rho corresponds to a projection operator onto a one-dimensional subspace it has the form \rho = vv^* where v \in {\cal H} and \lVert v \rVert = 1. In this case we can show \text{trace}(\rho A) = \langle v, Av \rangle = v^*Av, which recovers the alternative view that unit vectors of {\cal H} correspond to states (known as vector states) so that the expected value of an observable corresponding to the operator A is \langle v, Av \rangle. This is done by choosing the orthonormal basis \{e_k \} where e_1 = v and computing

\begin{aligned} \text{trace}(\rho A) &= \sum_k \langle e_k, vv^*Ae_k \rangle\\ &= \sum_k e_k^* v v^* Ae_k\\ &= e_1^* e_1 e_1^*Ae_1 \quad \text{ (as }e_k^*v = \langle e_k, v \rangle = 0\text{ for } k > 1\text{)}\\ &= e_1^*Ae_1\\ &= \langle v, Av \rangle. \end{aligned}

Positive trace-one operators \rho can be written as a convex combination of rank one projection operators: \rho = \sum_k \lambda_k v_k v_k^*. From this it can be shown that those density operators which cannot be written as a convex combination of other states (called pure states) are precisely those of the form \rho = vv^*. Hence vector states and pure states are equivalent notions. Mixed states can be interpreted as a probabilistic mixture (convex combination) of pure states.
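
In finite dimensions all of this can be checked directly with a few lines of numpy; the sketch below (variable names are mine) verifies \text{Tr}(\rho A) = \langle v, Av \rangle for a pure state and forms a mixed state as a convex combination of two pure states.

```python
import numpy as np

# A random pure state rho = v v* on C^3 and a Hermitian "observable" A.
rng = np.random.default_rng(0)
v = rng.normal(size=3) + 1j * rng.normal(size=3)
v = v / np.linalg.norm(v)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (A + A.conj().T) / 2                      # make A self-adjoint

rho_pure = np.outer(v, v.conj())              # rank-one projection, trace 1
assert np.isclose(np.trace(rho_pure @ A), v.conj() @ A @ v)   # Tr(rho A) = <v, Av>

# A mixed state: convex combination of two pure states.
w = rng.normal(size=3) + 1j * rng.normal(size=3)
w = w / np.linalg.norm(w)
rho_mixed = 0.3 * np.outer(v, v.conj()) + 0.7 * np.outer(w, w.conj())
assert np.isclose(np.trace(rho_mixed), 1.0)
print(np.trace(rho_mixed @ A).real)           # expected value of A in the mixed state
```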

Let us now look at the similarity with probability theory. A measure space is a triple (X, {\cal S}, \mu) where X is a set, {\cal S} is a collection of measurable subsets of X called a \sigma-algebra and \mu:{\cal S} \rightarrow {\mathbb R}^{+} \cup \{\infty\} is a \sigma-additive measure. If g is a non-negative integrable function with \int g \ d\mu = 1 it is called a density function and then we can define a probability measure p_g:{\cal S} \rightarrow [0,1] by

\displaystyle p_g(S) = \int_S  g\ d\mu \in [0,1], S \in {\cal S}.

A random variable f:X\rightarrow \mathbb{R} maps elements of a set to real numbers in such a way that f^{-1}(B) \in {\cal S} for any Borel subset B of \mathbb{R}. This enables us to compute its expectation with respect to the density function g as

\displaystyle \int_X f \ dp_g = \int_X fg\ d\mu.

This is like the quantum formula \text{Tr}(\rho A) with our density operator \rho playing the role of g and operator A playing the role of random variable f. Hence a probability density function is the commutative probability analogue of a quantum state (density operator).

While Borel sets are the events from which we define simple functions and then random variables, in the non-commutative case we define operators in terms of projections (equivalently closed subspaces) of a Hilbert space {\cal H}. A projection operator P is self-adjoint, satisfies P^2 = P and has the discrete spectrum \{0,1\}. Hence they are analogous to 0-1 indicator random variables, the answers to yes/no events. For any unit vector v \in {\cal H} the expected value

\displaystyle \langle v, Pv \rangle = \langle v, P^2v \rangle = \langle Pv, Pv \rangle = \lVert Pv \rVert^2

is interpreted as the probability the observable corresponding to P will have value 1 when measured in the state corresponding to v. In particular this probability will be 1 if and only if v is in the invariant subspace of P. We define meet and join operations \vee, \wedge on these closed subspaces to create a Hilbert lattice ({\cal P}({\cal H}), \vee, \wedge, \perp):

  • A \wedge B = A \cap B
  • A \vee B = \text{closure of } A + B
  • A^{\perp} = \{u: \langle u,v \rangle = 0\ \forall v \in A\}

Borel sets form a \sigma-algebra in which the distributive law A \cap (B \cup C) = (A \cap B) \cup (A \cap C) holds for any elements of {\cal S}. However in the Hilbert lattice the corresponding rule A \wedge (B \vee C) = (A \wedge B) \vee (A \wedge C) (where A, B, C are projection operators) only holds some of the time (see here for an example). This failure of the distributive law is equivalent to the general non-commutativity of projections.

A quantum probability measure \phi:{\cal P} \rightarrow [0,1] can be defined by combining projections in a \sigma-additive way, namely \phi(0) = 0, \phi(I) = 1 and \phi(\vee_i P_i) = \sum_i \phi(P_i) where P_i are mutually orthogonal projections (P_i \leq P_j^{\perp}, i \neq j). Gleason’s theorem says that for Hilbert space dimension at least 3 a state is uniquely determined by the values it takes on the orthogonal projections – a quantum probability measure can be extended from projections to bounded operators to obtain \phi(A) = \text{Tr}(\rho_{\phi} A), similar to how characteristic functions are extended to integrable functions. Hence this is a key result for non-commutative integration (note: the continuity conditions defining \Phi in 1-3 above are stronger). We choose von Neumann algebras over C*-algebras since the former contain all spectral projections of their self-adjoint elements while the latter may not [ref].

So far we have seen that expected values of observables A are derived via the formula \text{Tr}(\rho A). To derive the distribution itself, we make use of the spectral theorem, and for self-adjoint operators with continuous spectrum this requires projection valued measures. A self-adjoint operator A has a corresponding function E_A:{\cal S} \rightarrow {\cal P}({\cal H}) mapping Borel sets to projections so that E_A(S) represents the event that the outcome of measuring observable A is in the set S: we require that E_A(X) = I and S \mapsto \langle u,E_A(S)v \rangle is a complex additive function (measure) for all u, v \in {\cal H}. We use E_A(\lambda) as shorthand for E_A(\{x:x\leq \lambda\}). Similar to the way a finite dimensional self-adjoint matrix M may be eigen-decomposed in terms of its eigenvalues \lambda_i and normalised eigenvectors u_i as

\begin{aligned} M &= \sum_i \lambda_i u_i u_i^T \\ &= \sum_i \lambda_i P_i \quad \text{(where }P_i := u_i u_i^T \text{ is a projection)}\\ &= \sum_i \lambda_i (E_i - E_{i-1}), \quad \text{(where } E_i := \sum_{k \leq i} P_k\text{),} \end{aligned}

the spectral theorem for more general self-adjoint operators allows us to write

A = \int_{\sigma(A)} \lambda dE_A(\lambda)

which means that for every u, v \in {\cal H},

\langle u, Av \rangle = \int_{\sigma(A)} \lambda d\langle u,E_A v \rangle.

Here, the integrals are over the spectrum of A. Through this formula we can work with functions of operators and in particular the distribution of the random variable X corresponding to operator A in state \rho will be

\text{Pr}(X \leq x) = E\left[ 1_{\{X \leq x\} }\right] = \text{Tr} \left( \rho\int_{-\infty}^x dE_A(\lambda) \right) = \text{Tr} \left( \rho E_A(x) \right).
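
A finite-dimensional illustration: the numpy sketch below (all names are mine) builds the spectral projections of a random self-adjoint matrix and checks that the distribution \text{Tr}(\rho E_A(x)) reproduces the expectation \text{Tr}(\rho A) for a vector state.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)); A = (A + A.T) / 2            # a self-adjoint "observable"
v = rng.normal(size=4); v = v / np.linalg.norm(v)         # a vector (pure) state
rho = np.outer(v, v)

evals, evecs = np.linalg.eigh(A)                          # spectral decomposition A = sum_i lam_i P_i

def cdf(x):
    """Pr(X <= x) = Tr(rho E_A(x)), E_A(x) projecting onto eigenspaces with eigenvalue <= x."""
    E = sum(np.outer(evecs[:, i], evecs[:, i]) for i in range(4) if evals[i] <= x)
    return float(np.trace(rho @ E)) if np.ndim(E) else 0.0

# The distribution is a sum of point masses at the eigenvalues; its mean equals Tr(rho A).
probs = [cdf(lam) - cdf(lam - 1e-9) for lam in evals]
assert np.isclose(sum(p * lam for p, lam in zip(probs, evals)), np.trace(rho @ A))
print([round(p, 3) for p in probs])
```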

The similarities we have seen here between classical probability and quantum mechanics are summarised in the table below, largely taken from [2] which greatly aided my understanding. Note how the pairing between trace class and bounded operators is analogous to the duality of L^1 and L^{\infty} functions.

Classical Probability ↔ Quantum Mechanics (non-commutative probability)

  • (X,{\cal S}, \mu) – measure space ↔ ({\cal H}, {\cal P}({\cal H}), \text{Tr}) – Hilbert space model of QM
  • X – set ↔ {\cal H} – Hilbert space
  • {\cal S} – Boolean algebra of Borel subsets of X called events ↔ {\cal P}({\cal H}) – orthomodular lattice of projections (equivalently closed subspaces) of {\cal H}
  • disjoint events ↔ orthogonal projections
  • \mu:{\cal S} \rightarrow {\mathbb R}^{+} \cup \{\infty\} – \sigma-additive positive measure ↔ \text{Tr} – trace functional
  • g \in L^1(X,\mu), g \geq 0, \int g \ d\mu = 1 – integrable functions (probability density functions) ↔ \rho \in {\cal T}({\cal H}), \rho \geq 0, \text{Tr}(\rho) = 1 – trace class operators (density operators)
  • p_g(S) = \int \chi_S g\ d\mu \in [0,1], S \in {\cal S} – probability measure mapping Borel sets to numbers in [0,1] in a \sigma-additive way ↔ \phi(S) = \text{Tr}(\rho_{\phi} S) \in [0,1], \rho_{\phi} \in {\cal T}({\cal H}), S \in {\cal P}({\cal H}) – quantum state mapping projections to numbers in [0,1] in a \sigma-additive way
  • f \in L^{\infty}(X,\mu) – essentially bounded measurable functions (bounded random variables) ↔ A \in {\cal B}({\cal H}) – von Neumann algebra of bounded operators (bounded observables)
  • \int fg\ d\mu, g \in L^1(X,\mu) – expectation value of f \in L^{\infty}(X,\mu) with respect to p_g ↔ \text{Tr}(\rho A), \rho \in {\cal T}({\cal H}) – expectation value of A \in {\cal B}({\cal H}) in state \rho

In summary, the fact that measurements don’t always commute leads us to consider non-commutative operator algebras. This in turn gives the Hilbert space representation of quantum mechanics, where a quantum state is a trace-one density operator and an observable is a bounded linear operator. We also saw that projections can be viewed as 0-1 events. The spectral theorem is used to decompose operators into a sum or integral of projections.

The richer mathematical setting for quantum mechanics allows us to model non-classical phenomena such as quantum interference and entanglement. We have not mentioned the time evolution of states, but in short, state vectors evolve unitarily according to the Schrödinger equation, generated by an operator known as the Hamiltonian.

References and Further Reading

[1] Hall, B.C., Quantum Theory for Mathematicians, Springer, Graduate Texts in Mathematics #267, June 2013 (relevant section)

[2] Redei, M., Von Neumann’s work on Hilbert space quantum mechanics

[3] Blackadar, B., Operator Algebras: Theory of C*-Algebras and von Neumann Algebras

[4] Wilce, Alexander, “Quantum Logic and Probability Theory“, The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), Edward N. Zalta (ed.).

[5] Wikipedia – Quantum logic

[6] Planetmath.org – Lattice of Projections

[7] Planetmath.org – Spectral Measure

[8] quantum mechanics – Intuitive meaning of Hilbert Space formalism – Physics Stack Exchange

[9] This answer to: mathematical physics – Quantum mechanics in a metric space rather than in a vector space, possible? – Physics Stack Exchange

[10] functional analysis – Resolution of the identity (basic questions) – Mathematics Stack Exchange

August 27, 2017

Busy roads of Melbourne

Filed under: geography — ckrao @ 11:16 am

VicRoads has a large collection of open data, which includes traffic count estimates for main roads (excluding toll roads). I have taken the kml file from that link, colour coded the roads by two-way counts and shown only those with at least 35,000 vehicles per day (two-way traffic, 2017 estimates, averaged over a year), leaving 1,465 road segments. The results are mapped below (click on a road for count information).

As expected the main freeways carry the most traffic with the West Gate Freeway near the Western Link (CityLink) carrying the maximum average of 196,000 vehicles per day. The busiest segment of non-freeway is a stretch of Kings Way between Albert Road and Queens Road (99,000 vehicles per day).

June 30, 2017

The ballot problem and Catalan’s triangle

Filed under: Uncategorized — ckrao @ 10:15 pm

The ballot problem asks for the probability that candidate A is always ahead of candidate B during a tallying process if they respectively end up with p and q votes where p > q. For example if p = 2, q = 1 there are 3 ways in which the three votes can be counted (AAB, ABA, BAA), but the only favourable outcome in which A remains ahead throughout occurs if the tally appears as AAB. Hence the probability A remains ahead is 1/3.

If there are no restrictions, the number of ways the votes can be tallied is the binomial coefficient \binom{p+q}{p}. The number of favourable outcomes (the numerator of the desired probability) in which A remains ahead can be counted recursively in a similar way to Pascal’s triangle (each number being the sum of the two neighbours above it), except that no number may appear to the left of the vertical midline, as illustrated below. For example, the second element of the fifth row (3) corresponds to the case p = 3, q = 1 (AAAB, AABA, ABAA). More generally, dividing into the cases where the final vote is A or B, the number of ways N_{p,q} in which A remains ahead of B is equal to N_{p-1,q} + N_{p,q-1}, where N_{p,q} = 0 if q \geq p. This sequence appears as A008313 in the OEIS and is the reversed form of Catalan’s triangle.

[Figure: Catalan_tri]

A way of deriving the general term is to make use of a beautiful reflection principle that gives a 1-1 correspondence between tallies that lead to a tie at some point and tallies in which the first vote goes to candidate B: simply interchange A with B for all votes up to and including that tie. This amounts to reflecting the random walk about the midline, as illustrated below with the blue path corresponding to ABAA and the red path to BAAA.

[Figure: pathreflection]

Since p > q, the probability candidate A always leads is 1 minus the probability the sequence ties at some point. But the bijection above shows an equal number of these start with A and with B, so our desired probability is

\displaystyle 1 - 2 \text{Pr(sequence starts with B)} = 1 - 2\frac{q}{p+q} = \frac{p-q}{p+q}.

The numbers in the triangle are also formed by differences of adjacent entries of Pascal’s triangle, namely row p+q has terms of the form

\displaystyle \begin{aligned} N_{p,q} &= \binom{p+q}{p}\frac{p-q}{p+q}\\ &= \frac{(p+q-1)!(p-q)}{p!q!}\\&= \binom{p+q-1}{q}-\binom{p+q-1}{p}.\end{aligned}

This can be interpreted as the number of unrestricted sequences with p As and q Bs of length (p+q) that start with A minus the corresponding number that start with B, again following from the reflection principle.
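
The recursion and the closed form are easy to check against each other; here is a short Python sketch (function names are mine).

```python
from math import comb

def ballot_recursive(p, q):
    """N_{p,q}: tallies of p As and q Bs in which A stays strictly ahead throughout."""
    if q >= p:
        return 0
    if q == 0:
        return 1
    return ballot_recursive(p - 1, q) + ballot_recursive(p, q - 1)

def ballot_formula(p, q):
    return comb(p + q - 1, q) - comb(p + q - 1, p)

for p in range(1, 9):
    for q in range(0, p):
        assert ballot_recursive(p, q) == ballot_formula(p, q)
print(ballot_formula(2, 1) / comb(3, 2))   # 1/3, the p=2, q=1 example above
```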

As an aside, looking at the bottom row above we see N_{8,6} = N_{10,4} = 429, or equivalently

\displaystyle \binom{13}{4} - \binom{13}{3} = \binom{13}{6} - \binom{13}{5} = 429.

Finally we note that the Catalan numbers arise from the following parts of the triangle above:

  • as entries in the first column (counting Dyck paths)
  • as the sum of squares of each row
  • as the sum of entries in NE-SW diagonals

Catalan’s triangle can be generalised to a trapezium in which we count the number of strings consisting of n As and k Bs such that in every initial segment of the string the number of Bs does not exceed the number of As by m or more.

April 9, 2017

Highest aggregates and averages after n test matches/innings

Filed under: cricket,sport — ckrao @ 9:14 am

Soon after the 2017 India-Australia test series I noticed that among players who have played 54 tests, nobody has scored more than Steven Smith’s 5251 runs. Here is a list of the top four aggregates and averages after 54 tests and 100 innings.

After 54 tests (runs): 5251 SPD Smith (AUS), 5210 SM Gavaskar (INDIA), 4991 L Hutton (ENG), 4840 JB Hobbs (ENG)
After 54 tests (average): 61.06 SPD Smith (AUS), 60.73 H Sutcliffe (ENG), 59.51 GS Sobers (WI), 59.02 JB Hobbs (ENG)
After 100 innings (runs): 5354 JB Hobbs (ENG), 5345 GS Sobers (WI), 5279 WR Hammond (ENG), 5251 SPD Smith (AUS)
After 100 innings (average): 61.06 SPD Smith (AUS), 60.74 GS Sobers (WI), 60.68 WR Hammond (ENG), 58.48 L Hutton (ENG)

A more complete list of the top 10 scorers in these categories after n tests/innings is below. Statistics are from ESPN Cricinfo and are current to 9 January 2018. Corrections are welcome.

The following players have been ranked first for some n (as of 9 January 2018):

  • highest aggregate after n innings: Foster, Rowe, Javed Miandad, Kambli, Weekes, Bradman, Hobbs, Sobers, Smith, Hammond, Hutton, Sehwag, Tendulkar, Sangakkara, Lara
  • highest aggregate after n tests: Rowe, Foster, Javed Miandad, Gavaskar, Bradman, Smith, Sobers, Hutton, Sehwag, Sangakkara, Younis Khan, Lara, Ponting, Kallis, Dravid, Tendulkar
  • highest average after n innings: Foster, Rowe, Bell, Trott, Kambli, Gavaskar, Harvey, Bradman, Voges, Sutcliffe, Hobbs, Smith, Barrington, Sobers, Hammond, Hutton, Dravid, Tendulkar, Ponting, Sangakkara, Kallis
  • highest average after n tests: Rowe, Rudolph, Bell, Gavaskar, Kambli, Samaraweera, Harvey, Bradman, Voges, Sutcliffe, Smith, Hobbs, Hammond, Sobers, Dravid, Tendulkar, Ponting, Sangakkara, Kallis

March 19, 2017

Two similar geometry problems based on perpendiculars to cevians

Filed under: mathematics — ckrao @ 7:18 am

In this post I wanted to show a couple of similar problems that can be proved using some ideas from projective geometry.

The first problem I found via the Romantics of Geometry Facebook group: let M be the point of tangency of the incircle of \triangle ABC with BC and let E be the foot of the perpendicular from the incentre X of \triangle ABC to AM. Then show EM bisects \angle BEC.

 

[Figure: perpendicularcevian1]

The second problem is motivated by the above and problem 2 of the 2008 USAMO: this time let AM be a symmedian of ABC and E be the foot of the perpendicular from the circumcentre X of \triangle ABC to AM. Then show that EM bisects \angle BEC.

[Figure: perpendicularcevian2]

Here is a solution to the first problem inspired by that of Vaggelis Stamatiadis. Let the line through the other two points of tangency P, Q of the incircle with ABC intersect line BC at the point N as shown below. Note that since AP and AQ are tangents to the circle, line NPQ is the polar of A with respect to the incircle.

[Figure: perpendicularcevian1a]

Since N is on the polar of A, by La Hire’s theorem, A is on the polar of N. The polar of N also passes through M (as NM is a tangent to the circle at M). We conclude that the polar of N is the line through A and M.

Next, let MN intersect PQ at R. By theorem 5(a) at this link, the points (N, R, P, Q) form a harmonic range. Since the cross ratio of collinear points does not change under central projection, considering the projection from A, (N,M,B,C) also form a harmonic range. (Alternatively, this follows from the theorems of Ceva and Menelaus using the cevians intersecting at the Gergonne point and the transversal NPQ.) Also, NE \perp EM as both NX and XE are perpendicular to the polar AM of N.

Considering a central projection from E of the line NMBC onto a line through M parallel to NE, we obtain points N', M, B', C' which also form a harmonic range. Since N' is a point at infinity, this implies M is the midpoint of B'C' and so triangles B'EM and C'EM are congruent (two pairs of equal sides with equal included angles of 90^{\circ}). Hence EM bisects \angle B'EC' = \angle BEC, as was to be shown.

[Figure: perpendicularcevian1b]

For the second problem, we use the following characterisation of a symmedian: line AM extended passes through the point of intersection of the tangents to the circumcircle at B and C. (For three proofs of this see here.)

[Figure: perpendicularcevian2a]

Define N as the intersection of XE with BC and D as the intersection of AM with the tangents at B, C. Note that line NBMC is the polar of D with respect to the circumcircle. By La Hire’s theorem, D must be on the polar of N. This polar is perpendicular to NX (the line joining N to the centre of the circle) and as ED \perp EX by construction of E, it follows that line AEMD is the polar of N. Again by theorem 5(a) in reference (2), (N, M, B, C) form a harmonic range. Following the same argument as the previous proof, this together with NE \perp EM imply EM bisects \angle BEC as required.

By similar arguments, one can prove the following, left to the interested reader. If X is the A-excentre of \triangle ABC, M the ex-circle’s point of tangency of BC, and E the foot of the perpendicular from X to line AM, then EM bisects \angle BEC.

[Figure: perpendicularcevian3]

References

(1) Alexander Bogomolny, Poles and Polars from Interactive Mathematics Miscellany and Puzzles http://www.cut-the-knot.org/Curriculum/Geometry/PolePolar.shtml, Accessed 19 March 2017

(2) Poles and Polars – Another Useful Tool! | The Problem Solver’s Paradise

(3) Yufei Zhao, Lemmas in Euclidean Geometry

December 24, 2016

Kohli’s 2016

Filed under: Uncategorized — ckrao @ 9:18 pm

Here is a list of the scores Virat Kohli made in Test, ODI, T20 and IPL cricket during 2016. Stunning numbers.

Test ODI T20I IPL
200 (283) 91 (97) 90* (55) 75 (51)
44 (90) 59 (67) 59* (33) 79 (48)
3 (8) 117 (117) 50 (36) 33 (30)
4 (17) 106 (92) 7 (12) 80 (63)
9 (10) 8 (11) 49 (51) 100* (63)
18 (40) 85* (81) 56* (47) 14 (17)
9 (28) 9 (13) 41* (28) 52 (44)
45 (65) 154* (134) 23 (27) 108* (58)
211 (366) 45 (51) 55* (37) 20 (21)
17 (28) 65 (76) 24 (24) 7 (7)
40 (95) 82* (51) 109 (55)
49* (98) 89* (47) 75* (51)
167 (267) 16 (9) 113 (50)
81 (109) 54* (45)
62 (127) 0 (2)
6* (11) 54 (35)
235 (340)
15 (29)
1215 @ 75.9 739 @ 92.37, SR 100 641 @ 106.8, SR 140.3 973 @ 81.2, SR 152.0

December 19, 2016

Some special functions and their applications

Filed under: mathematics — ckrao @ 9:55 am

Here are some notes on special functions and where they may arise. We consider functions in applied mathematics beyond those obtained from the power and exponential functions via the four field (arithmetic) operations, composition and inversion.

1. Bessel and related functions

Bessel functions of the first (J_{\alpha}(x)) and second (Y_{\alpha}(x)) kind of order \alpha satisfy:

\displaystyle x^2 \frac{d^2 y}{dx^2} + x \frac{dy}{dx} + (x^2 - \alpha^2)y = 0.

Solutions for integer \alpha arise in solving Laplace’s equation in cylindrical coordinates while solutions for half-integer \alpha arise in solving the Helmholtz equation in spherical coordinates. Hence they come about in wave propagation, heat diffusion and electrostatic potential problems. The functions oscillate roughly periodically with amplitude decaying proportional to 1/\sqrt{x}. Note that Y_{\alpha}(x) is the second linearly independent solution when \alpha is an integer (for integer n, J_{-n}(x) = (-1)^n J_n(x)). Also, for integer n, J_n has the generating function

\displaystyle  \sum_{n=-\infty}^\infty J_n(x) t^n = e^{(\frac{x}{2})(t-1/t)},

the integral representations

\displaystyle J_n(x) = \frac{1}{\pi} \int_0^\pi \cos (n \tau - x \sin(\tau)) \,d\tau = \frac{1}{2 \pi} \int_{-\pi}^\pi e^{i(n \tau - x \sin(\tau))} \,d\tau

and satisfies the orthogonality relation

\displaystyle \int_0^1 x J_\alpha(x u_{\alpha,m}) J_\alpha(x u_{\alpha,n}) \,dx = \frac{\delta_{m,n}}{2} [J_{\alpha+1}(u_{\alpha,m})]^2 = \frac{\delta_{m,n}}{2} [J_{\alpha}'(u_{\alpha,m})]^2,

where \alpha > -1, \delta_{m,n} is the Kronecker delta, and u_{\alpha, m} is the m-th zero of J_{\alpha}(x).

Modified Bessel functions of the first (I_{\alpha}(x)) and second (K_{\alpha}(x)) kind of order \alpha satisfy:

\displaystyle x^2 \frac{d^2 y}{dx^2} + x \frac{dy}{dx} - (x^2 + \alpha^2)y = 0

(replacing x with ix in the previous equation).

The four functions may be expressed as follows.

\displaystyle J_{\alpha}(x) = \sum_{m=0}^\infty \frac{(-1)^m}{m! \, \Gamma(m+\alpha+1)} {\left(\frac{x}{2}\right)}^{2m+\alpha}

\displaystyle I_\alpha(x) = \sum_{m=0}^\infty \frac{1}{m! \, \Gamma(m+\alpha+1)} {\left(\frac{x}{2}\right)}^{2m+\alpha}

\displaystyle Y_\alpha(x) = \frac{J_\alpha(x) \cos(\alpha\pi) - J_{-\alpha}(x)}{\sin(\alpha\pi)}

\displaystyle K_\alpha(x) = \frac{\pi}{2} \frac{I_{-\alpha} (x) - I_\alpha (x)}{\sin (\alpha \pi)}

(In the last formula we need to take a limit when \alpha is an integer.)

Note that K and Y are singular at zero.

The Hankel functions H_\alpha^{(1)}(x) = J_\alpha(x)+iY_\alpha(x) and H_\alpha^{(2)}(x) = J_\alpha(x)-iY_\alpha(x) are also known as Bessel functions of the third kind.

The functions J_\alpha, Y_\alpha, H_\alpha^{(1)}, and H_\alpha^{(2)} all satisfy the recurrence relations (using Z in place of any of these four functions)

\displaystyle \frac{2\alpha}{x} Z_\alpha(x) = Z_{\alpha-1}(x) + Z_{\alpha+1}(x),
\displaystyle 2\frac{dZ_\alpha}{dx} = Z_{\alpha-1}(x) - Z_{\alpha+1}(x).

Bessel functions of higher orders/derivatives can be calculated from lower ones via:

\displaystyle \left( \frac{1}{x} \frac{d}{dx} \right)^m \left[ x^\alpha Z_{\alpha} (x) \right] = x^{\alpha - m} Z_{\alpha - m} (x),
\displaystyle \left( \frac{1}{x} \frac{d}{dx} \right)^m \left[ \frac{Z_\alpha (x)}{x^\alpha} \right] = (-1)^m \frac{Z_{\alpha + m} (x)}{x^{\alpha + m}}.

In particular, note that -J_1(x) is the derivative of J_0(x).
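
These recurrences are easy to verify numerically, for instance with SciPy's Bessel functions (assuming SciPy is available); a short sketch follows.

```python
import numpy as np
from scipy.special import jv, yv   # Bessel functions of the first and second kind

x = np.linspace(0.5, 20, 200)
for Z in (jv, yv):
    for a in (0.5, 1.0, 2.5):
        # 2a/x * Z_a(x) = Z_{a-1}(x) + Z_{a+1}(x)
        assert np.allclose(2 * a / x * Z(a, x), Z(a - 1, x) + Z(a + 1, x))

# -J_1 is the derivative of J_0 (checked by a central finite difference)
h = 1e-6
assert np.allclose((jv(0, x + h) - jv(0, x - h)) / (2 * h), -jv(1, x), atol=1e-5)
print("recurrences verified numerically")
```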

The Airy functions of the first (Ai(x)) and second (Bi(x)) kind satisfy

\displaystyle \frac{d^2y}{dx^2} - xy = 0.

This arises as a solution to Schrödinger’s equation for a particle in a triangular potential well and also describes interference and refraction patterns.

2. Orthogonal polynomials

Hermite polynomials (the probabilists’ definition) can be defined by:

\displaystyle \mathit{He}_n(x)=(-1)^n e^{\frac{x^2}{2}}\frac{d^n}{dx^n}e^{-\frac{x^2}{2}}=\left (x-\frac{d}{dx} \right )^n \cdot 1,

and are orthogonal with respect to weighting function w(x) = e^{-x^2/2} on (-\infty, \infty).

They satisfy the differential equation

\displaystyle \left(e^{-\frac{x^2}{2}}u'\right)' + \lambda e^{-\frac{1}{2}x^2}u = 0

(where \lambda is forced to be an integer if we insist u be polynomially bounded at \infty)

and the recurrence relation

\displaystyle {\mathit{He}}_{n+1}(x)=x{\mathit{He}}_n(x)-{\mathit{He}}_n'(x).

The first few such polynomials are 1, x, x^2-1, x^3-3x, \ldots. The Physicists’ Hermite polynomials H_n(x) are related by H_n(x)=2^{\tfrac{n}{2}}{\mathit{He}}_n(\sqrt{2} \,x) and arise for example as the eigenstates of the quantum harmonic oscillator.
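
The recurrence He_{n+1} = x He_n - He_n' gives a simple way to generate these polynomials; the sketch below (function name is mine) does so with numpy's polynomial helpers and checks orthogonality with respect to the weight e^{-x^2/2} by crude numerical integration.

```python
import numpy as np

def hermite_he(n):
    """Coefficients (lowest degree first) of He_n generated via He_{k+1} = x He_k - He_k'."""
    he = np.array([1.0])
    for _ in range(n):
        x_he = np.concatenate(([0.0], he))                 # multiply by x
        d_he = np.polynomial.polynomial.polyder(he)        # differentiate
        he = np.polynomial.polynomial.polysub(x_he, d_he)  # He_{k+1} = x He_k - He_k'
    return he

print(hermite_he(2))   # [-1.  0.  1.]  i.e. x^2 - 1
print(hermite_he(3))   # [ 0. -3.  0.  1.]  i.e. x^3 - 3x

# Orthogonality of He_2 and He_3 with respect to w(x) = exp(-x^2/2), by a crude Riemann sum
xs = np.linspace(-10, 10, 20001)
dx = xs[1] - xs[0]
w = np.exp(-xs ** 2 / 2)
p2 = np.polynomial.polynomial.polyval(xs, hermite_he(2))
p3 = np.polynomial.polynomial.polyval(xs, hermite_he(3))
print(np.sum(p2 * p3 * w) * dx)   # ~ 0
```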

Laguerre polynomials are defined by

\displaystyle L_n(x)=\frac{e^x}{n!}\frac{d^n}{dx^n}\left(e^{-x} x^n\right) =\frac{1}{n!} \left( \frac{d}{dx} -1 \right) ^n x^n = \sum_{k=0}^n \binom{n}{k}\frac{(-1)^k}{k!} x^k,

and are orthogonal with respect to e^{-x} on (0,\infty).

They satisfy the differential equation

\displaystyle  xy'' + (1 - x)y' + ny = 0,

recurrence relation

\displaystyle L_{k + 1}(x) = \frac{(2k + 1 - x)L_k(x) - k L_{k - 1}(x)}{k + 1},

and have generating function

\displaystyle \sum_{n=0}^\infty  t^n L_n(x)=  \frac{1}{1-t} e^{-\frac{tx}{1-t}}.

The first few values are 1, 1-x, (x^2-4x+2)/2. Note also that L_{-n}(x)=e^xL_{n-1}(-x).

The functions come up as the radial part of the solution to Schrödinger’s equation for a one-electron atom.

Legendre polynomials can be defined by

\displaystyle P_n(x) = {1 \over 2^n n!} {d^n \over dx^n } \left[ (x^2 -1)^n \right]

and are orthogonal with respect to the L^2 norm on (-1,1).

They satisfy the differential equation

\displaystyle {d \over dx} \left[ (1-x^2) {d \over dx} P_n(x) \right] + n(n+1)P_n(x) = 0,

recurrence relation

\displaystyle (n+1)P_{n+1}(x) = (2n+1)xP_n(x) - nP_{n-1}(x),

and have generating function

\displaystyle \sum_{n=0}^\infty P_n(x) t^n = \frac{1}{\sqrt{1-2xt+t^2}}.

The first few values are 1, x, (3x^2-1)/2, (5x^3-3x)/2.

They arise in the expansion of the Newtonian potential 1/|x-x'| (multipole expansions) and Laplace’s equation where there is axial symmetry (spherical harmonics are expressed in terms of these).

Chebyshev polynomials of the 1st kind T_n(x) can be defined by

T_n(x) =\begin{cases} \cos(n\arccos(x)) & \ |x| \le 1 \\ \frac12 \left[ \left (x-\sqrt{x^2-1} \right )^n + \left (x+\sqrt{x^2-1} \right )^n \right] & \ |x| \ge 1 \\ \end{cases}

and are orthogonal with respect to weighting function w(x) = 1/\sqrt{1-x^2} in (-1,1).

They satisfy the differential equation

\displaystyle (1-x^2)\,y'' - x\,y' + n^2\,y = 0,

the relations

\displaystyle T_{n+1}(x) = 2xT_n(x) - T_{n-1}(x)

\displaystyle (1 - x^2)T_n'(x) = -nx T_n(x) + n T_{n-1}(x)

and have generating function

\displaystyle \sum_{n=0}^{\infty}T_n(x) t^n = \frac{1-tx}{1-2tx+t^2}.

The first few values are 1, x, 2x^2-1, 4x^3-3x, \ldots. These polynomials arise in approximation theory, namely their roots are used as nodes in polynomial interpolation. The function f(x) = \frac1{2^{n-1}}T_n(x) is the polynomial of leading coefficient 1 and degree n whose maximal absolute value on (-1,1) is minimal.

Chebyshev polynomials of the 2nd kind U_n(x) are defined by

\displaystyle  U_n(x)  = \frac{\left (x+\sqrt{x^2-1} \right )^{n+1} - \left (x-\sqrt{x^2-1} \right )^{n+1}}{2\sqrt{x^2-1}}

and are orthogonal with respect to weighting function w(x) = \sqrt{1-x^2} in (-1,1).

They satisfy the differential equation

\displaystyle  (1-x^2)\,y'' - 3x\,y' + n(n+2)\,y = 0,

the recurrence relation

\displaystyle U_{n+1}(x) = 2xU_n(x) - U_{n-1}(x)

and have generating function

\displaystyle \sum_{n=0}^{\infty}U_n(x) t^n = \frac{1}{1-2 t x+t^2}.

The first few values are 1, 2x, 4x^2-1, 8x^3-4x, \ldots. (There are also less well known Chebyshev  polynomials of the third and fourth kind.)

Bessel polynomials y_n(x) may be defined from Bessel functions via

\displaystyle y_n(x)=\sqrt{\frac{2}{\pi x}}\,e^{1/x}K_{n+\frac 1 2}(1/x)  = \sum_{k=0}^n\frac{(n+k)!}{(n-k)!k!}\,\left(\frac{x}{2}\right)^k.

They satisfy the differential equation

\displaystyle x^2\frac{d^2y_n(x)}{dx^2}+2(x\!+\!1)\frac{dy_n(x)}{dx}-n(n+1)y_n(x)=0.

The first few values are 1, x+1, 3x^2+3x+1,\ldots.

3. Integrals

The error function has the form

\displaystyle \rm{erf}(x) = \frac{2}{\sqrt\pi}\int_0^x e^{-t^2}\,\mathrm dt.

This can be interpreted as the probability a normally distributed random variable with zero mean and variance 1/2 is in the interval (-x,x).

The cdf of the normal distribution \Phi(x) is related to this via \Phi(x) = (1 + {\rm erf}(x/\sqrt{2}))/2. Hence the tail probability of the standard normal distribution Q(x) is Q(x) = (1 - {\rm erf}(x/\sqrt{2}))/2.
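
Since Python's standard library exposes erf, these relations translate directly into code; a minimal sketch (function names are mine):

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal cdf Phi(x) = (1 + erf(x / sqrt(2))) / 2."""
    return (1 + erf(x / sqrt(2))) / 2

def normal_tail(x):
    """Q(x) = Pr(Z > x) = (1 - erf(x / sqrt(2))) / 2."""
    return (1 - erf(x / sqrt(2))) / 2

print(normal_cdf(0.0))        # 0.5
print(normal_tail(1.959964))  # ~0.025, the familiar 95% two-sided quantile
```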

Fresnel integrals are defined by

\displaystyle S(x) =\int_0^x \sin(t^2)\,\mathrm{d}t=\sum_{n=0}^{\infty}(-1)^n\frac{x^{4n+3}}{(2n+1)!(4n+3)}
\displaystyle C(x) =\int_0^x \cos(t^2)\,\mathrm{d}t=\sum_{n=0}^{\infty}(-1)^n\frac{x^{4n+1}}{(2n)!(4n+1)}

They have applications in optics.

The exponential integral {\rm Ei}(x) (used in heat transfer applications) is defined by

\displaystyle {\rm Ei}(x)=-\int_{-x}^{\infty}\frac{e^{-t}}t\,dt.

It is related to the logarithmic integral

\displaystyle {\rm li} (x) =   \int_0^x \frac{dt}{\ln t}

by \mathrm{li}(x) = \mathrm{Ei}(\ln x) (for real x).

The incomplete elliptic integral of the first, second and third kinds are defined by

\displaystyle F(\varphi,k) = \int_0^\varphi \frac {d\theta}{\sqrt{1 - k^2 \sin^2 \theta}}

\displaystyle E(\varphi,k) =  \int_0^\varphi \sqrt{1-k^2 \sin^2\theta}\, d\theta

 \displaystyle \Pi(n ; \varphi \setminus \alpha) = \int_0^\varphi  \frac{1}{1-n\sin^2 \theta} \frac {d\theta}{\sqrt{1-(\sin\theta\sin \alpha)^2}}

Setting \varphi = \pi/2 gives the complete elliptic integrals.

Any integral of the form \int_{c}^{x} R \left(t, \sqrt{P(t)} \right) \, dt, where c is a constant, R is a rational function of its arguments and P(t) is a polynomial of 3rd or 4th degree with no repeated roots, may be expressed in terms of the elliptic integrals. The circumference of an ellipse of semi-major axis a, semi-minor axis b and eccentricity e = \sqrt{1-b^2/a^2} is given by 4aE(e), where E(k) is the complete integral of the second kind.
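
For instance, the circumference formula can be evaluated with SciPy's complete elliptic integral of the second kind; note that scipy.special.ellipe takes the parameter m = k^2 rather than the modulus k, a convention worth double-checking. A short sketch, assuming SciPy is available:

```python
from math import pi
from scipy.special import ellipe   # complete elliptic integral of the second kind E(m), m = k^2

def ellipse_circumference(a, b):
    """Circumference 4a E(e) of an ellipse with semi-axes a >= b and eccentricity e."""
    e_squared = 1 - (b / a) ** 2
    return 4 * a * ellipe(e_squared)   # pass m = e^2 in SciPy's convention

print(ellipse_circumference(1, 1), 2 * pi)   # a circle: both ~6.2832
print(ellipse_circumference(2, 1))           # ~9.6884
```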

(Some elliptic functions arise as inverses of elliptic integrals, hence their name.)

The (upper) incomplete Gamma function is defined by

\displaystyle \Gamma(s,x) = \int_x^{\infty} t^{s-1}\,e^{-t}\,{\rm d}t.

It satisfies the recurrence relation \Gamma(s+1,x)= s\Gamma(s,x) + x^{s} e^{-x}. Setting x = 0 gives the Gamma function \Gamma(s), which interpolates the factorial function.

The digamma function is the logarithmic derivative of the gamma function:

\displaystyle \psi(x)=\frac{d}{dx}\ln\Big(\Gamma(x)\Big)=\frac{\Gamma'(x)}{\Gamma(x)}.

Due to the relation \psi(x+1) = \psi(x) + 1/x, this function appears in the regularisation of divergent sums and integrals, e.g.

\sum_{n=0}^{\infty} \frac{1}{n+a}= - \psi (a).

The incomplete Beta function is defined by

\displaystyle B(x;\,a,b) = \int_0^x t^{a-1}\,(1-t)^{b-1}\,\mathrm{d}t.

When setting x=1 this becomes the Beta function which is related to the gamma function via

\displaystyle B(x,y)=\frac{\Gamma(x)\,\Gamma(y)}{\Gamma(x+y)}.

This can be extended to the multivariate Beta function, used in defining the Dirichlet distribution.

\displaystyle B(\alpha_1,\ldots,\alpha_K) = \frac{\Gamma(\alpha_1) \cdots \Gamma(\alpha_K)}{\Gamma(\alpha_1 + \ldots + \alpha_K)}.

The polylogarithm, appearing as integrals of the Fermi–Dirac and Bose–Einstein distributions, is defined by

\displaystyle {\rm Li}_s(z) = \sum_{k=1}^\infty \frac{z^k}{k^s} = z + \frac{z^2}{2^s} + \frac{z^3}{3^s} + \cdots

Note the special case {\rm Li}_1(z) = -\ln (1-z) and the case s=2 is known as the dilogarithm. We also have the recursive formula

\displaystyle {\rm Li}_{s+1}(z) = \int_0^z \frac {{\rm Li}_s(t)}{t}\,\mathrm{d}t.

4. Generalised Hypergeometric functions

All the above functions can be written in terms of generalised hypergeometric functions.

\displaystyle {}_pF_q(a_1,\ldots,a_p;b_1,\ldots,b_q;z) = \sum_{n=0}^\infty \frac{(a_1)_n\dots(a_p)_n}{(b_1)_n\dots(b_q)_n} \, \frac {z^n} {n!}

where (a)_n = \Gamma(a+n)/\Gamma(a) = a(a+1)(a+2)...(a+n-1) for n > 0 or (a)_0 = 1.
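
The series definition translates directly into a simple truncated evaluator; the sketch below (function name is mine) checks two classical reductions.

```python
from math import exp

def hyp_pFq(a_list, b_list, z, terms=60):
    """Truncated series for the generalised hypergeometric function pFq(a; b; z)."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        ratio = 1.0
        for a in a_list:
            ratio *= (a + n)               # (a)_{n+1} / (a)_n = a + n
        for b in b_list:
            ratio /= (b + n)
        term *= ratio * z / (n + 1)        # and z^{n+1}/(n+1)! from z^n/n!
    return total

# 2F1(1, 1; 1; z) is the geometric series 1/(1-z)
print(hyp_pFq([1, 1], [1], 0.5), 1 / (1 - 0.5))
# 1F1(1; 2; z) = (e^z - 1)/z
print(hyp_pFq([1], [2], 1.0), (exp(1.0) - 1) / 1.0)
```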

The special case p=q=1 is called a confluent hypergeometric function of the first kind, also written M(a;b;z).

This satisfies the differential equation (Kummer’s equation)

\displaystyle \left (z\frac{d}{dz}+a \right )w = \left (z\frac{d}{dz}+b \right )\frac{dw}{dz}.

The Bessel, Hankel, Airy, Laguerre, error, exponential and logarithmic integral functions can be expressed in terms of this.

The case p=2, q=1 is sometimes called Gauss’s hypergeometric function, or simply the hypergeometric function. This satisfies the differential equation

\displaystyle \left (z\frac{d}{dz}+a \right ) \left (z\frac{d}{dz}+b \right )w =\left  (z\frac{d}{dz}+c \right )\frac{dw}{dz}.

The Legendre, Hermite, Chebyshev, Beta and Gamma functions can be expressed in terms of this.

Further reading

The Wolfram Functions Site

Wikipedia: List of mathematical functions

Wikipedia: List of special functions and eponyms

Wikipedia: List of q-analogs

Wikipedia Category: Orthogonal polynomials

Weisstein, Eric W. “Laplace’s Equation.” From MathWorld–A Wolfram Web Resource. http://mathworld.wolfram.com/LaplacesEquation.html

July 29, 2016

Distribution of Melbourne’s length of day

Filed under: geography — ckrao @ 10:34 pm
According to timeanddate.com, Melbourne, Australia in 2016 has a minimum daylength of 9 hours 32 minutes and 32 seconds, and a maximum daylength of 14 hours 47 minutes and 19 seconds (the asymmetry is due to the way day length is calculated). Here is a look at the distribution of day length through the year.
Duration of daylength (hrs) Dates Frequency
< 10 19 May-24 July 67
10-10.5 25 July-10 August, 2-18 May 34
10.5-11 11-24 August, 18 April-1 May 28
11-11.5 25 August-6 September, 5-17 April 26
11.5-12 6-19 September, 24 March-4 April 25
12-12.5 20 September-1 October, 12-23 March 24
12.5-13 1-13 October, 28 February-11 March 25
13-13.5 14-26 October, 16-27 February 25
13.5-14 27 October-9 November, 2-15 February 28
14-14.5 10 November-27 November, 16 January-1 February 35
>14.5 28 November-15 January 49

What surprised me the most about this was that only 100 days of the year have daylength between 11 and 13 hours and we have a good 84 days with light longer than 14 hours.


