Chaitanya's Random Pages

December 31, 2012

Tony Greig commentary

Filed under: cricket,sport — ckrao @ 10:26 pm

It was sad to hear about the recent passing of Tony Greig. I loved the excitement he brought to cricket through his enthusiastic commentary. Here are a few examples.

The best of Tony Greig

December 29, 2012

A curious fact involving factors

Filed under: mathematics — ckrao @ 10:07 pm

Here is one of my favourite elementary number theory results.

If $a, b, c, d$ are positive integers such that $ab = cd$, then $a + b + c + d$ is composite.

I like it because it is not immediately obvious that the relationship $ab = cd$ should have any bearing on the primality of the sum of these four numbers. As an example, $12 = 2 \times 6 = 3 \times 4$ and $2 + 6 + 3 + 4 = 15 = 5 \times 3$.

To prove this, let $g = \gcd(a,c)$. Then we may write $a = a_1 g$ and $c = c_1 g$ where $a_1$ and $c_1$ have no common factors; in other words, $g$ extracts the factors common to $a$ and $c$. Substituting into $ab = cd$ gives $g a_1 b = g c_1 d$, or $a_1 b = c_1 d$. Since $a_1$ divides $c_1 d$ but has no factor in common with $c_1$, it must divide $d$. Hence we may write $d = a_1 h$ where $h$ is a positive integer, so that $b = c_1 d / a_1 = c_1 h$.

Then $\displaystyle a + b + c + d = a_1 g + c_1 h + c_1 g + a_1 h = (a_1 + c_1)(g + h)$

which is composite, since each of $a_1, c_1, g, h$ is at least $1$ and so both factors are at least $2$.

Note that we can similarly prove that $a^k + b^k + c^k + d^k$ is composite where $k$ is any positive integer.
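The proof is constructive: from any quadruple with $ab = cd$ it produces the two factors of the sum explicitly. Here is a minimal Python sketch of that construction (the function name is mine, not from the post), together with an exhaustive check over small products:

```python
from math import gcd

def sum_factorization(a, b, c, d):
    """Given positive integers with a*b == c*d, return the pair
    (a1 + c1, g + h) from the proof, whose product is a + b + c + d."""
    assert a * b == c * d
    g = gcd(a, c)
    a1, c1 = a // g, c // g   # a = a1*g, c = c1*g with gcd(a1, c1) = 1
    h = d // a1               # a1 divides d since a1 | c1*d and gcd(a1, c1) = 1
    assert b == c1 * h and d == a1 * h
    return a1 + c1, g + h

# The post's example: 12 = 2*6 = 3*4 and 2 + 6 + 3 + 4 = 15 = 5*3.
assert sum_factorization(2, 6, 3, 4) == (5, 3)

# Exhaustive check over all factorizations of small products.
for n in range(1, 60):
    pairs = [(x, n // x) for x in range(1, n + 1) if n % x == 0]
    for a, b in pairs:
        for c, d in pairs:
            p, q = sum_factorization(a, b, c, d)
            assert p * q == a + b + c + d and p >= 2 and q >= 2
```

Both returned factors are at least $2$, which is exactly why the sum is composite.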

December 23, 2012

Recap of the Australia-South Africa test match in Perth

Filed under: cricket,sport — ckrao @ 4:34 am

South Africa recently notched up an impressive 309-run victory in the third and deciding cricket test to win the series 1-0 against Australia. After making a first innings score of 225 they fought back hard to have Australia reeling at 6/45 early on the second day. Dale Steyn caused much of that damage with two wickets in his first over of the day and then a brilliant delivery with just enough movement to dismiss the in-form Clarke cheaply. They dismissed Australia for 163 and then seized the initiative in a way I have rarely seen. After batting slowly in the final innings of the previous test to save the match, they showed they can bat at the other end of the spectrum, putting away the Australian bowlers with classy batting. South Africa ended day 2 with 2/230 after just 38 overs, a lead of 292 which is amazing given their first innings score. Smith scored 84 off 100 balls while Amla was 99* off just 84 balls.

In my earlier post about the second test I mentioned the rapidity of the partnership between Clarke and Warner. Here we saw two rapid partnerships by the South Africans. Smith and Amla added 178 in 25.3 overs while de Villiers and du Plessis later added 112 in just 14.1 overs, with both partnerships featuring side by side in this list of fastest test partnerships of 100+. De Villiers raced from 50 to 150 in just 63 balls, including 3 cheeky reverse sweeps in a row to move from 89 to 101! Amla finished with 196 off 221 while de Villiers made 169 off 184 as South Africa finished with 569 at 5.08 runs per over. It’s interesting that two scores from the same series and by opposing sides fill the third and fourth spots in this list of the highest test run rates in a completed innings.

Australia were set 632 to win and were never going to get close, with an extremely even bowling performance from South Africa’s frontline bowlers dismissing them for 322 after an entertaining last wicket stand of 87. Ricky Ponting was not to get a fairy tale ending to his distinguished career, making just 4 and 8 for the match. Mitchell Starc interestingly scored the second fastest test half century by an Australian (32 balls).

Faf du Plessis set a record in this game, becoming the only player to score 70 or more in each of his first three test innings: 78, 110* and 78* (see this article in The Roar). The following is a list of others who did well (60+) in each of their first three test innings.

• Sunil Gavaskar (Ind) scored 65, 67*, 116 in his first three innings, following it up with 64*.
• Herbert Sutcliffe (Eng) scored 64, 122, 83.
• Ian Bell (Eng) scored 70, 65*, 162*.
• Andrew Strauss (Eng) scored 112, 83, and 62.
• Herbie Collins (Aus) scored 70, 104, and 64.

December 12, 2012

Matrices that are easy to invert

Filed under: mathematics — ckrao @ 12:46 pm

Here are some classes of real non-singular matrices that are easier to invert than most. In an earlier post I referred to involutory matrices, which have the special property of being equal to their own inverse, making them the easiest type of matrix to invert!

• 1 by 1 and 2 by 2 matrices are easy to invert: $\displaystyle \left[ \begin{array}{cc} a & b \\ c & d \end{array} \right]^{-1} = \frac{1}{ad - bc} \left[ \begin{array}{cc} d & -b \\ -c & a \end{array} \right]$
• Diagonal matrices (including the identity matrix) are among the easiest to invert: $\displaystyle \left[ \begin{array}{cccc} d_1 & & & \\ & d_2 & & \\ & & \ddots & \\ & & & d_n \end{array} \right]^{-1} = \left[ \begin{array}{cccc} 1/d_1 & & & \\ & 1/d_2 & & \\ & & \ddots & \\ & & & 1/d_n \end{array} \right]$
• Orthogonal matrices $P$ satisfy $P^{-1} = P^T$. Special cases are permutation matrices and rotation matrices. $\displaystyle \left[ \begin{array}{cccc} 0 & 0 & 1 & 0 \\ 1 & 0 & 0 &0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \end{array} \right]^{-1} = \left[ \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array} \right], \quad \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos \theta & -\sin \theta \\ 0 & \sin \theta & \cos \theta \end{array} \right]^{-1} = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos \theta & \sin \theta \\ 0 & -\sin \theta & \cos \theta \end{array} \right]$
• A matrix with 1s down its diagonal and non-zero off-diagonal entries confined to a single row or column is easily invertible (atomic triangular matrices are a special case): $\displaystyle \left[ \begin{array}{ccccc} 1 & a_1 & & & \\ & 1 & & & \\ & a_2 &1 & & \\ & \vdots & & \ddots & \\ & a_{n-1} & & & 1 \end{array} \right]^{-1} = \left[ \begin{array}{ccccc} 1 & -a_1 & & & \\ & 1 & & & \\ & -a_2 &1 & & \\ & \vdots & & \ddots & \\ & -a_{n-1} & & & 1 \end{array} \right]$
Other special cases of this are elementary matrices corresponding to the addition of a multiple of one row to another: $\displaystyle \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 &0 \\ m & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right]^{-1} = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ -m & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right]$
• Bidiagonal matrices with 1s down the main diagonal are inverted as follows: $\displaystyle \left[ \begin{array}{cccc} 1 & -a_3 & 0 & 0 \\ 0 & 1 & -a_2 & 0 \\ 0 & 0 & 1 & -a_1 \\ 0 & 0 & 0 & 1 \end{array} \right]^{-1} =\left[ \begin{array}{cccc} 1 & a_3 & a_3 a_2 & a_3 a_2 a_1 \\ 0 & 1 & a_2 & a_2 a_1 \\ 0 & 0 & 1 & a_1 \\ 0 & 0 & 0 & 1 \end{array} \right]$
A special case of this is that the sum matrix and difference matrix are inverses: $\displaystyle \left[ \begin{array}{cccc} 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1 \end{array} \right]^{-1} =\left[ \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{array} \right]$
Another special case is the following alternating matrix: $\displaystyle \left[ \begin{array}{cccc} 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{array} \right]^{-1} =\left[ \begin{array}{cccc} 1 & -1 & 1 & -1 \\ 0 & 1 & -1 & 1 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1 \end{array} \right]$
If one thinks about matrix inversion in terms of Gauss-Jordan elimination, keeping track of the order in which the row/column operations are done allows us to carry out matrix inversions such as the following: $\displaystyle \left[ \begin{array}{ccccc} 1 & 0 & 0 & 0 & 4 \\ 0 & 1 & 1 & 0 & 3\\ 0 & 0 & 1 & 2 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 1 \end{array} \right]^{-1} =\left[ \begin{array}{ccccc} 1 & 0 & 0 & 0 & -4 \\ 0 & 1 & -1 & 2 & -3\\ 0 & 0 & 1 & -2 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 1\end{array} \right], \quad \left[ \begin{array}{ccc} 1 & 0 & -2 \\ 0 & 1 & 0 \\ 0 & -3 & 1 \end{array} \right]^{-1} =\left[ \begin{array}{ccc} 1 & 6 & 2 \\ 0 & 1 & 0 \\ 0 & 3 & 1 \end{array} \right]$
• Integer-valued matrices that are equivalent (up to row or column operations which don’t change the determinant) to a triangular matrix with 1 or -1 as diagonal entries have integer-valued inverses. See [1] for more details.
• Matrices that are a sum of the identity matrix and a matrix with all entries equal to a constant are inverted as follows.
If $\displaystyle u = \left( \begin{array}{cccc} 1 & 1& \ldots & 1 \end{array} \right)$, then $\displaystyle \left( I + k u^T u \right)^{-1} = I - \frac{k}{1 + nk} u^T u$. This is a special case of the formula $\displaystyle (I + AB)^{-1} = I - A(I + BA)^{-1} B$. For column vectors $x$ and $y$ this becomes $\displaystyle (I + xy^T)^{-1} = I - \frac{x y^T}{1 + x^T y}$.
• The inverse of the second difference matrix is $\displaystyle \begin{aligned} \left[ \begin{array}{ccccc} 1 & -1 & 0 & \cdots & \\ -1 & 2 & -1 & \ddots & \\ 0 & -1 & 2 & -1 & \\ & \ddots & & \ddots & \\ & & 0 & -1 & 2 \end{array} \right]^{-1} &= \left[ \begin{array}{ccccc} n & n-1 & n-2 & \cdots & 1 \\ n-1 & n-1 & n-2 & \cdots & 1 \\ n-2 & n-2 & n-2 & \cdots & 1\\ \vdots & \vdots & & \ddots & \vdots \\ 1 & 1 & \cdots & & 1 \end{array} \right]\\ &= \left[ \begin{array}{ccccc} 1 & 1 & 1 & \cdots & 1 \\ 0 & 1 & 1 & \cdots & 1 \\ 0 & 0 & 1 & \cdots & 1\\ \vdots & \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & 1 \end{array} \right] \left[ \begin{array}{ccccc} 1 & 0 & 0 & \cdots & 0 \\ 1 & 1 & 0 & \cdots & 0 \\ 1 & 1 & 1 & & 0\\ \vdots & \vdots & & \ddots & \\ 1 & 1 & \cdots & 1 & 1 \end{array} \right].\end{aligned}$
• Using identities such as $(AB)^{-1} = B^{-1} A^{-1}$, $(A^T)^{-1} = (A^{-1})^T$, $(aA)^{-1} = a^{-1} A^{-1}$ and $(I + A)^{-1} = A^{-1} (A^{-1} + I)^{-1}$, where $A$ and $B$ are easily invertible, leads to simplifications. For example, if $B$ is diagonal, $AB$ multiplies the columns of $A$ by the entries of $B$ and so $(AB)^{-1}$ will multiply the rows of $A^{-1}$ by the entries of $B^{-1}$.
• The identity $\displaystyle (A \otimes B)^{-1} = A^{-1} \otimes B^{-1}$ (where $\otimes$ is the Kronecker product) leads to identities such as $\displaystyle \left[ \begin{array}{cc} aI & bI \\ cI & dI\end{array} \right]^{-1} = \frac{1}{ad - bc} \left[ \begin{array}{cc} dI & -bI \\ -cI & aI\end{array} \right].$
• The following block matrix results may also be useful [2]. $\displaystyle \left[ \begin{array}{cc} I & B \\ 0 & I\end{array} \right]^{-1} = \left[ \begin{array}{cc} I & -B \\ 0 & I\end{array} \right],\\ \left[ \begin{array}{cc} A & 0 \\ 0 & D\end{array} \right]^{-1} = \left[ \begin{array}{cc} A^{-1} & 0 \\ 0 & D^{-1}\end{array} \right], \\ \left[ \begin{array}{cc} A & B \\ 0 & D\end{array} \right]^{-1} = \left[ \begin{array}{cc} A^{-1} & -A^{-1}BD^{-1} \\ 0 & D^{-1}\end{array} \right],\\ \left[ \begin{array}{ccc} I & A & B \\ 0 & I & C\\ 0 & 0 & I \end{array} \right]^{-1} = \left[ \begin{array}{ccc} I & -A & AC-B \\ 0 & I & -C\\ 0 & 0 & I \end{array} \right]$
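Several of the closed forms above are easy to spot-check numerically. The following short sketch (the specific matrices and values are my own illustrative choices) verifies the permutation, unit bidiagonal, rank-one update and Kronecker cases using NumPy:

```python
import numpy as np

def check(A, A_inv):
    """Assert that A_inv really is the inverse of A."""
    assert np.allclose(A @ A_inv, np.eye(A.shape[0]))

# Orthogonal (permutation) matrix: the inverse is the transpose.
P = np.array([[0., 0, 1, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 1, 0, 0]])
check(P, P.T)

# Unit bidiagonal matrix: the inverse entries are products of the a_i.
a1, a2, a3 = 2.0, 3.0, 5.0
B = np.array([[1, -a3, 0, 0],
              [0, 1, -a2, 0],
              [0, 0, 1, -a1],
              [0, 0, 0, 1.]])
B_inv = np.array([[1, a3, a3 * a2, a3 * a2 * a1],
                  [0, 1, a2, a2 * a1],
                  [0, 0, 1, a1],
                  [0, 0, 0, 1.]])
check(B, B_inv)

# Rank-one update: (I + k*J)^{-1} = I - k/(1 + n*k) * J for the all-ones J.
n, k = 4, 0.5
J = np.ones((n, n))
check(np.eye(n) + k * J, np.eye(n) - (k / (1 + n * k)) * J)

# Kronecker product: inv([[aI, bI], [cI, dI]]) via the 2x2 formula.
a, b, c, d = 1.0, 2.0, 3.0, 4.0
M = np.kron(np.array([[a, b], [c, d]]), np.eye(3))
M_inv = np.kron(np.array([[d, -b], [-c, a]]) / (a * d - b * c), np.eye(3))
check(M, M_inv)
```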

References

[1] R. Hanson, Integer Matrices Whose Inverses Contain Only Integers, The Two-Year College Mathematics Journal, Vol. 13, No. 1 (Jan. 1982), pp. 18–21.

[2] D. Bernstein, Matrix Mathematics: Theory, Facts, and Formulas, Princeton University Press, 2011.
