# Chaitanya's Random Pages

## April 22, 2011

### Manipulating Fractions

Filed under: mathematics — ckrao @ 7:40 am

This post is partly inspired by a fractions quiz I recently tried on Sporcle. It made me think of the ways I know of manipulating fractions (whether they be rational numbers or algebraic expressions), so I decided to collect them here.

1. General form

$\displaystyle \frac{a}{b} \pm \frac{c}{d} = \frac{ad \pm bc}{bd}$

In particular $\displaystyle \frac{1}{a} \pm \frac{1}{b} = \frac{b \pm a}{ab}$, $\displaystyle \frac{1}{a} + \frac{1}{a} = \frac{2}{a} = \frac{1}{a/2}$ if a is even.

2. Find a small common multiple of b and d: let it be bd if you can’t think of anything smaller.

e.g. $\displaystyle \frac{3}{4} + \frac{1}{6} = \frac{3\cdot 3 + 1\cdot 2}{12} = \frac{11}{12}$

3. Equivalently, take out any common factor you see in the denominator.

$\displaystyle \frac{3}{4} + \frac{1}{6} = \frac{1}{2}\left(\frac{3}{2} + \frac{1}{3}\right) = \frac{1}{2}\cdot\frac{11}{6} = \frac{11}{12}$

4. Of course you can also take out any common factor in the numerator:

e.g. $\displaystyle \frac{4}{8} + \frac{4}{7} = 4\left(\frac{1}{8} + \frac{1}{7}\right) = 4 \times \frac{15}{56} = \frac{15}{14}$

(Moral: the distributive law is your friend!)

5. It may help to simplify the fraction first:

e.g. $\displaystyle \frac{1}{4} + \frac{5}{10} =\frac{1}{4} + \frac{1}{2} = \frac{3}{4}$

If you are good with decimals or percentages, in some cases it’s easier to convert to that domain:

e.g. $\displaystyle \frac{2}{5} + \frac{1}{4} = 40\% + 25\% = 65\% = \frac{13}{20}$

Mixed fractions: separate the integer and fractional parts:

e.g. $\displaystyle 1 \frac{1}{2} - 3 \frac{3}{4} = \left(1 - 3\right) + \left(\frac{1}{2} - \frac{3}{4}\right) = -2 - \frac{1}{4} = -2\frac{1}{4} \left( = -\frac{9}{4}\right)$
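For those who like to check this sort of arithmetic by machine, Python's `fractions` module does exact rational arithmetic; here is a quick sketch verifying the examples above (the snippet itself is mine, not part of the techniques):

```python
from fractions import Fraction

# Rule 1: a/b + c/d = (ad + bc)/(bd), checked exactly
a, b, c, d = 3, 4, 1, 6
assert Fraction(a, b) + Fraction(c, d) == Fraction(a*d + b*c, b*d)

# The worked example 3/4 + 1/6 = 11/12
assert Fraction(3, 4) + Fraction(1, 6) == Fraction(11, 12)

# Mixed fractions: 1 1/2 - 3 3/4 = -9/4
assert Fraction(3, 2) - Fraction(15, 4) == Fraction(-9, 4)
```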

Multiplication and division

6. Get rid of mixed fractions first! e.g. $\displaystyle 2 \frac{2}{3} = \frac{8}{3}$

7. General form

$\displaystyle \frac{a}{b} \times \frac{c}{d} = \frac{ac}{bd}$

$\displaystyle \frac{a}{b} \div \frac{c}{d} = \frac{a}{b} \times \frac{d}{c} = \frac{ad}{bc}$

8. Find common factors to allow for cancellation in the numerator and denominator

$\displaystyle \frac{ka}{kb} = \frac{a}{b}$

9. You can always multiply or divide both numerator and denominator by the same quantity.

e.g. $\displaystyle \frac{a/b}{c/b} = \frac{a}{c}$

Application 1: Rationalising the denominator.

e.g. $\displaystyle 1/\sqrt{2} = \sqrt{2}/2$

e.g. $\displaystyle \frac{1}{2-\sqrt{3}} = \frac{2 + \sqrt{3}}{\left(2 + \sqrt{3}\right)\left(2 - \sqrt{3}\right)} = \frac{2 + \sqrt{3}}{4-3} = 2 + \sqrt{3}$
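Both rationalisations are easy to sanity-check numerically (a quick sketch using floating point, so we compare within a small tolerance):

```python
import math

# 1/sqrt(2) = sqrt(2)/2
assert abs(1 / math.sqrt(2) - math.sqrt(2) / 2) < 1e-12

# 1/(2 - sqrt(3)) = 2 + sqrt(3) after rationalising the denominator
assert abs(1 / (2 - math.sqrt(3)) - (2 + math.sqrt(3))) < 1e-12
```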

Application 2: Manipulating factorials to work with combinatorial expressions.

e.g. $\displaystyle 10 \times 9 \times 8 \times 7 = \frac{10!}{6!} = 4! \binom{10}{4}$
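The factorial identity can be checked directly with Python's `math` module (a quick sketch):

```python
import math

# 10 * 9 * 8 * 7 written as the falling factorial 10!/6!
product = 10 * 9 * 8 * 7
assert product == math.factorial(10) // math.factorial(6)

# ... and as 4! * C(10, 4)
assert product == math.factorial(4) * math.comb(10, 4)
```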

Application 3: Manipulation of algebraic expressions

e.g. $\displaystyle \frac{\frac{1}{x} - \frac{1}{y}}{\frac{1}{x} + \frac{1}{y}} = \frac{y-x}{y+x}$

(Here we multiplied the numerator and denominator by xy to perform the above in one step.)

Equations

10. If you have an equation with x alone in the denominator of one side, x can be swapped with the other side in one step.

e.g. $\displaystyle \frac{3}{x} = 4 \Rightarrow\frac{3}{4} = x$

(Note how the x and 4 are simply swapped by multiplying both sides by x/4.)

Ratios

Something I did not learn in regular school classes is that if a/b = c/d, then both expressions are also equal to (a+c)/(b + d). This manipulation is known as addendo, and there are several other terms when working with ratios. I found the following terminology mostly in [1].

• If I write the ratio a:b or a/b or $\frac{a}{b}$, the a is known as the antecedent, and b (non-zero) the consequent.
• The slash is called a solidus and the horizontal fraction bar the vinculum.
• The duplicate ratio of a:b is $a^2:b^2$ and similarly the triplicate ratio is $a^3:b^3$.
• The sub-duplicate and sub-triplicate ratios are given by $a^{1/2}:b^{1/2}$ and $a^{1/3}:b^{1/3}$ respectively.
• A ratio is commensurate if it is rational.
• A continued ratio joins 3 or more quantities, e.g. 1:2:3.
• An equality of two ratios is called a proportion: $a:b = c:d \Rightarrow ad = bc$. (This is called the cross product rule.) The a and d are called extremes and the b and c are called means (hence product of extremes equals product of means).
• If a:b = b:c, b is called the mean proportional between a and c, and $b = \pm\sqrt{ac}$. In this expression, a is called the first proportional and c the third proportional.

Invertendo: If a:b = c:d then b:a = d:c.

Alternendo: If a:b = c:d then a:c = b:d.

Componendo: If a:b = c:d then (a+b):b = (c+d):d.

Dividendo: If a:b = c:d then (a-b):b = (c-d):d.

Componendo and dividendo: If a:b = c:d then (a+b):(a-b) = (c+d):(c-d).

Addendo: If a:b = c:d = e:f = … then each of these is equal to (a+c+e + …):(b+d+f+ …).

Subtrahendo: If a:b = c:d = e:f = … then each of these is equal to (a-c-e-…):(b-d-f-…).

Crescendo: An increasing sequence of ratios.

Innuendo: A subtle hint of a ratio.

Nintendo: A ratio you like playing with.

The last three were jokes by the way. 🙂

I wonder how many of the above terms I will use in future!

11. Back to addendo: more generally the ratios can be weighted, so if $a_1:b_1 = a_2:b_2 = \ldots = a_n:b_n$ then each of these is equal to

$\displaystyle \frac{\sum_{i=1}^n \lambda_i a_i}{\sum_{i=1}^n \lambda_i b_i}$.
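Weighted addendo is easy to sanity-check with exact arithmetic (a quick sketch; the sample ratios and weights are mine):

```python
from fractions import Fraction

# Three equal ratios a_i : b_i, all equal to 2/3
pairs = [(2, 3), (4, 6), (10, 15)]
weights = [5, 1, 7]  # arbitrary weights lambda_i

num = sum(w * a for w, (a, b) in zip(weights, pairs))
den = sum(w * b for w, (a, b) in zip(weights, pairs))

# The weighted combination equals the common ratio
assert Fraction(num, den) == Fraction(2, 3)
```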

A nice application of addendo/subtrahendo is in the following proof of Ceva’s theorem: let ABC be a triangle with P a point inside. Extending AP, BP and CP to meet the triangle’s sides at D, E and F as in the figure, we have the relationship

$\displaystyle \frac{AF}{FB}\cdot\frac{BD}{DC}\cdot\frac{CE}{EA} = 1.$

Proof: denoting the area of triangle XYZ as |XYZ|, we write the ratio of sides in terms of a ratio of areas of two sets of triangles, then apply subtrahendo:

$\displaystyle \frac{AF}{FB} = \frac{|ACF|}{|BCF|} = \frac{|APF|}{|BPF|} = \frac{|ACF|-|APF|}{|BCF|-|BPF|} = \frac{|ACP|}{|BCP|}.$

Similarly, $\displaystyle \frac{BD}{DC} = \frac{|ABP|}{|ACP|}$ and $\displaystyle \frac{CE}{EA} = \frac{|BCP|}{|ABP|}$. Multiplying these three ratios together gives the desired result.

A second application is in solving this question I found recently on math.stackexchange.com:

If $\displaystyle M(x_2,y_2)$ is the foot of the perpendicular drawn from $\displaystyle P(x_1,y_1)$ to the line $ax + by + c = 0$ (a and b non-zero), then we have

$\displaystyle \frac{x_2 - x_1}{a} = \frac{y_2 - y_1}{b} = \frac{-\left(ax_1 + by_1 + c\right)}{a^2 + b^2}.$

The first equality follows from MP being perpendicular to the line: the line has slope $-a/b$, so MP has slope $b/a$, i.e. $\frac{y_2-y_1}{x_2-x_1} = \frac{b}{a}$. To obtain the second equality we use addendo:

$\displaystyle \frac{x_2 - x_1}{a} = \frac{a(x_2 - x_1)}{a^2} = \frac{b(y_2 - y_1)}{b^2} = \frac{a(x_2 - x_1) + b(y_2 - y_1)}{a^2+b^2} = \frac{-\left(ax_1 + by_1 + c\right)}{a^2 + b^2},$

where the last equality follows from the fact that $\displaystyle M(x_2,y_2)$ is on the line and hence satisfies $ax_2 + by_2 + c = 0$. Nice!
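The boxed relation gives the foot of the perpendicular in closed form; here is a short sketch of it in Python (the function name is mine):

```python
def foot_of_perpendicular(a, b, c, x1, y1):
    """Foot of the perpendicular from (x1, y1) to the line ax + by + c = 0."""
    # The common value of the three ratios in the identity above
    t = -(a * x1 + b * y1 + c) / (a * a + b * b)
    return x1 + a * t, y1 + b * t

# Example: the perpendicular from (0, 0) to x + y - 2 = 0 lands at (1, 1)
x2, y2 = foot_of_perpendicular(1, 1, -2, 0, 0)
assert abs(x2 - 1) < 1e-12 and abs(y2 - 1) < 1e-12
# The foot indeed lies on the line
assert abs(1 * x2 + 1 * y2 - 2) < 1e-12
```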

Linear Fractional Transformations

12. If you have an equation like $\displaystyle y = \frac{1-x}{1+x}$ and you want to write x in terms of y, you can perform a series of manipulations to get there in maybe four or five steps, or use the following fact:

The inverse of $\displaystyle y = \frac{ax + b}{cx +d}$ is $\displaystyle x = \frac{dy - b}{-cy + a}$.

This is easy to remember if you know that the inverse of the 2 by 2 matrix $\displaystyle \left[\begin{array}{cc}a&b\\c&d\end{array}\right]$ is a multiple of $\displaystyle \left[\begin{array}{cc}d&-b\\-c&a\end{array}\right]$ (swap the diagonals, change the sign of the off-diagonals). Functions of the above form are known as linear fractional transformations, and are very interesting objects to work with. There is in fact a strong connection between the function $\frac{ax + b}{cx +d}$ and the matrix $\displaystyle \left[\begin{array}{cc}a&b\\c&d\end{array}\right]$. It can be verified by direct computation that the composition of two linear fractional transformations corresponds to the product of the corresponding matrices.

Another way to see why this correspondence works is as follows (refer to [2, pp27-8] for more information). The real projective line $\displaystyle \mathbb{RP}^1$ can be viewed as the set of pairs of real numbers $\left(\begin{array}{c}x_1\\x_2\end{array}\right)$ (not both zero), where $\left(\begin{array}{c}x_1\\x_2\end{array}\right)$ and $\left(\begin{array}{c}kx_1\\kx_2\end{array}\right)$ are equivalent for nonzero k. An element of $\displaystyle \mathbb{RP}^1$ can be viewed as a 1-dimensional subspace in $\mathbb{R}^2$ with the origin removed. There is a bijective correspondence between $\displaystyle \mathbb{RP}^1$ and the real line plus infinity, given by

$\displaystyle \left(\begin{array}{c}x_1\\x_2\end{array}\right) \leftrightarrow \frac{x_1}{x_2}$ if $x_2 \neq 0, \quad \left(\begin{array}{c}x_1\\ 0 \end{array} \right) \leftrightarrow \infty$.

Any invertible linear transformation of the real plane $\mathbb{R}^2$ maps any line to a line. In particular it induces a (bijective) map from elements of $\displaystyle \mathbb{RP}^1$ to elements of $\displaystyle \mathbb{RP}^1$ (which we recall are viewed as 1-dimensional subspaces). Hence any invertible linear transformation of the real plane corresponds to a bijection from the real line plus infinity to itself.

If M is such a transformation, it can be represented by left multiplication by a 2 by 2 matrix $\displaystyle \left[\begin{array}{cc}a&b\\c&d\end{array}\right]$. Let $\phi$ be the bijection of the extended real line induced by M. If $x \in \mathbb{R}$, then M sends  $\displaystyle \left[\begin{array}{c}x\\1 \end{array}\right]$ to $\displaystyle \left[\begin{array}{c}ax+b\\cx+d \end{array}\right]$, which implies $\phi(x) = \frac{ax+b}{cx+d}$.

Hence the linear fractional transformations have a group structure, and composition of transformations corresponds to matrix multiplication. This is why their inverses can be determined so easily.

In particular, if a = -d (i.e. coefficient of x in numerator and constant coefficient in denominator sum to zero), then

$\displaystyle f(x) = \frac{ax+b}{cx-a}$ is its own inverse.

For example, $\displaystyle y = \frac{1-x}{1+x} \Rightarrow x = \frac{1-y}{1+y}$.
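The matrix rule can be sketched in a few lines of Python (the helper name `lft` and the test points are mine):

```python
def lft(a, b, c, d):
    """Return the linear fractional transformation x -> (ax + b)/(cx + d)."""
    return lambda x: (a * x + b) / (c * x + d)

# y = (1 - x)/(1 + x) corresponds to the matrix [[-1, 1], [1, 1]]
f = lft(-1, 1, 1, 1)
# Swap the diagonal, negate the off-diagonal: inverse is x = (dy - b)/(-cy + a)
f_inv = lft(1, -1, -1, -1)

for x in [0.0, 0.5, 2.0, -0.25]:
    assert abs(f_inv(f(x)) - x) < 1e-12

# Since a = -d here, f is in fact its own inverse
for x in [0.0, 0.5, 2.0]:
    assert abs(f(f(x)) - x) < 1e-12
```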

Partial Fractions

This is about techniques for reducing a ratio of polynomials to a sum of simpler ratios. It comes up most often in integration and complex analysis. If the degree of the numerator is at least that of the denominator we can always use polynomial division to leave a remaining numerator of degree less than that of the denominator. We assume this to be the case from now on.

The theory is large, so here I will mention only the most common occurrences:

(A) Linear factors in the denominator: $\displaystyle \frac{f(x)}{\prod_{i=1}^n(x-a_i)^{\alpha_i}} = \sum_{i=1}^n \sum_{j = 1}^{\alpha_i} \frac{A_{ij}}{(x-a_i)^j}$.

The question: how do we find $\displaystyle A_{ij}$?

(B) The denominator has quadratic factors which cannot be factored over the reals: $\displaystyle \frac{f(x)}{\prod_{i=1}^m(x-a_i)^{\alpha_i}\prod_{i=1}^n(x^2-b_ix +c_i)^{\beta_i}} = \sum_{i=1}^m \sum_{j = 1}^{\alpha_i} \frac{A_{ij}}{(x-a_i)^j} + \sum_{i=1}^n \sum_{j = 1}^{\beta_i} \frac{B_{ij}x + C_{ij}}{(x^2-b_ix +c_i)^j}$.

13. For (A), the easiest case is when all the $\alpha_i$ values are 1. We simply multiply both sides by $(x-a_i)$ and then set $x=a_i$ to isolate $A_{i1}$.

e.g. $\begin{array}{lcl} \frac{1}{x(x-1)(x-2)} &=& \frac{1/((-1)(-2))}{x} + \frac{1/(1\cdot(-1))}{x-1} + \frac{1/(2\cdot 1)}{x-2}\\ &=& \frac{1}{2x} - \frac{1}{x-1} + \frac{1}{2(x-2)}\end{array}$

(This can be done straight away, more or less.)

14. If one of the factors in the denominator has $\alpha_i > 1$, I work out the other numerators first, then bring them to the other side to find the final coefficient.

For example, to split $\frac{1}{x(x-1)^2}$ we can initially write

$\displaystyle \frac{1}{x(x-1)^2} = \frac{1/(0-1)^2}{x} + \frac{1/1}{(x-1)^2} + \frac{A}{x-1}.$

This becomes

$\displaystyle \frac{A}{x-1} = \frac{1 - (x-1)^2 - x}{x(x-1)^2} = \frac{-x(x-1)}{x(x-1)^2} = \frac{-1}{x-1},$

from which A = -1 and we conclude

$\displaystyle \frac{1}{x(x-1)^2} = \frac{1}{x} - \frac{1}{x-1}+ \frac{1}{(x-1)^2}$.
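A decomposition like this can be verified exactly at a few sample points using `Fraction` (a quick sketch, with sample points of my choosing):

```python
from fractions import Fraction

def lhs(x):
    x = Fraction(x)
    return 1 / (x * (x - 1) ** 2)

def rhs(x):
    x = Fraction(x)
    return 1 / x - 1 / (x - 1) + 1 / (x - 1) ** 2

# Agreement at more points than the degree of the denominator
# means the two rational functions are identical
for x in [2, 3, 5, Fraction(1, 2), -7]:
    assert lhs(x) == rhs(x)
```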

15. In the more general case of (A), we multiply both sides by $(x-a_i)^j$, move terms with powers of $(x-a_i)$ in the denominator to the left side, and then take the limit of both sides as $x \rightarrow a_i$. This limit will require $\alpha_i-j$ derivatives (applications of l’Hôpital’s rule) as both numerator and denominator of the left side have the root $a_i$ with multiplicity $\alpha_i-j$. The result is the following formula.

$\displaystyle A_{ij} = \frac{1}{(\alpha_i-j)!} \lim_{x \rightarrow a_i} \frac{d^{\alpha_i-j}}{dx^{\alpha_i-j}} \left[\frac{f(x)(x-a_i)^{\alpha_i}}{\prod_{k=1}^n(x-a_k)^{\alpha_k}}\right]$.

16. Finally, to deal with case (B), one could either use complex numbers as roots of the quadratics and proceed as in the linear case, or multiply through by the common denominator, and equate the coefficients of like terms. This forms a system of linear equations which can be solved (this general method is probably how computer packages tackle the problem).

17. Once again, if there is only one “nuisance factor”, it is easiest to take other terms to the left.

e.g. to split $\frac{1}{x^3 + 1}$, we first write

$\displaystyle \frac{1}{x^3 + 1} = \frac{1/((-1)^2 - (-1) + 1)}{x+1} + \frac{Ax + B}{x^2 - x + 1}.$

Then $\displaystyle \frac{Ax + B}{x^2 - x + 1} = \frac{3 - (x^2 - x + 1)}{3(x+1)(x^2 - x+ 1)} = \frac{(x+1)(2-x)}{3(x+1)(x^2 - x+ 1)} = \frac{2-x}{3(x^2 - x+ 1)}$. Hence A = -1/3, B = 2/3, and

$\displaystyle \frac{1}{x^3 + 1} = \frac{1}{3(x+1)} + \frac{2-x}{3(x^2 - x+ 1)}$.

#### References

[1] N.V. Ravi, Ratio and Proportion, http://www.icai.org/resource_file/16808Ratio-Proportion.pdf

[2] D. Sarason, Complex function theory, AMS (2nd edition), 2007.

## April 19, 2011

### Some counterintuitive distances on the globe

Filed under: geography — ckrao @ 11:09 am

Further to my post on long flights, here are some more counter-intuitive facts about distances on the globe, that I hope to add to over time.

• Africa is closer to Canada than to the US.
• Oslo-Seattle (7352km) is less than Paris-Miami (7385km)!
• Moscow-Beijing (5807km) is less than Melbourne-Singapore (6057km).
• Indonesia is big – some 5300km from the southern Papua New Guinea border to the westernmost point of Sumatra. This is more than the distance between London and Afghanistan!
• Tokyo is closer to Port Moresby, PNG (5058km) than to Kuala Lumpur (5233km)!

More here:

http://mapfrappe.blogspot.com/

## April 17, 2011

### The maximum of two exponential random variables

Filed under: mathematics — ckrao @ 12:21 am

Suppose X and Y are independent exponential random variables with mean 1. It is well known that the minimum of X and Y is also exponential with mean 1/2. A lesser known fact is that the maximum of X and Y has the same distribution as $X + Y/2$!

To prove this curiosity, we simply show their distributions are equal. Since X and Y have pdf and cdf given by $e^{-x}$ and $1-e^{-x}$ respectively for $x \geq 0$, we have

$\begin{array}{lcl} {\rm Pr}(X + Y/2 \leq a) &=& \int_0^a e^{-x} {\rm Pr}\left(Y \leq 2(a-x)\right)\ dx\\&=&\int_0^a e^{-x} \left(1-e^{-2(a-x)}\right)\ dx\\&=& \int_0^a e^{-x}\ dx - e^{-2a}\int_0^a e^x\ dx\\& = & 1 - e^{-a} - e^{-2a}(e^a-1)\\ &=& 1 - 2e^{-a} + e^{-2a}\\&=& \left(1-e^{-a}\right)^2\\ &=& {\rm Pr}(X \leq a, Y \leq a)\\&=& {\rm Pr}(\max\{X , Y\} \leq a),\end{array}$

as required.
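A quick simulation is consistent with this identity (a sketch using Python's `random` module; the seed and sample size are my choices):

```python
import random

random.seed(1)
n = 200_000

# Sample max{X, Y} and X + Y/2 for independent exponentials with mean 1
max_samples = sorted(max(random.expovariate(1), random.expovariate(1))
                     for _ in range(n))
sum_samples = sorted(random.expovariate(1) + random.expovariate(1) / 2
                     for _ in range(n))

# Both empirical medians should be near the true median of max{X, Y},
# which is -log(1 - 1/sqrt(2)) ~ 1.2279
m1, m2 = max_samples[n // 2], sum_samples[n // 2]
assert abs(m1 - 1.2279) < 0.03
assert abs(m2 - 1.2279) < 0.03
assert abs(m1 - m2) < 0.02
```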

More generally, if X and Y have respective means $1/\lambda_1$ and $1/\lambda_2$, then max{X,Y} has cdf given by $(1-e^{-\lambda_1 x})(1-e^{-\lambda_2 x})$, the product of the respective cdfs of X and Y. However it is only in the case $\lambda_1 = \lambda_2$ that this may be expressed as the cdf of a linear combination of X and Y.

Reference: G. Grimmett and D. Stirzaker, Probability and Random Processes, Oxford University Press, 2001, p142.

## April 9, 2011

### Some memories from the 2011 Cricket World Cup

Filed under: cricket,sport — ckrao @ 7:57 am

Now that the ICC World Cup has come to an end, here are some of my favourite moments, performances or matches.

• Sehwag starting his first five innings with a first-ball four each time! After his 175 in the first game, I thought someone would score a 200 in the tournament but it wasn’t to be.
• It was wonderful to see how Bangladesh really embraced the tournament as first-time hosts (aside from one incident following their big loss to the West Indies).
• Afridi starting the tournament with 5/16 (vs Kenya), 4/34 (vs Sri Lanka) and 5/23 (vs Canada). He later took 4/30 against the West Indies. Previously no bowler had taken 4 wickets more than twice in a single World Cup.
• The batting of two of my favourites in Tendulkar and Sangakkara – a joy to watch.
• South Africa’s comeback against India, reducing them from 1/267 to 296 all out in 55 balls! Steyn took 5 wickets in 16 balls. It was to be India’s only loss in the tournament.
• The amazing innings of Ross Taylor vs Pakistan (131* off 124 after being 69 off 108, 92 by NZ in the last four overs!) and Kevin O’Brien vs England (113 off 63 in a successful chase of 327 after being 5/111 in the 25th over)
• All of England’s matches were amazing, losing to Bangladesh and Ireland, but beating South Africa in a low scorer, tying with India in a high scorer (338 each, including 158 by captain Strauss) and then getting out of jail against the West Indies in a must-win situation.
• Malinga’s 6/38 against Kenya, all wickets brilliant yorkers. It included a hat trick too. See video below.
• Yuvraj Singh’s performances in the tournament with bat and ball with four man of the match awards and man of the tournament.
• Ryan ten Doeschate’s two centuries, especially his all-round performance against England. His other century helped Netherlands reach 306 against Ireland in an excellent match between two associate nations. After 33 matches he averages 67 with the bat and 24 with the ball in one day internationals!
• Dilshan and Tharanga’s opening partnerships against Zimbabwe and England of 282 and 231* respectively (the latter in a quarter-final). These were the two highest partnerships in the tournament.
• Zaheer Khan was Mr Consistent, with wicket hauls of 2, 3, 3, 3, 1, 3, 2, 2, 2 in his 9 games.
• Ponting’s 104 in the quarter final and Jayawardene’s 103 in the final (both against India) in losing causes, hugely significant innings for different reasons.
• Jacob Oram seemed to be involved in everything (4 wickets, 2 catches including a great one to dismiss Kallis) in New Zealand’s win against South Africa in the quarter-final. South Africa are 0-5 in World Cup knockout games while New Zealand are 0-6 in World Cup semi-finals.
• The buildup to the India vs Pakistan semi-final. The match itself did not disappoint. India are now 5-0 against Pakistan in World Cup matches despite being 47-69 overall in one-day internationals.
• The innings of Gambhir and Dhoni in the final, under intense pressure. The match itself was played in fine spirits by both India and Sri Lanka. It was the first time the nation hosting the final won the Cup.

Here is a look at Malinga’s yorkers in his 6/38 against Kenya.

ESPN Cricinfo ICC World Cup 2011 page

Cricbuzz ICC World Cup 2011 page

2011 ICC World Cup records

## April 2, 2011

### Some notes on the Schur Complement

Filed under: mathematics — ckrao @ 6:11 am

In matrix theory the Schur complement of a square submatrix A in the bigger square matrix

$\displaystyle M = \left[\begin{array}{cc}A & B\\C&D \end{array}\right]$

is defined as $S = D-CA^{-1}B$. Here we assume A and D are square matrices and A is invertible. Similarly the Schur complement of D in M (assuming D is invertible) is ${A - BD^{-1}C}$. The formulas are reasonably easy to recall: go clockwise around the block matrix from the opposite corner, remembering to invert the opposite square submatrix.

Here are some properties and uses of the Schur complement.

1. It comes up in Gaussian elimination in the solution to a system of equations:

To eliminate x in the system of equations Ax + By = u, Cx + Dy = v, we multiply the first equation by ${CA^{-1}}$ and subtract it from the second equation to obtain $(D-CA^{-1}B)y = Sy = v - CA^{-1}u$.

2. The Schur determinant formula:

$\displaystyle \det \left[\begin{array}{cc}A & B\\C&D \end{array}\right] = \det A \det S$

This is proved by taking determinants of both sides of the matrix factorisation

$\displaystyle \left[\begin{array}{cc}A & B\\C&D \end{array}\right] = \left[\begin{array}{cc}I & 0\\CA^{-1}&I \end{array}\right]\left[\begin{array}{cc}A & B\\ 0 &D-CA^{-1}B \end{array}\right].$

The formula tells us that, provided A (or the bottom right submatrix D) is invertible, the large matrix is invertible if and only if the corresponding Schur complement is invertible.
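With scalar (1×1) blocks the determinant formula is easy to check in a few lines of Python (a toy sketch; the sample values are mine):

```python
# Scalar blocks: M = [[a, b], [c, d]] with A = a, B = b, C = c, D = d
a, b, c, d = 4.0, 1.0, 2.0, 3.0

s = d - c * (1 / a) * b        # Schur complement of A in M
det_m = a * d - b * c          # determinant of the 2x2 matrix M

# Schur determinant formula: det M = det A * det S
assert abs(det_m - a * s) < 1e-12
```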

3. The Guttman rank additivity formula [1, p14]:

$\displaystyle {\rm rank} \left[\begin{array}{cc}A & B\\C&D \end{array}\right] = {\rm rank} A + {\rm rank} S$

4. It arises in matrix block inversion: in particular the (2,2) block of the inverse of the block matrix is the inverse of the Schur complement of A.

5. The addition theorem for the Schur complement of Hermitian matrices ([1, p28]):

Firstly we define the inertia of a matrix. Let ${\rm In}(M)$ be the triple (p,q,z) of integers representing the number of positive, negative and zero eigenvalues respectively of the Hermitian matrix M (${M = M^*}$). Then

$\displaystyle {\rm In}(M) = {\rm In}(A) + {\rm In}(S)$.

6. It comes up in the minimisation of quadratic forms [3, App A5.5]:

If A is positive definite, the quadratic form $u^T Au + 2v^TB^Tu + v^T Cv$ in the variable u (v is a constant vector) has minimum value ${v^T S v}$, where here $S = C - B^TA^{-1}B$, achieved when $u = -A^{-1}Bv$. This follows from the completion of squares:

$\displaystyle u^T Au + 2v^TB^Tu + v^T Cv = ( u + A^{-1}Bv)^T A(u + A^{-1}Bv) + v^T S v.$
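In the scalar case the completion-of-squares claim is easy to verify numerically (a toy sketch; the sample values are mine):

```python
# Scalar case: minimise a*u^2 + 2*v*b*u + c*v^2 over u, with a > 0
a, b, c, v = 2.0, 1.0, 5.0, 3.0

def q(u):
    return a * u * u + 2 * v * b * u + c * v * v

s = c - b * (1 / a) * b        # Schur complement c - b a^{-1} b
u_star = -(1 / a) * b * v      # minimiser u = -A^{-1} B v

# Minimum value is v^T S v
assert abs(q(u_star) - s * v * v) < 1e-9
# Nearby points do no better
for du in (-0.1, 0.1, 1.0):
    assert q(u_star + du) >= q(u_star)
```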

7. A least squares application of 6 is the following:

Let x and y be two random vectors. The covariance of the error in linearly estimating x from y via a matrix K (chosen to minimise the expectation $E\|x-Ky\|^2$), assuming the covariance $R_y$ of y is positive definite, is the Schur complement of $R_y$ in the joint covariance matrix

$\displaystyle \Sigma = E \left[ \begin{array}{c} x\\y \end{array}\right] \left[\begin{array}{cc}x^T & y^T \end{array}\right] = \left[\begin{array}{cc}R_x & R_{xy}\\ R_{yx}& R_y \end{array}\right].$

Similarly the covariance of the error in estimating y from x is the Schur complement of $R_x$ in $\Sigma$:

$\displaystyle {\rm var}[y | x] = R_y - R_{yx} R_x^{-1}R_{xy}$.

8. The above leads to a matrix version of the Cauchy-Schwarz inequality [4]:

Since any covariance matrix is non-negative definite we have $R_y - R_{yx} R_x^{-1}R_{xy} \succeq 0$. With the inner product of random vectors in $\mathbb{R}^n$ defined as $\langle x, y \rangle := E xy^T$ this becomes

$\displaystyle \|y\|^2 - \langle y,x \rangle \|x\|^{-2} \langle x, y \rangle \succeq 0.$

#### References

[1] F. Zhang, “The Schur Complement and its applications”, Springer 2005.

[2] “Block matrix decompositions” in https://ccrma.stanford.edu/~jos/lattice/Block_matrix_decompositions.html

[3] S. Boyd and L. Vandenberghe, “Convex Optimization”, Cambridge University Press, 2004.

[4] Kailath, Sayed and Hassibi, “Linear Estimation”, Prentice Hall, 2000.

[5] Schur complement, Wikipedia
