# Chaitanya's Random Pages

## December 12, 2012

### Matrices that are easy to invert

Filed under: mathematics — ckrao @ 12:46 pm

Here are some classes of real non-singular matrices that are easier to invert than most. In an earlier post I referred to involutory matrices, which have the special property of being equal to their own inverse; they are therefore the easiest type of matrix to invert!

• 1 by 1 and 2 by 2 matrices are easy to invert:
$\displaystyle \left[ \begin{array}{cc} a & b \\ c & d \end{array} \right]^{-1} = \frac{1}{ad - bc} \left[ \begin{array}{cc} d & -b \\ -c & a \end{array} \right]$
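The 2 by 2 formula takes only a line or two of code. A minimal sketch in plain Python (the function name `inv2x2` is mine):

```python
def inv2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the adjugate formula above."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return [[d / det, -b / det], [-c / det, a / det]]

# det([[1, 2], [3, 4]]) = 1*4 - 2*3 = -2, so the inverse is [[-2, 1], [3/2, -1/2]]
print(inv2x2(1, 2, 3, 4))  # [[-2.0, 1.0], [1.5, -0.5]]
```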
• Diagonal matrices (including the identity matrix) are among the easiest to invert:
$\displaystyle \left[ \begin{array}{cccc} d_1 & & & \\ & d_2 & & \\ & & \ddots & \\ & & & d_n \end{array} \right]^{-1} = \left[ \begin{array}{cccc} 1/d_1 & & & \\ & 1/d_2 & & \\ & & \ddots & \\ & & & 1/d_n \end{array} \right]$
• Orthogonal matrices $P$ satisfy $P^{-1} = P^T$. Special cases are permutation matrices and rotation matrices.
$\displaystyle \left[ \begin{array}{cccc} 0 & 0 & 1 & 0 \\ 1 & 0 & 0 &0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \end{array} \right]^{-1} = \left[ \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array} \right], \quad \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos \theta & -\sin \theta \\ 0 & \sin \theta & \cos \theta \end{array} \right]^{-1} = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos \theta & \sin \theta \\ 0 & -\sin \theta & \cos \theta \end{array} \right]$
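Checking an orthogonality claim numerically needs nothing more than a transpose and a matrix product. A plain-Python sketch using the permutation matrix above (helper names are mine):

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# The 4 by 4 permutation matrix from the example above
P = [[0, 0, 1, 0],
     [1, 0, 0, 0],
     [0, 0, 0, 1],
     [0, 1, 0, 0]]

I = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
assert matmul(P, transpose(P)) == I  # P P^T = I, so P^{-1} = P^T
print(transpose(P))                  # matches the stated inverse
```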
• A matrix with 1s down its diagonal and non-zero off-diagonal entries confined to a single row or column is easily inverted by negating those entries (atomic triangular matrices are a special case):
$\displaystyle \left[ \begin{array}{ccccc} 1 & a_1 & & & \\ & 1 & & & \\ & a_2 &1 & & \\ & \vdots & & \ddots & \\ & a_{n-1} & & & 1 \end{array} \right]^{-1} = \left[ \begin{array}{ccccc} 1 & -a_1 & & & \\ & 1 & & & \\ & -a_2 &1 & & \\ & \vdots & & \ddots & \\ & -a_{n-1} & & & 1 \end{array} \right]$
Other special cases of this are elementary matrices corresponding to the addition of a multiple of one row to another:
$\displaystyle \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 &0 \\ m & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right]^{-1} = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ -m & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right]$
• Bidiagonal matrices with 1s down the main diagonal are inverted as follows:
$\displaystyle \left[ \begin{array}{cccc} 1 & -a_3 & 0 & 0 \\ 0 & 1 & -a_2 & 0 \\ 0 & 0 & 1 & -a_1 \\ 0 & 0 & 0 & 1 \end{array} \right]^{-1} =\left[ \begin{array}{cccc} 1 & a_3 & a_3 a_2 & a_3 a_2 a_1 \\ 0 & 1 & a_2 & a_2 a_1 \\ 0 & 0 & 1 & a_1 \\ 0 & 0 & 0 & 1 \end{array} \right]$
A special case of this is that the sum matrix and difference matrix are inverses:
$\displaystyle \left[ \begin{array}{cccc} 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1 \end{array} \right]^{-1} =\left[ \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{array} \right]$
Another special case is the following matrix, whose inverse has alternating signs:
$\displaystyle \left[ \begin{array}{cccc} 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{array} \right]^{-1} =\left[ \begin{array}{cccc} 1 & -1 & 1 & -1 \\ 0 & 1 & -1 & 1 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1 \end{array} \right]$
If one thinks about matrix inversion in terms of Gauss-Jordan elimination, keeping track of the order in which row/column operations can be done allows us to carry out matrix inversions such as the following:
$\displaystyle \left[ \begin{array}{ccccc} 1 & 0 & 0 & 0 & 4 \\ 0 & 1 & 1 & 0 & 3\\ 0 & 0 & 1 & 2 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 1 \end{array} \right]^{-1} =\left[ \begin{array}{ccccc} 1 & 0 & 0 & 0 & -4 \\ 0 & 1 & -1 & 2 & -3\\ 0 & 0 & 1 & -2 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 1\end{array} \right], \quad \left[ \begin{array}{ccc} 1 & 0 & -2 \\ 0 & 1 & 0 \\ 0 & -3 & 1 \end{array} \right]^{-1} =\left[ \begin{array}{ccc} 1 & 6 & 2 \\ 0 & 1 & 0 \\ 0 & 3 & 1 \end{array} \right]$
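All of the unit-triangular inverses above follow from one fact: writing the matrix as $I + N$ with $N$ strictly triangular gives $N^n = 0$, so the Neumann series terminates: $(I + N)^{-1} = I - N + N^2 - \cdots \pm N^{n-1}$. A plain-Python sketch (helper names are mine):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv_unit_triangular(M):
    """Invert M = I + N, where N = M - I is strictly triangular (hence
    nilpotent), via the finite series I - N + N^2 - ... +- N^{n-1}."""
    n = len(M)
    I = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    N = [[M[i][j] - I[i][j] for j in range(n)] for i in range(n)]
    result = [row[:] for row in I]
    power = [row[:] for row in I]
    sign = 1
    for _ in range(n - 1):
        power = matmul(power, N)
        sign = -sign
        result = [[result[i][j] + sign * power[i][j] for j in range(n)]
                  for i in range(n)]
    return result

# The 4 by 4 bidiagonal example above with a_1 = a_2 = a_3 = 1:
M = [[1, -1, 0, 0],
     [0, 1, -1, 0],
     [0, 0, 1, -1],
     [0, 0, 0, 1]]
print(inv_unit_triangular(M))  # the all-ones upper triangular sum matrix
```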
• Integer-valued matrices that are equivalent (up to row or column operations which don't change the determinant) to a triangular matrix with 1 or -1 as diagonal entries have inverses that are integer-valued. See [1] for more details.
• Matrices that are the sum of the identity matrix and a matrix with all entries equal to a constant are inverted as follows.
If $\displaystyle u = \left( \begin{array}{cccc} 1 & 1& \ldots & 1 \end{array} \right)$ is the all-ones row vector of length $n$, then $\displaystyle \left( I + k u^T u \right)^{-1} = I - \frac{k}{1 + nk} u^T u$ (valid when $1 + nk \neq 0$). This is a special case of the formula $\displaystyle (I + AB)^{-1} = I - A(I + BA)^{-1} B$. For column vectors $x$ and $y$ (with $1 + x^T y \neq 0$) this becomes $\displaystyle (I + xy^T)^{-1} = I - \frac{x y^T}{1 + x^T y}$.
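The rank-one formula is easy to sanity-check numerically. A plain-Python sketch with arbitrary made-up vectors (the specific values of `x` and `y` are mine):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

x = [1.0, 2.0, 3.0]   # arbitrary column vectors (values made up)
y = [4.0, 0.0, 1.0]
n = len(x)
xyT = [[xi * yj for yj in y] for xi in x]            # outer product x y^T
xty = sum(xi * yi for xi, yi in zip(x, y))           # x^T y = 7 here

A    = [[(i == j) + xyT[i][j] for j in range(n)] for i in range(n)]
Ainv = [[(i == j) - xyT[i][j] / (1 + xty) for j in range(n)] for i in range(n)]

prod = matmul(A, Ainv)
assert all(abs(prod[i][j] - (i == j)) < 1e-12 for i in range(n) for j in range(n))
```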
• The inverse of the $n \times n$ second difference matrix (here with a 1 rather than a 2 in its top-left corner) is
$\displaystyle \begin{aligned} \left[ \begin{array}{ccccc} 1 & -1 & 0 & \cdots & \\ -1 & 2 & -1 & \ddots & \\ 0 & -1 & 2 & -1 & \\ & \ddots & & \ddots & \\ & & 0 & -1 & 2 \end{array} \right]^{-1} &= \left[ \begin{array}{ccccc} n & n-1 & n-2 & \cdots & 1 \\ n-1 & n-1 & n-2 & \cdots & 1 \\ n-2 & n-2 & n-2 & \cdots & 1\\ \vdots & \vdots & & \ddots & \vdots \\ 1 & 1 & \cdots & & 1 \end{array} \right]\\ &= \left[ \begin{array}{ccccc} 1 & 1 & 1 & \cdots & 1 \\ 0 & 1 & 1 & \cdots & 1 \\ 0 & 0 & 1 & \cdots & 1\\ \vdots & \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & 1 \end{array} \right] \left[ \begin{array}{ccccc} 1 & 0 & 0 & \cdots & 0 \\ 1 & 1 & 0 & \cdots & 0 \\ 1 & 1 & 1 & & 0\\ \vdots & \vdots & & \ddots & \\ 1 & 1 & \cdots & 1 & 1 \end{array} \right].\end{aligned}$
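The triangular factorization in the last line can be checked concretely, say for $n = 4$. A plain-Python sketch (helper names are mine):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

n = 4
U = [[1 if j >= i else 0 for j in range(n)] for i in range(n)]  # upper all-ones
L = [[1 if j <= i else 0 for j in range(n)] for i in range(n)]  # lower all-ones
Ainv = matmul(U, L)  # entry (i, j) works out to n - max(i, j) with 0-indexing

# Second difference matrix with a 1 in the top-left corner
A = [[0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = 1 if i == 0 else 2
    if i > 0:
        A[i][i - 1] = A[i - 1][i] = -1

I = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
assert matmul(A, Ainv) == I
print(Ainv)  # [[4, 3, 2, 1], [3, 3, 2, 1], [2, 2, 2, 1], [1, 1, 1, 1]]
```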
• Using identities such as $(AB)^{-1} = B^{-1} A^{-1}$, $(A^T)^{-1} = (A^{-1})^T$, $(aA)^{-1} = a^{-1} A^{-1}$ and $(I + A)^{-1} = A^{-1} (A^{-1} + I)^{-1}$, where $A$ and $B$ are easily invertible, leads to simplifications. For example, if $B$ is diagonal, $AB$ multiplies the columns of $A$ by the entries of $B$ and so $(AB)^{-1}$ will multiply the rows of $A^{-1}$ by the entries of $B^{-1}$.
• The identity $\displaystyle (A \otimes B)^{-1} = A^{-1} \otimes B^{-1}$ (where $\otimes$ is the Kronecker product) leads to identities such as
$\displaystyle \left[ \begin{array}{cc} aI & bI \\ cI & dI\end{array} \right]^{-1} = \frac{1}{ad - bc} \left[ \begin{array}{cc} dI & -bI \\ -cI & aI\end{array} \right].$
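This identity can be checked directly with a small Kronecker-product helper. A plain-Python sketch (the helper names and scalar values are mine):

```python
def kron(A, B):
    """Kronecker product of two matrices given as nested lists."""
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

a, b, c, d = 1.0, 2.0, 3.0, 4.0   # made-up scalars with ad - bc = -2
I2 = [[1.0, 0.0], [0.0, 1.0]]

M = kron([[a, b], [c, d]], I2)    # the block matrix [[aI, bI], [cI, dI]]
det = a * d - b * c
Minv = kron([[d / det, -b / det], [-c / det, a / det]], I2)

I4 = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
assert matmul(M, Minv) == I4
```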
• The following block matrix results may also be useful [2].
$\displaystyle \left[ \begin{array}{cc} I & B \\ 0 & I\end{array} \right]^{-1} = \left[ \begin{array}{cc} I & -B \\ 0 & I\end{array} \right],\\ \left[ \begin{array}{cc} A & 0 \\ 0 & D\end{array} \right]^{-1} = \left[ \begin{array}{cc} A^{-1} & 0 \\ 0 & D^{-1}\end{array} \right], \\ \left[ \begin{array}{cc} A & B \\ 0 & D\end{array} \right]^{-1} = \left[ \begin{array}{cc} A^{-1} & -A^{-1}BD^{-1} \\ 0 & D^{-1}\end{array} \right],\\ \left[ \begin{array}{ccc} I & A & B \\ 0 & I & C\\ 0 & 0 & I \end{array} \right]^{-1} = \left[ \begin{array}{ccc} I & -A & AC-B \\ 0 & I & -C\\ 0 & 0 & I \end{array} \right]$
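The third of these identities can be sanity-checked with concrete $2 \times 2$ blocks. A plain-Python sketch (the helper names and the particular block values are made up):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def neg(M):
    return [[-x for x in row] for row in M]

def block2x2(A, B, C, D):
    """Assemble [[A, B], [C, D]] from four equal-sized square blocks."""
    return [a + b for a, b in zip(A, B)] + [c + d for c, d in zip(C, D)]

A = [[2.0, 0.0], [0.0, 2.0]];  Ainv = [[0.5, 0.0], [0.0, 0.5]]
D = [[1.0, 1.0], [0.0, 1.0]];  Dinv = [[1.0, -1.0], [0.0, 1.0]]
B = [[1.0, 2.0], [3.0, 4.0]]
Z = [[0.0, 0.0], [0.0, 0.0]]

M = block2x2(A, B, Z, D)
# [[A, B], [0, D]]^{-1} = [[A^{-1}, -A^{-1} B D^{-1}], [0, D^{-1}]]
Minv = block2x2(Ainv, neg(matmul(matmul(Ainv, B), Dinv)), Z, Dinv)

I4 = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
assert matmul(M, Minv) == I4
```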

#### References

[1] R. Hanson, Integer Matrices Whose Inverses Contain Only Integers, The Two-Year College Mathematics Journal, Vol. 13, No. 1 (Jan. 1982), pp. 18-21.

[2] D. Bernstein, Matrix Mathematics: Theory, Facts, and Formulas, Princeton University Press, 2011.