Chaitanya's Random Pages

December 7, 2010

Why circles map to ellipses under linear maps

Filed under: mathematics — ckrao @ 11:16 am

Here is an interesting fact worth pondering. Take a circle and stretch it along some direction: it becomes an ellipse. Now take the ellipse and stretch it again, this time in a different direction. Continue stretching or shrinking along other directions if you like. No matter which directions are used, the final image is an ellipse, having two perpendicular axes (ignoring the degenerate case of a line segment). The directions of these axes depend on the directions and amounts of stretching, but the existence of perpendicular axes always remains. This did not seem immediately obvious to me; surely one can be clever about the way one stretches and destroy the perpendicularity of the axes!

Each stretch is a linear transformation, so any sequence of stretches is again a linear transformation, and any such map acts on the circle as at most two perpendicular stretches, possibly combined with a rotation. That is, circles map to ellipses under linear maps.

One way of seeing why this is the case is that the equation of an ellipse is a quadratic form, and applying a linear transformation does not change its degree (i.e. a conic remains a conic). Since the only bounded non-degenerate conic sections are ellipses, the result follows. However, I would be more satisfied with a coordinate-free proof. Below I outline one argument that I read recently. (See more via Trefethen and Bau, PlanetMath, and also the nice explanation due to Aubrey Poor here.)
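The quadratic-form argument can be made concrete with a small numerical sketch (assuming numpy is available; the stretch directions and factors below are arbitrary example values). If x = Ac with ||c|| = 1 and A invertible, then x^T (AA^T)^{-1} x = 1, a bounded quadratic form, i.e. an ellipse:

```python
import numpy as np

def stretch(direction_deg, factor):
    """2x2 matrix that scales by `factor` along the given direction
    and leaves the perpendicular direction unchanged."""
    t = np.deg2rad(direction_deg)
    d = np.array([np.cos(t), np.sin(t)])
    return np.eye(2) + (factor - 1.0) * np.outer(d, d)

# A sequence of stretches along non-perpendicular directions
# (hypothetical example values).
A = stretch(70, 0.5) @ stretch(25, 3.0) @ stretch(0, 2.0)

# Sample the image of the unit circle under A.
theta = np.linspace(0, 2 * np.pi, 400)
circle = np.vstack([np.cos(theta), np.sin(theta)])
image = A @ circle

# Every image point x satisfies x^T (A A^T)^{-1} x = 1, since
# A^T (A A^T)^{-1} A = I for invertible A: a single quadratic form.
Q = np.linalg.inv(A @ A.T)
vals = np.einsum('ij,jk,ik->i', image.T, Q, image.T)
print(np.allclose(vals, 1.0))  # True
```

The three stretches compose into one linear map A, and the whole image lies on a single conic, as claimed.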

Let T be a linear transformation (map) from n-dimensional space to m-dimensional space. T is usually represented by an m by n matrix, but we will not need that here. We will work with complex vector spaces for generality, though a 2-dimensional real vector space is easier to visualise. We wish to show that the unit sphere maps to an ellipsoid, by which we mean that there exist perpendicular axes of the sphere that map to perpendicular axes of the image in the m-dimensional space.

Consider the image of the unit sphere S=\{x: ||x|| = 1\} under T. Since S is compact and x \mapsto ||Tx|| is continuous, there exists a vector v_1 \in S at which ||Tv_1|| is maximal (by the extreme value theorem). Write Tv_1 = \sigma_1 u_1, where u_1 and v_1 are unit vectors and \sigma_1 > 0 (we ignore the degenerate case \sigma_1 = 0). We wish to show that any unit vector v_2 perpendicular to v_1 maps to a vector perpendicular to u_1. Suppose Tv_2 = \alpha u_1 + \beta u_2, where u_2 \perp u_1 is a unit vector (such an orthogonal decomposition is always possible). We now exhibit a vector that maps to something at least as long as \sigma_1 u_1. Consider the unit vector

\displaystyle v = \frac{\sigma_1 v_1 + \alpha^* v_2}{\left(\sigma_1^2 + |\alpha|^2\right)^{1/2}}.


Then

\displaystyle T v = \frac{\sigma_1 Tv_1 + \alpha^* Tv_2}{\left(\sigma_1^2 + |\alpha|^2\right)^{1/2}} = \frac{\sigma_1^2 u_1 + |\alpha|^2 u_1 + \alpha^* \beta u_2}{\left(\sigma_1^2 + |\alpha|^2\right)^{1/2}}.

Since u_1 \perp u_2, the length of this vector is at least that of its u_1-component, namely

\displaystyle \frac{\sigma_1^2 + |\alpha|^2}{\left(\sigma_1^2 + |\alpha|^2\right)^{1/2}} = \left(\sigma_1^2 + |\alpha|^2\right)^{1/2} \geq \sigma_1,

with equality if and only if \alpha = 0. But \sigma_1 was chosen as the maximum of ||Tx|| over unit vectors, so ||Tv|| cannot exceed \sigma_1; this forces \alpha = 0. Hence Tv_2 = \beta u_2 \perp u_1, as we wished to show.
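This orthogonality claim is easy to check numerically. In the sketch below (assuming numpy; the 2 by 2 matrix is an arbitrary seeded example), v_1 is taken from numpy's SVD, which is exactly the unit direction maximising ||Tv||, and a perpendicular v_2 is confirmed to map to a vector perpendicular to u_1:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((2, 2))  # an arbitrary real linear map

# v_1: the unit vector maximising ||T v||. numpy's SVD provides it
# directly as the first right singular vector.
U, s, Vt = np.linalg.svd(T)
v1, u1, sigma1 = Vt[0], U[:, 0], s[0]
print(np.allclose(T @ v1, sigma1 * u1))  # True: T v1 = sigma_1 u1

# Any v_2 perpendicular to v_1 maps to a vector perpendicular to u_1.
v2 = np.array([-v1[1], v1[0]])           # v2 is a unit vector with v2 . v1 = 0
print(abs(np.dot(T @ v2, u1)) < 1e-10)   # True: T v2 is perpendicular to u1
```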

By an inductive argument (restrict T to the orthogonal complement of v_1 and repeat) we may then show that there exist orthonormal vectors v_1, v_2, \ldots, v_r in the n-dimensional space and orthonormal vectors u_1, u_2, \ldots, u_r in the m-dimensional space such that

Tv_i = \sigma_i u_i.

In other words, there exist orthonormal bases of the row space and column space of T that map to each other under T. This is the content of the singular value decomposition (SVD): spheres map to ellipsoids, in any dimension. The \sigma_i are known as singular values and are interpreted as the semi-axis lengths of the ellipsoidal image of the unit sphere. Stacking the v's and u's as columns of matrices V and U (and appending orthonormal vectors from the nullspaces if necessary) gives us the alternate form

\displaystyle AV = U\Sigma, \quad \text{or } \ A = U \Sigma V^*,

where A is the matrix representation of the linear transformation T, and \Sigma has the singular values down its diagonal and zero entries elsewhere.
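As a quick sanity check of the matrix form (a sketch assuming numpy; A below is an arbitrary seeded 3 by 2 example, so T maps 2-dimensional space into 3-dimensional space):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))  # an arbitrary map from 2-D to 3-D

# Reduced SVD: Vh stacks the v_i^* as rows, U stacks the u_i as columns.
U, s, Vh = np.linalg.svd(A, full_matrices=False)
Sigma = np.diag(s)

# A V = U Sigma, equivalently A = U Sigma V^*.
print(np.allclose(A @ Vh.conj().T, U @ Sigma))  # True
print(np.allclose(A, U @ Sigma @ Vh))           # True

# The largest semi-axis of the ellipsoidal image of the unit sphere
# equals sigma_1, which is the spectral (operator 2-) norm of A.
print(np.isclose(np.linalg.norm(A, 2), s[0]))   # True
```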

The SVD (which applies to any matrix) is the generalisation of the spectral theorem (which applies to normal matrices) and has wide applications, from solving least squares problems (via the pseudo-inverse) to finding low-rank approximations to matrices (enabling compression).
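Both applications can be sketched in a few lines (assuming numpy; the matrix and right-hand side are arbitrary seeded examples). When A has full column rank, the pseudo-inverse is V \Sigma^{-1} U^*, and the best rank-1 approximation keeps only the largest singular value (the Eckart-Young theorem):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3))  # overdetermined system: 6 equations, 3 unknowns
b = rng.standard_normal(6)

# Least squares via the pseudo-inverse A^+ = V Sigma^{-1} U^*
# (valid here since A has full column rank).
U, s, Vh = np.linalg.svd(A, full_matrices=False)
A_pinv = Vh.conj().T @ np.diag(1.0 / s) @ U.conj().T
x = A_pinv @ b
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True

# Best rank-1 approximation: keep only sigma_1 u_1 v_1^*. The
# spectral-norm error is exactly the next singular value sigma_2.
A1 = s[0] * np.outer(U[:, 0], Vh[0])
print(np.isclose(np.linalg.norm(A - A1, 2), s[1]))  # True
```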

