Chaitanya's Random Pages

January 3, 2011

Solving linear recurrence relations with constant coefficients

Filed under: mathematics — ckrao @ 11:16 am

Here is a method (algorithm) to solve recurrence relations of the following form, without the use of generating functions or z-transforms.

\displaystyle x_{n+k} + c_{k-1}x_{n+k-1} + c_{k-2}x_{n+k-2} + \ldots + c_0 x_n = f(n), \quad n = 0, 1, \ldots \quad (*)

We assume we are given k initial conditions (enabling the kth-order equation to be solved uniquely), and that f(n) has the form

\displaystyle f(n) = \sum_{j=1}^J \alpha_j^n q_j(n), where q_j(n) is a polynomial of degree d_j.

1. Solve the equation with f(n) = 0 first (the homogeneous form of the equation).

a) Write down, then factorise, the characteristic equation (which arises from setting x_n = \lambda^n).

\displaystyle \lambda^k + c_{k-1}\lambda^{k-1} + \ldots + c_1 \lambda + c_0 = (\lambda - r_1)^{s_1} (\lambda - r_2)^{s_2} \ldots (\lambda - r_m)^{s_m},

where r_1, r_2, \ldots, r_m are distinct.

b) The homogeneous solution (i.e. that corresponding to f(n) = 0) is

\displaystyle x_n^{(h)} = r_1^n p_1(n) + r_2^np_2(n) + \ldots + r_m^np_m(n),

where p_i(n) is a polynomial of degree s_i-1.

2. Incorporate the term f(n). For j = 1 to J,

  • If \alpha_j \neq r_i for all i, x_n^{(p,j)} = \alpha_j^n\left(t_{j,0} + t_{j,1} n + \ldots + t_{j,d_j} n^{d_j}\right)
  • If \alpha_j = r_i for some i, x_n^{(p,j)} = \alpha_j^n n^{s_i}\left(t_{j,0} + t_{j,1} n + \ldots + t_{j,d_j} n^{d_j}\right)

3. Find the coefficients t_{j,0}, \ldots, t_{j,d_j} for each j by substituting x_n^{(p,j)} into (*).

4. Combine the homogeneous and particular solutions to find the general solution x_n = x_n^{(h)} + \sum_{j=1}^J x_n^{(p,j)}.

5. Use the initial conditions to find the coefficients of the polynomials p_1(n), p_2(n), \ldots, p_m(n) from step 1b.

Let us try this out with the following example.

Solve x_{n+2} + 2x_{n+1} - 3x_n = 4, \quad x_0 = 6, x_1 = -1.

1a) The characteristic equation is found by substituting x_n = \lambda^n and setting the right side to 0:

\displaystyle \lambda^2 + 2\lambda - 3 = (\lambda + 3)(\lambda - 1)

b) The homogeneous solution is then given by x_n^{(h)} = C_1(1)^n + C_2(-3)^n.

2. We have f(n) = 4 = 4 \cdot 1^n. Since 1 is a root of the characteristic polynomial of multiplicity 1, we try a particular solution of the form x_n^{(p)} = n^1 t_0 = nt_0.

3. Substituting x_n^{(p)} into the original equation:

\displaystyle (n+2)t_0 + 2(n+1)t_0 - 3nt_0 = 4

The coefficients of n on the left side cancel (t_0 + 2t_0 - 3t_0 = 0), and matching the constant terms gives 2t_0 + 2t_0 = 4, so t_0 = 1.

4. Combining homogeneous and particular solutions, x_n = x_n^{(h)} + x_n^{(p)} = C_1 + C_2(-3)^n + n.

5. Use the initial values to find C_1, C_2:

  • n = 0:\quad \quad x_0 = C_1 + C_2(-3)^0 + 0 = C_1 + C_2 = 6
  • n = 1:\quad \quad x_1 = C_1 + C_2(-3)^1 + 1 = C_1 - 3C_2 + 1= -1

Solving gives C_1 = 4, C_2 = 2. Hence x_n = 2(-3)^n + n + 4, and it can be verified by substitution that this indeed satisfies the recurrence with the given initial conditions.
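Such a verification is easy to automate; for instance, in Python:

```python
# A quick check of the solution just obtained:
def x(n):
    return 2 * (-3)**n + n + 4

assert x(0) == 6 and x(1) == -1                        # initial conditions
for n in range(20):
    assert x(n + 2) + 2 * x(n + 1) - 3 * x(n) == 4     # the recurrence (*)
```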

So why does this procedure work? The remainder of this post attempts to provide some insight. It is not intended to be a complete proof, but should provide enough examples for one to be able to generalise and complete the details.

Part A. We start by considering the first order equation x_{n+1} - \beta x_n = \sum_{j = 1}^J q_j(n) \alpha_j^n. \quad (+)

1. By setting x_n = y_n \beta^n this can be converted to an equation of the form y_{n+1} - y_n = \frac{1}{\beta}\sum_{j = 1}^J q_j(n) (\alpha_j/\beta)^n, so we are reduced to solving an equation of the original form (+) with \beta = 1.

2. If we can find a solution to x_{n+1} - x_n = q_j(n) \alpha_j^n for each j, then summing the J solutions will give a solution for x_{n+1} - x_n = \sum_{j=1}^J q_j(n) \alpha_j^n. We are reduced further to solving an equation of the form x_{n+1} - x_n = q(n) \alpha^n.

3. If we let x_n = p(n)\alpha^n, x_{n+1} - x_n = q(n) \alpha^n becomes

\displaystyle \alpha^{n+1}p(n+1) - \alpha^n p(n) = q(n)\alpha^n,

which reduces to \alpha p(n+1) - p(n) = q(n). To solve this we have the following lemma which is interesting in its own right.

Lemma 1: Let q(n) be a given polynomial of degree d, and \alpha a constant. Then there exists a polynomial p(n) such that the equation \alpha p(n+1) - p(n) = q(n) is satisfied.

  • If \alpha \neq 1, p has degree d.
  • If \alpha = 1, p has degree d+1.

For example, 2p(n+1) - p(n) = n has the solution p(n) = n-2 (linear), while p(n+1) - p(n) = n has the solution p(n) = \frac{1}{2}n(n-1) (quadratic).

To prove the lemma, we find the coefficients of p(n) by solving a system of linear equations. Invertibility follows from the fact that the system is lower triangular with non-zero diagonal entries, so the coefficients can be found one at a time by substitution, starting with the leading coefficient.

Case 1: \alpha \neq 1

Let p(n) = \sum_{k = 0}^d c_k n^k. Then

\begin{array}{lcl} p(n+1) &=& \sum_{k = 0}^d c_k (n+1)^k\\&=&\sum_{k = 0}^d c_k \sum_{l = 0}^k n^l \binom{k}{l}\\& = & \sum_{m=0}^d n^m \left[\sum_{k=m}^d\binom{k}{m}c_k\right]\\&=&\sum_{k=0}^d n^k \left[\sum_{j = k}^d \binom{j}{k} c_j\right].\end{array}

Hence \alpha p(n+1) - p(n) = \sum_{k=0}^d n^k \left[\left(\alpha\sum_{j = k}^d \binom{j}{k} c_j\right) - c_k\right].

If q(n) = q_d n^d + q_{d-1}n^{d-1} + \ldots + q_0, matching coefficients of n^k leads to the following system of equations.

\left[\begin{array}{ccccc}\alpha \binom{d}{d}-1&0&0&\cdots&0\\ \alpha \binom{d}{d-1} &\alpha \binom{d-1}{d-1} - 1 & 0 & \cdots&0\\ \alpha \binom{d}{d-2}&\alpha \binom{d-1}{d-2}&\alpha \binom{d-2}{d-2}-1&0&\vdots \\ \vdots & \vdots & & \ddots & \\ \alpha \binom{d}{0} & \alpha \binom{d-1}{0} & \cdots & \alpha \binom{1}{0} & \alpha \binom{0}{0}-1 \end{array}\right]\left[\begin{array}{c}c_d\\c_{d-1}\\c_{d-2}\\ \vdots \\ c_0\end{array}\right] = \left[\begin{array}{c} q_d\\q_{d-1}\\q_{d-2}\\ \vdots \\ q_0\end{array}\right]

For \alpha \neq 1 the diagonal entries are non-zero and hence the system of equations may be inverted to solve for the coefficients c_k of p(n).

Case 2: \alpha = 1

This time we let p(n) = \sum_{k = 1}^{d+1} c_k n^k (degree d+1) and proceed as in case 1 with \alpha set to 1, leading to the following system.

\left[\begin{array}{ccccc}\binom{d+1}{d}& 0 & 0 & \cdots & 0 \\ \binom{d+1}{d-1} & \binom{d}{d-1} & 0 & \cdots & 0\\ \binom{d+1}{d-2} & \binom{d}{d-2} & \binom{d-1}{d-2}&0&\vdots\\ \vdots & \vdots & & \ddots & 0\\ \binom{d+1}{0}&\binom{d}{0}&\cdots&\binom{2}{0}&\binom{1}{0} \end{array}\right]\left[\begin{array}{c}c_{d+1}\\c_d\\c_{d-1}\\ \vdots \\ c_1\end{array}\right] = \left[\begin{array}{c} q_d\\q_{d-1}\\q_{d-2}\\ \vdots \\ q_0 \end{array}\right]

Again the diagonal entries are non-zero, so the system may be solved one coefficient at a time starting with c_{d+1}. This completes the proof of Lemma 1. Note that in this case (\alpha = 1) any constant term can be added to p(n) and the result still satisfies p(n+1) - p(n) = q(n).
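Case 1 of Lemma 1 translates directly into a short program. Below is a minimal Python sketch (the helper name `solve_poly` is my own, not from the post) that solves the triangular system one coefficient at a time, starting with the leading one; exact rational arithmetic avoids round-off.

```python
from fractions import Fraction
from math import comb

def solve_poly(alpha, q):
    """Solve alpha*p(n+1) - p(n) = q(n) for p, assuming alpha != 1 (Case 1).

    q is a coefficient list [q_0, ..., q_d]; the returned list
    [c_0, ..., c_d] holds the coefficients of p.
    """
    d = len(q) - 1
    alpha = Fraction(alpha)
    c = [Fraction(0)] * (d + 1)
    for k in range(d, -1, -1):
        # Row k of the triangular system: alpha * sum_{j>=k} C(j,k) c_j - c_k = q_k
        known = alpha * sum(comb(j, k) * c[j] for j in range(k + 1, d + 1))
        c[k] = (Fraction(q[k]) - known) / (alpha - 1)
    return c

# Lemma 1's example: 2 p(n+1) - p(n) = n gives p(n) = n - 2,
# i.e. coefficient list [-2, 1].
print(solve_poly(2, [0, 1]))
```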

To tie this to the original five steps at the top of the post: the homogeneous solution of x_{n+1} - \beta x_n = 0 is seen to be x_n^{(h)} = \beta^n x_0 (easily linked to the characteristic equation \lambda - \beta = 0). The two cases in step 2 correspond to the two cases of Lemma 1, via the substitution y_n = x_n/\beta^n. The homogeneous and particular solutions may then be added and the initial conditions used to determine any remaining coefficients.

The main conclusion of Part A after following the above steps is as follows:

Proposition 1: The solution of x_{n+1} - \beta x_n = q(n) \alpha^n (where q(n) has degree d) is of the form A \beta^n + q_1(n)\alpha^n, where q_1(n) is a polynomial of degree d if \alpha \neq \beta and of degree d+1 if \alpha = \beta.

Part B: Now we seek to solve the original equation (*). This can be done by successively reducing the order of the left side until a first-order equation is reached, and then applying the result of Proposition 1.

Let S be the shift operator on sequences, so that S applied to the sequence \{x_0, x_1, x_2, \ldots\} yields \{x_1, x_2, x_3, \ldots\}. This is a linear operator in that S(\alpha x_n + \beta y_n) = \alpha S x_n + \beta S y_n where x_n and y_n are sequences. We can multiply S by scalars, so that for example 2S(x_n) = 2x_{n+1} = S(2x_n), indicating also that S commutes with scalar multiplication. Furthermore, we can also take powers of S, corresponding to successive application of the operator (e.g. S^2 x_n = x_{n+2}). Finally, we can form linear combinations of these powers, hence polynomials in S. Let I denote the identity operator. Then equation (*)  can be written as

\displaystyle (S^k + c_{k-1}S^{k-1} + c_{k-2}S^{k-2} + \ldots + c_0 I)x_n = f(n).

The polynomial in S can be factorised, as the algebra of polynomials carries over to the shift operator (i.e. the shift operator generates a commutative ring of linear operators), so we may write something like

x_{n+2} - 5x_{n+1} + 6x_n = (S^2 - 5S + 6I)x_n = (S-2I)(S-3I)x_n = (S-2I)(x_{n+1} - 3x_n) = (S-3I)(S-2I)x_n = (S-3I)(x_{n+1} - 2x_n).

We can start to see how the roots of the characteristic polynomial in 1a) above play a role.
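The factorisation can also be checked numerically: applying (S-2I) then (S-3I), in either order, to an arbitrary sequence gives the same result as applying S^2 - 5S + 6I directly. A small Python sketch:

```python
def op(r, seq):
    # Apply (S - rI) to a finite sequence; the result is one term shorter.
    return [a - r * b for a, b in zip(seq[1:], seq)]

x = [5, -1, 4, 2, 7, 0, 3, 8]  # arbitrary test sequence
direct = [x[n + 2] - 5 * x[n + 1] + 6 * x[n] for n in range(len(x) - 2)]
assert op(2, op(3, x)) == direct == op(3, op(2, x))
```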

Suppose we wish to find the general solution to the second order equation x_{n+2} - 5x_{n+1} + 6x_n = 0. Letting y_n = x_{n+1} - 3x_n, we find from the above factorisation that (S-2I)y_n = 0. The solution to this is y_n = 2^n y_0. Then x_{n+1} - 3x_n = 2^n y_0, and from Proposition 1, this has solution x_n = 2^nC_1 + 3^n C_2.

Consider now the equation x_{n+2} - 4x_{n+1} + 4x_n = 0. This is equivalent to (S-2I)(S-2I)x_n = 0. Letting y_n = x_{n+1} - 2x_n, we find that (S-2I)y_n = 0. The solution to this is y_n = 2^n y_0. Then x_{n+1} - 2x_n = 2^n y_0, and from Proposition 1 this has solution x_n = 2^n (C_1 + C_2 n) (i.e. a different form when there is a repeated root).
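That the repeated-root form really works can again be verified mechanically; here is a brief Python check for a few arbitrary constant choices:

```python
# Check that x_n = 2^n (C1 + C2*n) satisfies x_{n+2} - 4x_{n+1} + 4x_n = 0:
for C1, C2 in [(1, 0), (0, 1), (3, -2)]:
    x = lambda n, a=C1, b=C2: 2**n * (a + b * n)
    assert all(x(n + 2) - 4 * x(n + 1) + 4 * x(n) == 0 for n in range(15))
```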

This can be generalised to higher orders and the case f(n) \neq 0. We factorise the operator to write the equation as (S - r_1I)^{s_1} (S- r_2I)^{s_2} \ldots (S- r_mI)^{s_m}x_n = f(n) and make first-order substitutions, reducing the order of the recurrence by one each time. Each first-order equation is of the form given in Part A and hence may easily be solved by applying Proposition 1.

We will give a final example to illustrate this.

Solve (S-3I)(S-3I)(S-2I)x_n = 2^n n.

Let y_n = (S-2I)x_n, z_n = (S-3I)y_n. Then (S-3I)z_n = 2^n n, which has solution of the form z_n = 2^n (An + B) + 3^n C. Then we have

(S-3I)y_n = z_n = 2^n (An + B) + 3^nC.

This has solution of the form y_n = 2^n (A_1n + B_1) + 3^n(C_1n + D_1). Finally, (S-2I)x_n = 2^n (A_1n + B_1) + 3^n(C_1n + D_1), which by Proposition 1 has solution of the form

x_n = 2^n (A_2n^2 + B_2n + C_2) + 3^n(D_2n + E_2).

Note that C_2, D_2 and E_2 are part of the homogeneous solution: they contribute nothing to the right side of the equation upon substitution, and so are determined only by the initial values. The coefficients A_2 and B_2 are found by substituting x_n = (A_2n^2 + B_2n)2^n into the original equation and comparing coefficients.
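To make that last step concrete: carrying out the coefficient comparison by hand (the specific values A_2 = 1/4 and B_2 = 7/4 are computed here; the post leaves them unspecified) gives a particular part that can be checked in Python against the expanded recurrence.

```python
from fractions import Fraction as F

# The operator (S - 3I)^2 (S - 2I) expands to S^3 - 8S^2 + 21S - 18I.
# Comparing coefficients gives A_2 = 1/4, B_2 = 7/4 (computed here, not in the post).
A2, B2 = F(1, 4), F(7, 4)

def xp(n):
    return F(2)**n * (A2 * n * n + B2 * n)

for n in range(15):
    assert xp(n + 3) - 8 * xp(n + 2) + 21 * xp(n + 1) - 18 * xp(n) == F(2)**n * n
```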

This entire explanation (i.e. Parts A and B) may make better sense after trying more examples. It is nice to have a procedure as outlined in steps 1-5, but more rewarding to see why it works.

