8.3: The Partial Fraction Expansion of the Resolvent


Partial Fraction Expansion of the Transfer Function

The Gauss-Jordan method informs us that \(R\) will be a matrix of rational functions with a common denominator. In keeping with the notation of the previous chapters, we assume the denominator to have \(h\) distinct roots, \(\{\lambda_{j} | j = 1, \cdots, h\}\), with associated multiplicities \(\{m_{j} | j = 1, \cdots, h\}\).
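
For instance, for the assumed \(3 \times 3\) matrix \(B\) used in the numerical sketches below (an example of our own choosing, with the double eigenvalue \(\lambda_{1} = 1\) and the simple eigenvalue \(\lambda_{2} = 2\); it need not be the matrix of the introduction), Gauss-Jordan gives

\[B = \begin{pmatrix} {1}&{0}&{0}\\ {1}&{1}&{0}\\ {0}&{0}&{2} \end{pmatrix}, \quad R(s) = (sI-B)^{-1} = \begin{pmatrix} {\frac{1}{s-1}}&{0}&{0}\\ {\frac{1}{(s-1)^2}}&{\frac{1}{s-1}}&{0}\\ {0}&{0}&{\frac{1}{s-2}} \end{pmatrix} \nonumber\]

and each entry may indeed be written over the common denominator \((s-1)^{2}(s-2)\).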

Now, assembling the partial fraction expansions of each element of \(R\) we arrive at

\[R(s) = \sum_{j = 1}^{h} \sum_{k=1}^{m_{j}} \frac{R_{j,k}}{(s-\lambda_{j})^{k}} \nonumber\]

where, recalling Cauchy's Theorem, the matrix \(R_{j,k}\) is given by

\[R_{j,k} = \frac{1}{2\pi i} \int_{C_{j}} R(z)(z-\lambda_{j})^{k-1} dz \nonumber\]
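
For illustration, here is a minimal Python/NumPy sketch (our own, assuming \(B\) is the example matrix displayed above) that approximates this integral by the trapezoidal rule on a circle \(C_{j}\) about \(\lambda_{j}\); the helpers resolvent and R_jk are hypothetical names, not part of any library.

import numpy as np

# Assumed example matrix: eigenvalue 1 of multiplicity 2, eigenvalue 2 simple.
B = np.array([[1., 0., 0.],
              [1., 1., 0.],
              [0., 0., 2.]])
I = np.eye(3)

def resolvent(z):
    # R(z) = (zI - B)^{-1}
    return np.linalg.inv(z * I - B)

def R_jk(lam, k, radius=0.4, n=400):
    # Trapezoidal rule for (1/(2 pi i)) times the integral of R(z)(z - lam)^(k-1) dz
    # over the circle C_j : z = lam + radius * exp(i t), 0 <= t < 2 pi.
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = lam + radius * np.exp(1j * t)
    dz = 1j * radius * np.exp(1j * t) * (2.0 * np.pi / n)
    total = np.zeros((3, 3), dtype=complex)
    for zi, dzi in zip(z, dz):
        total += resolvent(zi) * (zi - lam) ** (k - 1) * dzi
    return total / (2.0j * np.pi)

print(np.round(R_jk(1.0, 1).real, 6))   # approximately R_{1,1}
print(np.round(R_jk(1.0, 2).real, 6))   # approximately R_{1,2}
print(np.round(R_jk(2.0, 1).real, 6))   # approximately R_{2,1}

Each printed matrix should agree, to rounding error, with the corresponding block listed in the example that follows.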

Example \(\PageIndex{1}\)

Looking at the example from the introduction, we find

\[\begin{array}{ccc} {R_{1,1} = \begin{pmatrix} {1}&{0}&{0}\\ {0}&{1}&{0}\\ {0}&{0}&{0} \end{pmatrix}}&{R_{1,2} = \begin{pmatrix} {0}&{0}&{0}\\ {1}&{0}&{0}\\ {0}&{0}&{0} \end{pmatrix}}&{R_{2,1} = \begin{pmatrix} {0}&{0}&{0}\\ {0}&{0}&{0}\\ {0}&{0}&{1} \end{pmatrix}} \end{array} \nonumber\]

One notes immediately that these matrices enjoy some amazing properties. For example

\[\begin{array}{ccccc} {R_{1,1}^{2} = R_{1,1}}&{R_{1,2}^{2} = 0}&{R_{1,1} R_{2,1} = 0}&{and}&{R_{2,1}^{2} = R_{2,1}} \end{array} \nonumber\]
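
These identities can be confirmed directly from the matrices above; here is a minimal NumPy check (purely illustrative).

import numpy as np

R11 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0]])
R12 = np.array([[0, 0, 0], [1, 0, 0], [0, 0, 0]])
R21 = np.array([[0, 0, 0], [0, 0, 0], [0, 0, 1]])
Z = np.zeros((3, 3), dtype=int)

print(np.array_equal(R11 @ R11, R11))   # R_{1,1}^2 = R_{1,1}
print(np.array_equal(R12 @ R12, Z))     # R_{1,2}^2 = 0
print(np.array_equal(R11 @ R21, Z))     # R_{1,1} R_{2,1} = 0
print(np.array_equal(R21 @ R21, R21))   # R_{2,1}^2 = R_{2,1}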

We now show that this is no accident. As a consequence of the integral representation of \(R_{j,k}\) above and the first resolvent identity, we shall find that these results hold in general.

\(R_{j,1}^{2} = R_{j,1}\) as seen above.

Recall that the \(C_{j}\) appearing in the definition of \(R_{j,k}\) is any circle about \(\lambda_{j}\) that neither touches nor encircles any other root. Suppose that \(C_{j}\) and \(C_{j}'\) are two such circles and \(C_{j}'\) encloses \(C_{j}\). Now,

\[R_{j,1} = \frac{1}{2\pi i} \int_{C_{j}} R(z) dz = \frac{1}{2\pi i} \int_{C_{j}'} R(z) dz \nonumber\]

and so

\[R_{j,1}^2 = \frac{1}{(2\pi i)^2} \int_{C_{j}} R(z) dz \int_{C_{j}'} R(w) dw \nonumber\]

\[R_{j,1}^2 = \frac{1}{(2\pi i)^2} \int_{C_{j}} \int_{C_{j}'} R(z) R(w) dw dz \nonumber\]

\[R_{j,1}^2 = \frac{1}{(2\pi i)^2} \int_{C_{j}} \int_{C_{j}'} \frac{R(z)-R(w)}{w-z} dw dz \nonumber\]

\[R_{j,1}^2 = \frac{1}{(2\pi i)^2} \left(\int_{C_{j}} R(z) \int_{C_{j}'} \frac{1}{w-z} dw dz - \int_{C_{j}'} R(w) \int_{C_{j}} \frac{1}{w-z} dz dw\right) \nonumber\]

\[R_{j,1}^2 = \frac{1}{2\pi i} \int_{C_{j}} R(z) dz = R_{j,1} \nonumber\]

We used the first resolvent identity, \(R(z) R(w) = \frac{R(z)-R(w)}{w-z}\), in moving from the second line to the third. In moving from the fourth to the fifth we used only

\[\int_{C_{j}'} \frac{1}{w-z} dw = 2 \pi i \nonumber\]

and

\[\int_{C_{j}} \frac{1}{w-z} dz = 0 \nonumber\]

The latter integrates to zero because \(C_{j}\) does not encircle \(w\).
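
Both scalar integrals are easy to confirm numerically. The sketch below (our own illustration, with arbitrarily chosen circles about \(\lambda_{j} = 1\): \(C_{j}\) of radius 0.3 and \(C_{j}'\) of radius 0.6, and sample points \(z\) on \(C_{j}\) and \(w\) on \(C_{j}'\)) approximates both by the trapezoidal rule.

import numpy as np

def circle(center, radius, n=2000):
    # Quadrature points and increments for the trapezoidal rule on a circle.
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    pts = center + radius * np.exp(1j * t)
    d = 1j * radius * np.exp(1j * t) * (2.0 * np.pi / n)
    return pts, d

lam = 1.0
z0 = lam + 0.3    # a point on the inner circle C_j
w0 = lam + 0.6    # a point on the outer circle C_j'

# C_j' encircles z0, so the integral of dw/(w - z0) over C_j' is 2*pi*i.
w, dw = circle(lam, 0.6)
print(np.sum(dw / (w - z0)))    # approximately 6.2832j

# C_j does not encircle w0, so the integral of dz/(w0 - z) over C_j is 0.
z, dz = circle(lam, 0.3)
print(np.sum(dz / (w0 - z)))    # approximately 0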

As matrices that equal their own squares are projections, we adopt the abbreviation

\[P_{j} \equiv R_{j,1} \nonumber\]

With respect to the product \(P_{j} P_{k}\), for \(j \ne k\), the calculation runs along the same lines. The difference comes in the analogue of the fourth line above where, as \(C_{j}\) lies completely outside of \(C_{k}\), both inner integrals are zero. Hence,

If \(j \ne k\) then \(P_{j} P_{k} = 0\)

Along the same lines we define

\[D_{j} \equiv R_{j,2} \nonumber\]

and prove

If \(1 \le k \le m_{j}-1\) then \(D_{j}^{k} = R_{j,k+1}\). In addition, \(D_{j}^{m_{j}} = 0\)

For \(k\) and \(l\) greater than or equal to one,

\[R_{j,k+1} R_{j,l+1} = \frac{1}{(2\pi i)^2} \int_{C_{j}} R(z)(z-\lambda_{j})^{k} dz \int_{C_{j}'} R(w)(w-\lambda_{j})^{l} dw \nonumber\]

\[R_{j,k+1} R_{j,l+1} = \frac{1}{(2\pi i)^2} \int_{C_{j}} \int_{C_{j}'} R(z) R(w)(z-\lambda_{j})^{k} (w-\lambda_{j})^{l} dw dz \nonumber\]

\[R_{j,k+1} R_{j,l+1} = \frac{1}{(2\pi i)^2} \int_{C_{j}} \int_{C_{j}'} \frac{R(z)-R(w)}{w-z} (z-\lambda_{j})^{k} (w-\lambda_{j})^{l} dw dz \nonumber\]

\[R_{j,k+1} R_{j,l+1} = \frac{1}{(2\pi i)^2} \int_{C_{j}} R(z) (z-\lambda_{j})^{k} \int_{C_{j}'} \frac{(w-\lambda_{j})^{l}}{w-z} dw dz-\frac{1}{(2\pi i)^2} \int_{C_{j}'} R(w) (w-\lambda_{j})^{l} \int_{C_{j}} \frac{(z-\lambda_{j})^{k}}{w-z} dz dw \nonumber\]

\[R_{j,k+1} R_{j,l+1} = \frac{1}{2\pi i} \int_{C_{j}} R(z) (z-\lambda_{j})^{k+l} dz = R_{j,k+l+1} \nonumber\]

because

\[\int_{C_{j}'} \frac{(w-\lambda_{j})^{l}}{w-z} dw = 2 \pi i (z-\lambda_{j})^{l} \nonumber\]

and

\[\int_{C_{j}} \frac{(z-\lambda_{j})^{k}}{w-z} dz = 0 \nonumber\]

With \(k = l = 1\) we have shown \(R_{j,2}^{2} = R_{j,3}\), i.e., \(D_{j}^{2} = R_{j,3}\). Similarly, with \(k = 1\) and \(l = 2\) we find \(R_{j,2} R_{j,3} = R_{j,4}\), i.e., \(D_{j}^{3} = R_{j,4}\). Continuing in this fashion we find \(R_{j,2} R_{j,k+1} = R_{j,k+2}\), or \(D_{j}^{k+1} = R_{j,k+2}\). Finally, at \(k = m_{j}-1\) this becomes

\[D_{j}^{m_{j}} = R_{j, m_{j}+1} = \frac{1}{2\pi i} \int_{C_{j}} R(z)(z-\lambda_{j})^{m_{j}} dz = 0 \nonumber\]

by Cauchy's Theorem, since \(R(z)(z-\lambda_{j})^{m_{j}}\) is analytic in and on \(C_{j}\).
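
To see this proposition at work with a multiplicity larger than two, here is a sketch along the same lines (again our own construction), applying the quadrature idea to an assumed \(3 \times 3\) Jordan block with the single eigenvalue \(\lambda_{1} = 1\) of multiplicity \(m_{1} = 3\).

import numpy as np

J = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [0., 0., 1.]])   # one eigenvalue, lambda_1 = 1, of multiplicity 3
I = np.eye(3)

def R_jk(A, lam, k, radius=0.5, n=400):
    # Trapezoidal approximation of (1/(2 pi i)) times the integral of
    # R(z)(z - lam)^(k-1) dz over the circle C_j about lam.
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = lam + radius * np.exp(1j * t)
    dz = 1j * radius * np.exp(1j * t) * (2.0 * np.pi / n)
    total = np.zeros(A.shape, dtype=complex)
    for zi, dzi in zip(z, dz):
        total += np.linalg.inv(zi * I - A) * (zi - lam) ** (k - 1) * dzi
    return total / (2.0j * np.pi)

D1 = R_jk(J, 1.0, 2)                      # D_1 = R_{1,2}
print(np.round((D1 @ D1).real, 6))        # approximately R_{1,3} ...
print(np.round(R_jk(J, 1.0, 3).real, 6))  # ... which this confirms
print(np.round((D1 @ D1 @ D1).real, 6))   # approximately 0, since m_1 = 3

Here \(D_{1}^{2}\) agrees with \(R_{1,3}\) while \(D_{1}^{3}\) vanishes, exactly as claimed.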

With this we now have the sought-after expansion

\[R(z) = \sum_{j = 1}^{h} \left(\frac{1}{z-\lambda_{j}} P_{j}+\sum_{k = 1}^{m_{j}-1} \frac{1}{(z-\lambda_{j})^{k+1}} D_{j}^{k}\right) \nonumber\]

along with the verification of a number of the properties laid out in the Complex Integration chapter.
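
As a final illustration (again built on the assumed matrix \(B\) from above rather than on anything prescribed here), one can compare \(R(z)\) with its partial fraction expansion at a test point away from the eigenvalues.

import numpy as np

B = np.array([[1., 0., 0.],
              [1., 1., 0.],
              [0., 0., 2.]])
I = np.eye(3)

# The blocks of the example: P_1 = R_{1,1}, D_1 = R_{1,2}, P_2 = R_{2,1}.
P1 = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 0.]])
D1 = np.array([[0., 0., 0.], [1., 0., 0.], [0., 0., 0.]])
P2 = np.array([[0., 0., 0.], [0., 0., 0.], [0., 0., 1.]])

z = 3.7 + 0.4j    # an arbitrary point away from the eigenvalues 1 and 2
lhs = np.linalg.inv(z * I - B)
rhs = P1 / (z - 1.0) + D1 / (z - 1.0) ** 2 + P2 / (z - 2.0)
print(np.max(np.abs(lhs - rhs)))   # approximately 0, up to rounding error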

