
7.3: The Inverse Laplace Transform- Complex Integration - Mathematics


If \(q\) is a rational function with poles \(\{\lambda_{j} : j = 1, \cdots, h\}\), then the inverse Laplace transform of \(q\) is

\[\mathscr{L}^{-1}(q)(t) \equiv \frac{1}{2 \pi i} \int_{C} q(z) e^{zt}\, dz \nonumber\]

where \(C\) is a curve that encloses each of the poles of \(q\). By the residue theorem,

\[\mathscr{L}^{-1}(q)(t) = \sum_{j = 1}^{h} \operatorname{res}(\lambda_{j}) \nonumber\]

Let us put this lovely formula to the test. We take our examples from discussion of the Laplace Transform and the inverse Laplace Transform. Let us first compute the inverse Laplace Transform of

\[q(z) = \frac{1}{(z+1)^{2}} \nonumber\]

According to the formula above, it is simply the residue of \(q(z)e^{zt}\) at \(z = -1\), i.e.,

\[\operatorname{res}(-1) = \lim_{z \rightarrow -1} \frac{d}{dz} e^{zt} = te^{-t} \nonumber\]

This closes the circle on the example begun in the discussion of the Laplace Transform and continued in exercise one for chapter 6.
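This residue calculation is easy to confirm with a computer algebra system. Here is a short SymPy sketch (my own, not part of the original text) computing the residue of \(q(z)e^{zt}\) at the double pole \(z = -1\):

```python
import sympy as sp

z, t = sp.symbols('z t', positive=True)
q = 1 / (z + 1)**2

# Residue of q(z) e^{zt} at the double pole z = -1
res = sp.residue(q * sp.exp(z * t), z, -1)
print(res)  # t*exp(-t)
```

This agrees with the \(te^{-t}\) obtained by hand above.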

For our next example we recall

\[\mathscr{L}(x_{1})(s) = \frac{0.19(s^2+1.5s+0.27)}{(s+1/6)^{4}(s^3+1.655s^2+0.4978s+0.0039)} \nonumber\]

from the Inverse Laplace Transform. Using numde, sym2poly, and residue (see fib4.m for details) returns

\[r_{1} = \begin{pmatrix} 0.0029 \\ 262.8394 \\ -474.1929 \\ -1.0857 \\ -9.0930 \\ -0.3326 \\ 211.3507 \end{pmatrix} \nonumber\]

and

\[p_{1} = \begin{pmatrix} -1.3565 \\ -0.2885 \\ -0.1667 \\ -0.1667 \\ -0.1667 \\ -0.1667 \\ -0.0100 \end{pmatrix} \nonumber\]

You will be asked in the exercises to show that this indeed jibes with

\[x_{1}(t) = 211.35e^{\frac{-t}{100}} - (0.0554t^3 + 4.5464t^2 + 1.085t + 474.19)e^{\frac{-t}{6}} + e^{\frac{-329t}{400}} \left(262.842 \cosh \left(\frac{\sqrt{73}t}{16}\right) + 262.836 \sinh \left(\frac{\sqrt{73}t}{16}\right)\right) \nonumber\]

achieved in the Laplace Transform via ilaplace.



Finding the Laplace transform of a function is not terribly difficult if we’ve got a table of transforms in front of us to use as we saw in the last section. What we would like to do now is go the other way.

We are going to be given a transform, \(F(s)\), and asked what function (or functions) we had originally. As you will see, this can be a more complicated and lengthy process than taking transforms. In these cases we say that we are finding the Inverse Laplace Transform of \(F(s)\) and use the following notation.

As with Laplace transforms, we’ve got the following fact to help us take the inverse transform.

Given the two Laplace transforms (F(s)) and (G(s)) then

for any constants (a) and (b).

So, we take the inverse transform of the individual transforms, put any constants back in and then add or subtract the results back up.

Let’s take a look at a couple of fairly simple inverse transforms.

  1. \(\displaystyle F\left( s \right) = \frac{6}{s} - \frac{1}{s - 8} + \frac{4}{s - 3}\)
  2. \(\displaystyle H\left( s \right) = \frac{19}{s + 2} - \frac{1}{3s - 5} + \frac{7}{s^{5}}\)
  3. \(\displaystyle F\left( s \right) = \frac{6s}{s^{2} + 25} + \frac{3}{s^{2} + 25}\)
  4. \(\displaystyle G\left( s \right) = \frac{8}{3s^{2} + 12} + \frac{3}{s^{2} - 49}\)

We’ve always felt that the key to doing inverse transforms is to look at the denominator and try to identify what you’ve got based on that. If there is only one entry in the table that has that particular denominator, the next step is to make sure the numerator is correctly set up for the inverse transform process. If it isn’t, correct it (this is always easy to do) and then take the inverse transform.

If there is more than one entry in the table that has a particular denominator, then the numerators of each will be different, so go up to the numerator and see which one you've got. Correct the numerator if needed to get it into the proper form, and then take the inverse transform.

So, with this advice in mind let’s see if we can take some inverse transforms.

From the denominator of the first term it looks like the first term is just a constant. The correct numerator for this term is a “1” so we’ll just factor the 6 out before taking the inverse transform. The second term appears to be an exponential with (a = 8) and the numerator is exactly what it needs to be. The third term also appears to be an exponential, only this time (a = 3) and we’ll need to factor the 4 out before taking the inverse transforms.

So, with a little more detail than we’ll usually put into these,

The first term in this case looks like an exponential with (a = - 2) and we’ll need to factor out the 19. Be careful with negative signs in these problems, it’s very easy to lose track of them.

The second term almost looks like an exponential, except that it’s got a (3s) instead of just an (s) in the denominator. It is an exponential, but in this case, we’ll need to factor a 3 out of the denominator before taking the inverse transform.

The denominator of the third term appears to be #3 in the table with (n = 4). The numerator however, is not correct for this. There is currently a 7 in the numerator and we need a (4! = 24) in the numerator. This is very easy to fix. Whenever a numerator is off by a multiplicative constant, as in this case, all we need to do is put the constant that we need in the numerator. We will just need to remember to take it back out by dividing by the same constant.

So, let’s first rewrite the transform.

So, what did we do here? We factored the 19 out of the first term. We factored the 3 out of the denominator of the second term since it can’t be there for the inverse transform and in the third term we factored everything out of the numerator except the 4! since that is the portion that we need in the numerator for the inverse transform process.

Let’s now take the inverse transform.

In this part we’ve got the same denominator in both terms and our table tells us that we’ve either got #7 or #8. The numerators will tell us which we’ve actually got. The first one has an (s) in the numerator and so this means that the first term must be #8 and we’ll need to factor the 6 out of the numerator in this case. The second term has only a constant in the numerator and so this term must be #7, however, in order for this to be exactly #7 we’ll need to multiply/divide a 5 in the numerator to get it correct for the table.

Taking the inverse transform gives,

\[f\left( t \right) = 6\cos \left( 5t \right) + \frac{3}{5}\sin \left( 5t \right)\]

In this case the first term will be a sine once we factor a 3 out of the denominator, while the second term appears to be a hyperbolic sine (#17). Again, be careful with the difference between these two. Both of the terms will also need to have their numerators fixed up. Here is the transform once we’re done rewriting it.

Notice that in the first term we took advantage of the fact that we could get the 2 in the numerator that we needed by factoring the 8. The inverse transform is then,

\[g\left( t \right) = \frac{4}{3}\sin \left( 2t \right) + \frac{3}{7}\sinh \left( 7t \right)\]
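Both of these table lookups are easy to confirm by transforming the claimed answers forward with SymPy; a quick check (my own sketch) against the transforms in parts (c) and (d):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)

# The two inverse transforms claimed above
f = 6*sp.cos(5*t) + sp.Rational(3, 5)*sp.sin(5*t)
g = sp.Rational(4, 3)*sp.sin(2*t) + sp.Rational(3, 7)*sp.sinh(7*t)

# Transform them forward and compare with the original F(s) and G(s)
F = sp.laplace_transform(f, t, s, noconds=True)
G = sp.laplace_transform(g, t, s, noconds=True)

print(sp.cancel(F - (6*s/(s**2 + 25) + 3/(s**2 + 25))))  # 0
print(sp.cancel(G - (8/(3*s**2 + 12) + 3/(s**2 - 49))))  # 0
```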

So, probably the best way to identify the transform is by looking at the denominator. If there is more than one possibility use the numerator to identify the correct one. Fix up the numerator if needed to get it into the form needed for the inverse transform process. Finally, take the inverse transform.

Let’s do some slightly harder problems. These are a little more involved than the first set.

  1. \(\displaystyle F\left( s \right) = \frac{6s - 5}{s^{2} + 7}\)
  2. \(\displaystyle F\left( s \right) = \frac{1 - 3s}{s^{2} + 8s + 21}\)
  3. \(\displaystyle G\left( s \right) = \frac{3s - 2}{2s^{2} - 6s - 2}\)
  4. \(\displaystyle H\left( s \right) = \frac{s + 7}{s^{2} - 3s - 10}\)

From the denominator of this one it appears that it is either a sine or a cosine. However, the numerator doesn’t match up to either of these in the table. A cosine wants just an (s) in the numerator with at most a multiplicative constant, while a sine wants only a constant and no (s) in the numerator.

We’ve got both in the numerator. This is easy to fix however. We will just split up the transform into two terms and then do inverse transforms.

Do not get too used to always getting the perfect squares in sines and cosines that we saw in the first set of examples. More often than not (at least in my class) they won’t be perfect squares!

In this case there are no denominators in our table that look like this. We can however make the denominator look like one of the denominators in the table by completing the square on the denominator. So, let’s do that first.

Recall that in completing the square you take half the coefficient of the (s), square this, and then add and subtract the result to the polynomial. After doing this the first three terms should factor as a perfect square.

So, the transform can be written as the following.

Okay, with this rewrite it looks like we’ve got #19 and/or #20’s from our table of transforms. However, note that in order for it to be a #19 we want just a constant in the numerator and in order to be a #20 we need an (s – a) in the numerator. We’ve got neither of these, so we’ll have to correct the numerator to get it into proper form.

In correcting the numerator always get the \(s - a\) first. This is the important part. We will also need to be careful of the 3 that sits in front of the \(s\). One way to take care of this is to break the term into two pieces, factor the 3 out of the second, and then fix up the numerator of this term. This would work; however, it would put three terms into our answer when there are really only two terms.

So, we will leave the transform as a single term and correct it as follows,

We needed an \(s + 4\) in the numerator, so we put that in. We just needed to make sure to take the 4 back out by subtracting it back off. Also, because of the 3 multiplying the \(s\) we needed to do all this inside a set of parentheses. Then we partially multiplied the 3 through the second term and combined the constants. With the transform in this form, we can break it up into two transforms, each of which is in the table, and so we can take their inverse transforms,
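The completing-the-square work in part (b) can be sanity-checked by transforming the resulting answer forward. A small SymPy sketch (mine), with the inverse written out from the rewrite \(s^{2} + 8s + 21 = (s + 4)^{2} + 5\) and \(1 - 3s = -3(s + 4) + 13\):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)

# Claimed inverse of (1 - 3s)/(s^2 + 8s + 21)
f = sp.exp(-4*t) * (13/sp.sqrt(5) * sp.sin(sp.sqrt(5)*t)
                    - 3*sp.cos(sp.sqrt(5)*t))

# Forward transform should reproduce the original F(s)
F = sp.laplace_transform(f, t, s, noconds=True)
print(sp.simplify(F - (1 - 3*s)/(s**2 + 8*s + 21)))  # 0
```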

This one is similar to the last one. We just need to be careful with completing the square. The first thing that we should do is factor a 2 out of the denominator, then complete the square. Remember that when completing the square a coefficient of 1 on the \(s^{2}\) term is needed! So, here's the work for this transform.

So, it looks like we’ve got #21 and #22 with a corrected numerator. Here’s the work for that and the inverse transform.

In correcting the numerator of the second term, notice that I only put in the square root since we already had the “over 2” part of the fraction that we needed in the numerator.

This one appears to be similar to the previous two, but it actually isn’t. The denominators in the previous two couldn’t be easily factored. In this case the denominator does factor and so we need to deal with it differently. Here is the transform with the factored denominator.

The denominator of this transform seems to suggest that we’ve got a couple of exponentials, however in order to be exponentials there can only be a single term in the denominator and no (s)’s in the numerator.

To fix this we will need to do partial fractions on this transform. In this case the partial fraction decomposition will be

Don’t remember how to do partial fractions? In this example we’ll show you one way of getting the values of the constants and after this example we’ll review how to get the correct form of the partial fraction decomposition.

Okay, so let’s get the constants. There is a method for finding the constants that will always work, however it can lead to more work than is sometimes required. Eventually, we will need that method, however in this case there is an easier way to find the constants.

Regardless of the method used, the first step is to actually add the two terms back up. This gives the following.

Now, this needs to be true for any (s) that we should choose to put in. So, since the denominators are the same we just need to get the numerators equal. Therefore, set the numerators equal.

\[s + 7 = A\left( s - 5 \right) + B\left( s + 2 \right)\]

Again, this must be true for ANY value of (s) that we want to put in. So, let’s take advantage of that. If it must be true for any value of (s) then it must be true for (s = - 2), to pick a value at random. In this case we get,

\[5 = A\left( -7 \right) + B\left( 0 \right)\hspace{0.25in} \Rightarrow \hspace{0.25in} A = -\frac{5}{7}\]

We found \(A\) by appropriately picking \(s\). We can find \(B\) in the same way if we choose \(s = 5\).

\[12 = A\left( 0 \right) + B\left( 7 \right)\hspace{0.25in} \Rightarrow \hspace{0.25in} B = \frac{12}{7}\]

This will not always work, but when it does it will usually simplify the work considerably.
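SymPy's `apart` reproduces this "pick convenient values of \(s\)" result directly; a minimal sketch for the transform in this part:

```python
import sympy as sp

s = sp.symbols('s')
H = (s + 7) / ((s + 2)*(s - 5))

# Partial fraction decomposition: A = -5/7 on (s + 2), B = 12/7 on (s - 5)
print(sp.apart(H))
```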

So, with these constants the transform becomes,

We can now easily do the inverse transform to get,

The last part of this example needed partial fractions to get the inverse transform. When we finally get back to differential equations and we start using Laplace transforms to solve them, you will quickly come to understand that partial fractions are a fact of life in these problems. Almost every problem will require partial fractions to one degree or another.

Note that we could have done the last part of this example as we had done the previous two parts. If we had we would have gotten hyperbolic functions. However, recalling the definition of the hyperbolic functions we could have written the result in the form we got from the way we worked our problem. However, most students have a better feel for exponentials than they do for hyperbolic functions and so it’s usually best to just use partial fractions and get the answer in terms of exponentials. It may be a little more work, but it will give a nicer (and easier to work with) form of the answer.

Be warned that in my class I’ve got a rule that if the denominator can be factored with integer coefficients then it must be.

So, let’s remind you how to get the correct partial fraction decomposition. The first step is to factor the denominator as much as possible. Then for each term in the denominator we will use the following table to get a term or terms for our partial fraction decomposition.

Factor in denominator: \(ax + b\)
Term in partial fraction decomposition: \(\displaystyle \frac{A}{ax + b}\)

Factor in denominator: \(\left( ax + b \right)^{k}\)
Term in partial fraction decomposition: \(\displaystyle \frac{A_{1}}{ax + b} + \frac{A_{2}}{\left( ax + b \right)^{2}} + \cdots + \frac{A_{k}}{\left( ax + b \right)^{k}}\)

Factor in denominator: \(ax^{2} + bx + c\)
Term in partial fraction decomposition: \(\displaystyle \frac{Ax + B}{ax^{2} + bx + c}\)

Factor in denominator: \(\left( ax^{2} + bx + c \right)^{k}\)
Term in partial fraction decomposition: \(\displaystyle \frac{A_{1}x + B_{1}}{ax^{2} + bx + c} + \frac{A_{2}x + B_{2}}{\left( ax^{2} + bx + c \right)^{2}} + \cdots + \frac{A_{k}x + B_{k}}{\left( ax^{2} + bx + c \right)^{k}}\)

Notice that the first and third cases are really special cases of the second and fourth cases respectively.

So, let’s do a couple more examples to remind you how to do partial fractions.

  1. \(\displaystyle G\left( s \right) = \frac{86s - 78}{\left( s + 3 \right)\left( s - 4 \right)\left( 5s - 1 \right)}\)
  2. \(\displaystyle F\left( s \right) = \frac{2 - 5s}{\left( s - 6 \right)\left( s^{2} + 11 \right)}\)
  3. \(\displaystyle G\left( s \right) = \frac{25}{s^{3}\left( s^{2} + 4s + 5 \right)}\)

Here’s the partial fraction decomposition for this part.

Now, this time we won’t go into quite the detail as we did in the last example. We are after the numerator of the partial fraction decomposition and this is usually easy enough to do in our heads. Therefore, we will go straight to setting numerators equal.

\[86s - 78 = A\left( s - 4 \right)\left( 5s - 1 \right) + B\left( s + 3 \right)\left( 5s - 1 \right) + C\left( s + 3 \right)\left( s - 4 \right)\]

As with the last example, we can easily get the constants by correctly picking values of (s).

So, the partial fraction decomposition for this transform is,

Now, in order to actually take the inverse transform we will need to factor a 5 out of the denominator of the last term. The corrected transform as well as its inverse transform is.

So, for the first time we’ve got a quadratic in the denominator. Here’s the decomposition for this part.

Setting numerators equal gives,

\[2 - 5s = A\left( s^{2} + 11 \right) + \left( Bs + C \right)\left( s - 6 \right)\]

Okay, in this case we could use (s = 6) to quickly find (A), but that’s all it would give. In this case we will need to go the “long” way around to getting the constants. Note that this way will always work but is sometimes more work than is required.

The “long” way is to completely multiply out the right side and collect like terms.

In order for these two to be equal the coefficients of the \(s^{2}\), \(s\) and the constants must all be equal. So, setting coefficients equal gives the following system of equations that can be solved.

Notice that we used \(s^{0}\) to denote the constants. This is habit on my part and isn't really required; it's just what I'm used to doing. Also, the coefficients are fairly messy fractions in this case. Get used to that. They will often be like this when we get back into solving differential equations.

There is a way to make our life a little easier as well with this. Since all of the fractions have a denominator of 47 we’ll factor that out as we plug them back into the decomposition. This will make dealing with them much easier. The partial fraction decomposition is then,

The inverse transform is then.

With this last part do not get excited about the \(s^{3}\). We can think of this term as

\[s^{3} = \left( s - 0 \right)^{3}\]

and it becomes a linear term to a power. So, the partial fraction decomposition is

Setting numerators equal and multiplying out gives.

Setting coefficients equal gives the following system.

\[\left. \begin{aligned} s^{4} &: & A + D &= 0 \\ s^{3} &: & 4A + B + E &= 0 \\ s^{2} &: & 5A + 4B + C &= 0 \\ s^{1} &: & 5B + 4C &= 0 \\ s^{0} &: & 5C &= 25 \end{aligned} \right\} \hspace{0.25in} \Rightarrow \hspace{0.25in} A = \frac{11}{5},\ B = -4,\ C = 5,\ D = -\frac{11}{5},\ E = -\frac{24}{5}\]

This system looks messy, but it’s easier to solve than it might look. First, we get (C) for free from the last equation. We can then use the fourth equation to find (B). The third equation will then give (A), etc.
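As noted, the system is triangular from the bottom up; a quick SymPy solve (my sketch) confirms the constants:

```python
import sympy as sp

A, B, C, D, E = sp.symbols('A B C D E')
eqs = [A + D,            # s^4 coefficient
       4*A + B + E,      # s^3 coefficient
       5*A + 4*B + C,    # s^2 coefficient
       5*B + 4*C,        # s^1 coefficient
       5*C - 25]         # s^0 coefficient
sol = sp.solve(eqs, [A, B, C, D, E])
print(sol)  # A = 11/5, B = -4, C = 5, D = -11/5, E = -24/5
```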

When plugging into the decomposition we’ll get everything with a denominator of 5, then factor that out as we did in the previous part in order to make things easier to deal with.

Note that we also factored a minus sign out of the last two terms. To complete this part we'll need to complete the square on the latter term and fix up a couple of numerators. Here's that work.

The inverse transform is then.

So, one final time. Partial fractions are a fact of life when using Laplace transforms to solve differential equations. Make sure that you can deal with them.


Integral transforms composition method for transmutations

6.1.2 What is ITCM and how to use it?

The formal algorithm of the ITCM is the following. Let us take as input a pair of arbitrary operators \(A\), \(B\), and also, connected with them, generalized Fourier transforms \(F_{A}\), \(F_{B}\), which are invertible and act by the formulas

where \(t\) is a dual variable and \(g\) is an arbitrary function with suitable properties. It is often convenient to choose \(g(t) = -t^{2}\) or \(g(t) = -t^{\alpha}\), \(\alpha \in \mathbb{R}\).

Then the essence of the ITCM is to obtain formally a pair of transmutation operators P and S as the method output by the following formulas:

with arbitrary function w ( t ) . When P and S are transmutation operators intertwining A and B,

A formal check of (6.3) can be obtained by direct substitution. The main difficulty is calculating the compositions (6.2) in explicit integral form, as well as choosing the domains of the operators P and S. Note also that the formulas in (6.2) are formal, and a situation is possible in which one operator, say P, exists and is generated by the formula P = F A − 1 w ( t ) F B , but its inverse operator S cannot be constructed by the formula F B − 1 1 w ( t ) F A because this integral, for example, diverges. In that case, if an inverse operator for P is needed, some regularization method must be used.

Let us list the main advantages of the ITCM.

Simplicity – many classes of transmutations are obtained by explicit formulas from elementary basic blocks, which are classical integral transforms.

The ITCM gives by a unified approach all previously explicitly known classes of transmutations.

The ITCM gives by a unified approach many new classes of transmutations for different operators.

The ITCM gives a unified approach to obtain both direct and inverse transmutations in the same composition form.

The ITCM directly leads to estimates of norms of direct and inverse transmutations using known norm estimates for classical integral transforms on different functional spaces.

The ITCM directly leads to connection formulas for solutions to perturbed and unperturbed differential equations.

One obstacle to applying the ITCM is the following. Classical integral transforms usually act on standard spaces like L 2 , L p , C k , variable exponent Lebesgue spaces [465] , and so on. But for the application of transmutations to differential equations we usually need some further conditions to hold, say, at zero or at infinity. For these problems we may first construct a transmutation by the ITCM and then extend it to the needed function classes.

Let us stress that formulas of the type (6.2) are of course not new for integral transforms and their applications to differential equations. But the ITCM is new when applied to transmutation theory! In other fields of integral transform and differential equation theory, compositions (6.2) with the classical Fourier transform lead to the famous pseudodifferential operators with symbol function w ( t ) . For the classical Fourier transform and the function w ( t ) = ( ± i t ) − s we get fractional integrals on the whole real axis, for w ( t ) = | t | − s we get the Riesz potential, for w ( t ) = ( 1 + t 2 ) − s in (6.2) we get the Bessel potential, and for w ( t ) = ( 1 ± i t ) − s we obtain modified Bessel potentials [494] .

The choice for the ITCM algorithm

leads to the generalized translation operators of Delsarte [315,319,321] . For this case we choose in the ITCM algorithm defined by (6.1) – (6.2) the above values (6.4) , in which B ν is the Bessel operator (1.87) , F ν is the Hankel transform (1.56) , and j ν is the normalized (or “small”) Bessel function (1.19) . In the same manner, other families of operators commuting with a given one may be obtained by the ITCM for the choice A = B , F A = F B with arbitrary functions g ( t ) , w ( t ) (generalized translation commutes with the Bessel operator). If the differential operator A is chosen as the quantum oscillator and the connected integral transform F A as the fractional or quadratic Fourier transform [437] , we may also obtain transmutations for this case by the ITCM [230] . It is possible to apply the ITCM instead of classical approaches to obtain fractional powers of Bessel operators [230,515,516,527,531] .

Direct applications of the ITCM to multi-dimensional differential operators are obvious: in this case t is a vector and g ( t ) , w ( t ) are vector functions in (6.1) – (6.2) . Unfortunately, for this case we know, and may derive, new explicit transmutations only in simple special cases. But among them are well-known and interesting classes of potentials. Using the ITCM by (6.1) – (6.2) with the Fourier transform, when w ( t ) is a positive definite quadratic form we come to elliptic Riesz potentials [475,494] ; when w ( t ) is an indefinite quadratic form we come to hyperbolic Riesz potentials [426,475,494] ; when w ( x , t ) = ( | x | 2 − i t ) − α / 2 we come to parabolic potentials [494] . Using the ITCM by (6.1) – (6.2) with the Hankel transform, when w ( t ) is a quadratic form we come to elliptic Riesz B-potentials [206,344] or hyperbolic Riesz B-potentials [503] . For all the abovementioned potentials we need to use distribution theory and consider convolutions of distributions for the ITCM; for inversion of such potentials we need some cutting and approximation procedures (cf. [426,503] ). For this class of problems it is appropriate to use Schwartz and/or Lizorkin spaces for test functions and their dual spaces for distributions.

So we may conclude that the ITCM we consider in this chapter for obtaining transmutations is effective: it is connected to many known methods and problems, it gives all known classes of explicit transmutations, and it works as a tool to construct new classes of transmutations. Application of the ITCM requires the following three steps.

Step 1. For a given pair of operators A , B and connected integral transforms F A , F B , define and calculate a pair of transmutations P , S by basic formulas (6.1) – (6.2) .

Step 2. Derive exact conditions and find classes of functions for which transmutations obtained by step 1 satisfy proper intertwining properties.

Step 3. Apply now correctly defined transmutations by steps 1 and 2 on proper classes of functions to derive connection formulas for solutions of differential equations.

Based on this plan, the next part of the chapter is organized as follows. First we illustrate step 1 of the above plan and apply the ITCM to obtain some new and known transmutations. For step 2 we prove a general theorem for the case of Bessel operators; this is enough to give strict definitions of the necessary transmutations and start solving problems with them. After that we give an example illustrating step 3, applying transmutations obtained by the ITCM to derive formulas for solutions of a model differential equation.


We first saw these properties in the Table of Laplace Transforms.

Property 1: Linearity Property

Property 2: Shifting Property

If `Lap^-1G(s) = g(t)`, then `Lap^-1G(s - a) = e^(at)g(t)`.
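The shifting property is easy to see in action with SymPy. A small sketch (my own example, with `G(s) = 1/s^2`, i.e. `g(t) = t`, shifted by `a = 3`):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)

# g(t) = t has transform G(s) = 1/s^2; shifting s -> s - 3 should multiply g by e^(3t)
g_shifted = sp.inverse_laplace_transform(1/(s - 3)**2, s, t)
print(g_shifted)  # t*exp(3*t)
```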

Property 3

Property 4

Examples

Find the inverse of the following transforms and sketch the functions so obtained.

(There is no need to use Property (3) above.)

So the inverse Laplace Transform is given by:

The graph of `g(t)` is given by:

and use rule (4) from above:

Here is the graph of our solution:

`= sin 3t cos((3pi)/2) - cos 3t sin((3pi)/2)`

So the Inverse Laplace transform is given by:

The graph of the function (showing that the switch is turned on at `t=pi/2`):

Our question involves the product of an exponential expression and a function of s, so we need to use Property (4), which says:

Our exponential expression in the question is `e^(-s)`, and since `e^(-as) = e^(-s)` in this case, then `a = 1`.

Then, using function notation,

Putting it all together, we can write the inverse Laplace transform as:

So the inverse Laplace Transform is given by:

The graph of our function (which has value 0 until t = 1) is as follows:
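Property (4) can also be illustrated with SymPy. Since the original transform is not reproduced in this copy, the sketch below uses a hypothetical `F(s) = 1/s^2` multiplied by the same `e^(-s)` factor (so `a = 1`):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)

# Hypothetical transform e^(-s)/s^2: a = 1 and f(t) = t
f = sp.inverse_laplace_transform(sp.exp(-s)/s**2, s, t)
print(f)  # (t - 1)*Heaviside(t - 1), i.e. a ramp switched on at t = 1
```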


1. Laplace transformation-Conditions and existence

Definitions 1. Transformation

A “transformation” is an operation which converts a mathematical expression to a different but equivalent form.

Let a function f(t) be continuous and defined for positive values of ‘t’. The Laplace transformation of f(t) associates a function F(s) defined by the equation

\[F(s) = \int_{0}^{\infty} e^{-st} f(t)\, dt\]
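As a concrete instance of this definition, SymPy can carry out the transform symbolically; a minimal sketch (my own example, f(t) = e^(2t)):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)

# Transform of f(t) = e^(2t): F(s) = 1/(s - 2), valid for s > 2
F = sp.laplace_transform(sp.exp(2*t), t, s, noconds=True)
print(F)  # 1/(s - 2)
```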

2. Transforms of Elementary functions-Basic Properties

2.1 Problems Based On Transforms Of Elementary Functions- Basic Properties

3. Transforms Of Derivatives And Integrals Of Functions

3.1 Transform of integrals

3.2 Derivatives of transform

3.3 Problems Based On Derivatives Of Transform

4 Transforms Of Unit Step Function And Impulse Function

4.1 Problems Based On Unit Step Function (Or) Heaviside’s Unit Step Function

Define the unit step function. Solution: The unit step function, also called Heaviside’s unit function, is defined by u(t − a) = 0 for t < a and u(t − a) = 1 for t ≥ a.

5 Transform Of Periodic Functions Definition:

(Periodic) A function f(x) is said to be “periodic” if and only if f(x+p) = f(x) is true for some value of p and every value of x. The smallest positive value of p for which this equation is true for every value of x will be called the period of the function.
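The standard transform formula for a periodic function, F(s) = (1/(1 − e^(−ps))) ∫₀ᵖ e^(−st) f(t) dt, can be checked numerically. A sketch (my own, using the full-wave-rectified sine f(t) = |sin t|, which has period p = π):

```python
import numpy as np
from scipy.integrate import quad

s, p = 2.0, np.pi            # sample value of s; |sin t| has period pi
f = lambda t: abs(np.sin(t))

# Periodic-function formula: integrate over one period only
one_period, _ = quad(lambda t: np.exp(-s*t) * f(t), 0, p)
via_formula = one_period / (1 - np.exp(-s*p))

# Direct (truncated) defining integral, split across many periods for accuracy
direct = sum(quad(lambda t: np.exp(-s*t) * f(t), k*p, (k+1)*p)[0]
             for k in range(30))

print(via_formula, direct)  # the two values agree
```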

6 Inverse Laplace Transform

a. If L[f(t)] = F(s), then L–1[F(s)] = f(t), where L–1 is called the inverse Laplace transform operator. b. If F1(s) and F2(s) are the Laplace transforms of f(t) and g(t) respectively, then L–1[aF1(s) + bF2(s)] = a f(t) + b g(t).

7. Convolution theorem

8. Initial and final value theorems

9. Solution of linear ODE of Second Order with constant coefficients



The Inverse Laplace Transform by Partial Fraction Expansion

This technique uses Partial Fraction Expansion to split up a complicated fraction into forms that are in the Laplace Transform table. As you read through this section, you may find it helpful to refer to the review section on partial fraction expansion techniques. The text below assumes you are familiar with that material.

Distinct Real Roots

Consider first an example with distinct real roots.

Example: Distinct Real Roots

Find the inverse Laplace Transform of:

Solution:
We can find the two unknown coefficients using the "cover-up" method.

(where U(t) is the unit step function) or expressed another way

The unit step function is equal to zero for t<0 and equal to one for t>0. At t=0 the value is generally taken to be either ½ or 1; the choice does not matter for us.

The last two expressions are somewhat cumbersome. Unless there is confusion about the result, we will assume that all of our results are implicitly 0 for t<0, and we will write the result as

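Since the transform itself did not survive in this copy, here is the "cover-up" method on a hypothetical example with distinct real roots, F(s) = (3s + 5)/((s + 1)(s + 4)):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
F = (3*s + 5) / ((s + 1)*(s + 4))   # hypothetical stand-in example

# Cover-up: multiply by each factor, cancel, and evaluate at that factor's root
A = sp.cancel(F * (s + 1)).subs(s, -1)   # 2/3
B = sp.cancel(F * (s + 4)).subs(s, -4)   # 7/3

f = A*sp.exp(-t) + B*sp.exp(-4*t)        # the resulting inverse transform
print(A, B)
```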
Repeated Real Roots

Consider next an example with repeated real roots (in this case at the origin, s=0).

Example: Repeated Real Roots

Find the inverse Laplace Transform of the function F(s).

Solution:
We can find two of the unknown coefficients using the "cover-up" method.

We find the other term using cross-multiplication:

Equating like powers of "s" gives us three equations, one each for the \(s^{2}\), \(s^{1}\), and \(s^{0}\) coefficients.
We could have used these relationships to determine A1, A2, and A3. But A1 and A3 were easily found using the "cover-up" method. The s2 relationship tells us that A2=-0.25, so

(where, again, it is implicit that f(t)=0 when t<0).

Many texts use a method based upon differentiation of the fraction when there are repeated roots. The technique involves differentiation of ratios of polynomials, which is prone to errors.

Complex Roots

Another case that often comes up is that of complex conjugate roots. Consider the fraction:

The second term in the denominator cannot be factored into real terms. This leaves us with two possibilities - either accept the complex roots, or find a way to include the second order term.

Example: Complex Conjugate Roots (Method 1)

Using the complex (first order) roots

Simplify the function F(s) so that it can be looked up in the Laplace Transform table.

Solution:
If we use complex roots, we can expand the fraction as we did before. This is not typically the way you want to proceed if you are working by hand, but may be easier for computer solutions (where complex numbers are handled as easily as real numbers). To perform the expansion, continue as before.

Note that A2 and A3 must be complex conjugates of each other since they are equivalent except for the sign on the imaginary part. Performing the required calculations:

The inverse Laplace Transform is given below (Method 1).

Example: Complex Conjugate Roots (Method 2)

Method 2 - Using the second order polynomial

Simplify the function F(s) so that it can be looked up in the Laplace Transform table.

Solution:
Another way to expand the fraction without resorting to complex numbers is to perform the expansion as follows.

Note that the numerator of the second term is no longer a constant, but is instead a first order polynomial. From above (or using the cover-up method) we know that A=-0.2. We can find the quantities B and C from cross-multiplication.

If we equate like powers of "s" we get

order | left-side coefficient | right-side coefficient
2nd (s 2 ) | 0 | A + B
1st (s 1 ) | 1 | 4A + 5B + C
0th (s 0 ) | 3 | 5A + 5C

Since we already know that A=-0.2, the first expression (0=A+B) tells us that B=0.2, and the last expression (3=5A+5C) tells us that C=0.8. We can use the middle expression (1=4A+5B+C) to check our calculations. Finally, we get
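The same bookkeeping can be done exactly with rational arithmetic. A small sketch follows; note that the fraction F(s) = (s+3)/((s+5)(s²+4s+5)) used in the comments is inferred from the coefficient equations above, since the page does not reproduce F(s) itself.

```python
from fractions import Fraction

# Exact solution of the "equate like powers" system above (consistent with
# an assumed F(s) = (s+3)/((s+5)(s^2+4s+5)), inferred from the table):
#   s^2:  0 = A + B
#   s^1:  1 = 4A + 5B + C
#   s^0:  3 = 5A + 5C
# A is known from the cover-up method (A = -0.2); B and C then follow.
A = Fraction(-1, 5)
B = -A                     # from 0 = A + B
C = Fraction(3, 5) - A     # from 3 = 5A + 5C
assert 4*A + 5*B + C == 1  # the middle equation, used as a check
print(A, B, C)             # -1/5 1/5 4/5
```

Using exact fractions instead of floats makes the consistency check in the middle equation exact rather than approximate.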

The inverse Laplace Transform is given below (Method 2).

Some Comments on the two methods for handling complex roots

The two previous examples have demonstrated two techniques for performing a partial fraction expansion of a term with complex roots. The first technique was a simple extension of the rule for dealing with distinct real roots. It is conceptually simple, but can be difficult when working by hand because of the need for complex numbers; it is easily done by computer. The second technique is easy to do by hand, but is conceptually a bit more difficult. It is easy to show that the two resulting partial fraction representations are equivalent to each other. Let's first examine the result from Method 1 (using two techniques).

We start with Method 1 with no particular simplifications.

Method 1 - brute force technique

We now repeat this calculation, but in the process we develop a general technique (one that proves to be useful when using MATLAB to help with the partial fraction expansion). We know that F(s) can be represented as a partial fraction expansion as shown below:

We know that A2 and A3 are complex conjugates of each other:

tan⁻¹ is the arctangent. On computers it is often implemented as "atan". The atan function can give incorrect results because, typically, it is written so that the result is always in quadrants I or IV, and never in quadrants II and III. To ensure accuracy, use a function that corrects for this; it is usually called "atan2". Also be careful about using degrees and radians as appropriate.

We can now find the inverse transform of the complex conjugate terms by treating them as simple first order terms (with complex roots).

In this expression M=2K. The frequency (ω) and decay coefficient (σ) are determined from the root of the denominator of A2 (in this case the root of the term is at s=-2+j; this is where the term is equal to zero). The frequency is the imaginary part of the root (in this case, ω=1), and the decay coefficient is the real part of the root (in this case, σ=-2).

Using the cover-up method (or, more likely, a computer program) we get

It is easy to show that the final result is equivalent to that previously found, i.e.,

While this method is somewhat difficult to do by hand, it is very convenient to do by computer. This is the approach used on the page that shows MATLAB techniques.
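A computer sketch of this residue approach is below. As before, the fraction F(s) = (s+3)/((s+5)(s²+4s+5)) is an inference from the coefficient table earlier on this page, not something the page states explicitly; the structure of the calculation is what matters.

```python
import cmath

# Method 1 by residues, sketched for the assumed fraction
# F(s) = (s+3)/((s+5)(s^2+4s+5)); the quadratic's roots are s = -2 ± j.
def F_num(s):
    return s + 3

poles = [-5, complex(-2, 1), complex(-2, -1)]

def residue(p):
    # For a simple pole p, res = N(p) / product of (p - q) over other poles q.
    out = complex(F_num(p))
    for q in poles:
        if q != p:
            out /= (p - q)
    return out

A1, A2, A3 = (residue(p) for p in poles)
assert abs(A2 - A3.conjugate()) < 1e-12   # complex-conjugate pair, as claimed
assert abs(A1 + A2 + A3) < 1e-12          # residues sum to 0 (numerator degree is low)
M, phi = 2 * abs(A2), cmath.phase(A2)     # amplitude and phase for M e^{sigma t} cos(omega t + phi)
```

The last line is exactly the M and φ conversion described in the text: the pair of conjugate residues collapses into a single damped cosine term.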

Finally we present Method 2, a technique that is easier to work with when solving problems by hand (for homework or on exams) but is less useful when using MATLAB.

Method 2 - Completing the square

Thus it has been shown that the two methods yield the same result. Use Method 1 with MATLAB and use Method 2 when solving problems with pencil and paper.
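The completing-the-square step at the heart of Method 2 is mechanical enough to state as a two-line helper. This is a sketch; the quadratic s²+4s+5 used to exercise it comes from the complex-roots example above (roots at s = -2 ± j).

```python
# Completing the square: s^2 + b s + c = (s + b/2)^2 + (c - b^2/4).
# The shift gives the decay coefficient and the remainder gives omega^2.
def complete_square(b, c):
    h = b / 2
    k = c - h * h
    return h, k          # s^2 + b s + c == (s + h)^2 + k

# The quadratic from the complex-roots example, s^2 + 4s + 5:
h, k = complete_square(4, 5)
assert (h, k) == (2.0, 1.0)   # (s+2)^2 + 1, i.e. sigma = -2, omega = 1
```

The returned pair (h, k) maps directly onto the "generic decaying oscillatory" table entry: the decay rate is -h and the oscillation frequency is √k.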

Example - Combining multiple expansion methods

Find the inverse Laplace Transform of

Solution:
The fraction shown has a second order term in the denominator that cannot be reduced to first order real terms. As discussed in the page describing partial fraction expansion, we'll use two techniques. The first technique involves expanding the fraction while retaining the second order term with complex roots in the denominator. The second technique entails "Completing the Square."

Since we have a repeated root, let's cross-multiply to get

Then equating like powers of s

Starting at the last equation

The last term is not quite in the form that we want it, but by completing the square we get

Now all of the terms are in forms that are in the Laplace Transform Table (the last term is the entry "generic decaying oscillatory").

Example - Repeat Previous Example, Using Brute Force

We repeat the previous example, but use a brute force technique. You will see that this is harder to do when solving a problem manually, but is the technique used by MATLAB. It is important to be able to interpret the MATLAB solution.

Find the inverse Laplace Transform of

Solution:
We can express this as four terms, including two complex terms (with A3=A4*)

Cross-multiplying we get (using the fact that (s+1-2j)(s+1+2j) = (s²+2s+5))

Then equating like powers of s

We could solve by hand, or use MATLAB:

We will use the notation derived above (Method 1 - a more general technique). The root of the denominator of the A3 term in the partial fraction expansion is at s=-1+2j (i.e., the denominator goes to 0 when s=-1+2j), and the magnitude and angle of A3 follow from the calculation above (the angle of A3 is 225°). So, with M=2|A3|, φ=225°, ω=2, and σ=-1, solving for f(t) we get

This expression is equivalent to the one obtained in the previous example.

Order of numerator polynomial equals order of denominator

When the Laplace domain function is not strictly proper (i.e., the order of the numerator is not less than that of the denominator) we cannot immediately apply the techniques described above.

Example: Order of Numerator Equals Order of Denominator

Find the inverse Laplace Transform of the function F(s).

Solution:
For the fraction shown below, the order of the numerator polynomial is not less than that of the denominator polynomial; therefore, we first perform long division.

Now we can express the fraction as a constant plus a strictly proper ratio of polynomials.

By "strictly proper" we mean that the order of the denominator polynomial is greater than that of the numerator polynomial.

Using the cover up method to get A1 and A2 we get
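Polynomial long division on coefficient lists is short enough to sketch in code. Since the page's actual F(s) is not reproduced here, the example fraction (s²+6s+8)/(s²+4s+3) is made up purely for illustration.

```python
# Polynomial long division for a non-strictly-proper F(s), sketched on a
# hypothetical example:
#   F(s) = (s^2 + 6s + 8)/(s^2 + 4s + 3) = 1 + (2s + 5)/(s^2 + 4s + 3)
def polydiv(num, den):
    """Divide polynomials given as coefficient lists, highest power first."""
    num = list(num)
    quot = []
    while len(num) >= len(den):
        c = num[0] / den[0]          # leading-coefficient ratio
        quot.append(c)
        for i, d in enumerate(den):  # subtract c * den from the front of num
            num[i] -= c * d
        num.pop(0)                   # leading term is now zero; drop it
    return quot, num                 # quotient, remainder

q, r = polydiv([1, 6, 8], [1, 4, 3])
assert q == [1.0] and r == [2.0, 5.0]
# The remainder term (2s+5)/((s+1)(s+3)) is now strictly proper, so the
# cover-up method applies: A1 = 3/2 at s = -1, A2 = 1/2 at s = -3.
```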

Exponentials in the numerator

The last case we will consider is that of exponentials in the numerator of the function.

Example: Exponentials in the numerator

Find the inverse Laplace Transform of the function F(s).

Solution:
The exponential terms indicate a time delay (see the time delay property). The first thing we need to do is collect terms that have the same time delay.

We now perform a partial fraction expansion for each time delay term (in this case we only need to perform the expansion for the term with the 1.5 second delay), but in general you must do a complete expansion for each term.

Now we can do the inverse Laplace Transform of each term (with the appropriate time delays)

The step function that multiplies the first term could be left off and we would assume it to be implicit. It is included here for consistency with the other two terms.
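The time-delay property being used here can be spot-checked numerically. This is a sketch with an assumed test function f(t) = e⁻ᵗ (so F(s) = 1/(s+1)), a 1.5 second delay as in the example, and a crude quadrature for the Laplace integral.

```python
import math

# Numerical check of L{u(t-T) f(t-T)} = e^{-sT} F(s), for f(t) = e^{-t},
# T = 1.5 s, evaluated at s = 2.
def laplace(g, s, upper=40.0, n=200_000):
    # crude midpoint rule for the Laplace integral on [0, upper]
    h = upper / n
    return sum(g(h*(k + 0.5)) * math.exp(-s*h*(k + 0.5)) for k in range(n)) * h

T, s = 1.5, 2.0
delayed = lambda t: math.exp(-(t - T)) if t >= T else 0.0  # u(t-T) f(t-T)
lhs = laplace(delayed, s)
rhs = math.exp(-s*T) / (s + 1)                             # e^{-sT} F(s)
assert abs(lhs - rhs) < 1e-4
```

The agreement illustrates why each delayed group must be expanded separately: the exponential factor e^{-sT} rides along unchanged while the rational part is partial-fractioned.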


7.3: The Inverse Laplace Transform- Complex Integration - Mathematics

It’s now time to get back to differential equations. We’ve spent the last three sections learning how to take Laplace transforms and how to take inverse Laplace transforms. These are going to be invaluable skills for the next couple of sections so don’t forget what we learned there.

Before proceeding into differential equations we will need one more formula. We will need to know how to take the Laplace transform of a derivative. First recall that \(f^{(n)}\) denotes the \(n^{\text{th}}\) derivative of the function \(f\). We now have the following fact.

Suppose that \(f\), \(f'\), \(f''\), … \(f^{(n-1)}\) are all continuous functions and \(f^{(n)}\) is a piecewise continuous function. Then,

\[\mathcal{L}\left\{ f^{(n)} \right\} = s^{n}F\left( s \right) - s^{n - 1}f\left( 0 \right) - s^{n - 2}f'\left( 0 \right) - \cdots - sf^{(n - 2)}\left( 0 \right) - f^{(n - 1)}\left( 0 \right)\]

Since we are going to be dealing with second order differential equations it will be convenient to have the Laplace transform of the first two derivatives.

\[\begin{aligned}\mathcal{L}\left\{ y' \right\} &= sY\left( s \right) - y\left( 0 \right)\\ \mathcal{L}\left\{ y'' \right\} &= s^{2}Y\left( s \right) - sy\left( 0 \right) - y'\left( 0 \right)\end{aligned}\]

Notice that the two function evaluations that appear in these formulas, \(y(0)\) and \(y'(0)\), are often what we've been using for initial conditions in our IVPs. So, this means that if we are to use these formulas to solve an IVP we will need initial conditions at \(t = 0\).
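A quick spot-check of the first-derivative formula, using a function whose transforms are known in closed form (the choice y(t) = cos t here is just for illustration):

```python
# Check L{y'} = s Y(s) - y(0) for y(t) = cos(t):
#   Y(s)   = s/(s^2+1)          (L{cos t})
#   y'(t)  = -sin(t), so L{y'} = -1/(s^2+1)
#   y(0)   = cos(0) = 1
def Y(s):
    return s / (s*s + 1)

def Yp(s):
    return -1 / (s*s + 1)

for s in (0.5, 1.0, 2.0, 10.0):
    assert abs((s * Y(s) - 1.0) - Yp(s)) < 1e-12
```

The identity holds at every test point, as it must: it is just the algebra s·s/(s²+1) − 1 = −1/(s²+1).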

While Laplace transforms are particularly useful for nonhomogeneous differential equations which have Heaviside functions in the forcing function we’ll start off with a couple of fairly simple problems to illustrate how the process works.

The first step in using Laplace transforms to solve an IVP is to take the transform of every term in the differential equation. For this example the IVP is \(y'' - 10y' + 9y = 5t\) with \(y(0) = -1\) and \(y'(0) = 2\).

\[\mathcal{L}\left\{ y'' \right\} - 10\,\mathcal{L}\left\{ y' \right\} + 9\,\mathcal{L}\left\{ y \right\} = \mathcal{L}\left\{ 5t \right\}\]

Using the appropriate formulas from our table of Laplace transforms gives us the following.

\[s^{2}Y\left( s \right) - sy\left( 0 \right) - y'\left( 0 \right) - 10\left( sY\left( s \right) - y\left( 0 \right) \right) + 9Y\left( s \right) = \frac{5}{s^{2}}\]

Plug in the initial conditions and collect all the terms that have a (Y(s)) in them.

\[\left( s^{2} - 10s + 9 \right)Y\left( s \right) + s - 12 = \frac{5}{s^{2}}\]

At this point it’s convenient to recall just what we’re trying to do. We are trying to find the solution, (y(t)), to an IVP. What we’ve managed to find at this point is not the solution, but its Laplace transform. So, in order to find the solution all that we need to do is to take the inverse transform.

Before doing that let’s notice that in its present form we will have to do partial fractions twice. However, if we combine the two terms up we will only be doing partial fractions once. Not only that, but the denominator for the combined term will be identical to the denominator of the first term. This means that we are going to partial fraction up a term with that denominator no matter what so we might as well make the numerator slightly messier and then just partial fraction once.

This is one of those things where we are apparently making the problem messier, but in the process we are going to save ourselves a fair amount of work!

Combining the two terms gives,

\[Y\left( s \right) = \frac{5 + 12s^{2} - s^{3}}{s^{2}\left( s - 9 \right)\left( s - 1 \right)}\]

The partial fraction decomposition for this transform is,

\[Y\left( s \right) = \frac{A}{s} + \frac{B}{s^{2}} + \frac{C}{s - 9} + \frac{D}{s - 1}\]

Setting numerators equal gives,

\[5 + 12s^{2} - s^{3} = As\left( s - 9 \right)\left( s - 1 \right) + B\left( s - 9 \right)\left( s - 1 \right) + Cs^{2}\left( s - 1 \right) + Ds^{2}\left( s - 9 \right)\]

Picking appropriate values of (s) and solving for the constants gives,
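"Picking appropriate values of s" is the cover-up method, and it can be checked with exact rational arithmetic. A sketch follows; the decomposition form A/s + B/s² + C/(s-9) + D/(s-1) is the standard one for this denominator.

```python
from fractions import Fraction as Fr

# Cover-up evaluation of the constants in
#   (5 + 12 s^2 - s^3) / (s^2 (s-9)(s-1)) = A/s + B/s^2 + C/(s-9) + D/(s-1)
N = lambda s: 5 + 12*s**2 - s**3     # the numerator polynomial

B = N(Fr(0)) / ((Fr(0) - 9) * (Fr(0) - 1))   # cover s^2, set s = 0
C = N(Fr(9)) / (Fr(9)**2 * (Fr(9) - 1))      # cover (s-9), set s = 9
D = N(Fr(1)) / (Fr(1)**2 * (Fr(1) - 9))      # cover (s-1), set s = 1
A = -1 - C - D      # matching the s^3 coefficient: A + C + D = -1

assert (A, B, C, D) == (Fr(50, 81), Fr(5, 9), Fr(31, 81), Fr(-2))
```

The s³-coefficient trick in the last line avoids differentiating to get A, since the repeated root at s = 0 blocks a direct cover-up for it.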

Plugging in the constants gives,

Finally taking the inverse transform gives us the solution to the IVP.

That was a fair amount of work for a problem that probably could have been solved much quicker using the techniques from the previous chapter. The point of this problem however, was to show how we would use Laplace transforms to solve an IVP.

There are a couple of things to note here about using Laplace transforms to solve an IVP. First, using Laplace transforms reduces a differential equation down to an algebra problem. In the case of the last example the algebra was probably more complicated than the straightforward approach from the last chapter. However, in later problems this will be reversed. The algebra, while still very messy, will often be easier than a straightforward approach.

Second, unlike the approach in the last chapter, we did not need to first find a general solution, differentiate this, plug in the initial conditions and then solve for the constants to get the solution. With Laplace transforms, the initial conditions are applied during the first step and at the end we get the actual solution instead of a general solution.

In many of the later problems Laplace transforms will make the problems significantly easier to work than if we had done the straightforward approach of the last chapter. Also, as we will see, there are some differential equations that simply can't be done using the techniques from the last chapter and so, in those cases, Laplace transforms will be our only solution.

Let’s take a look at another fairly simple problem.

As with the first example, let's first take the Laplace transform of all the terms in the differential equation. We'll then plug in the initial conditions to get,

\[\begin{aligned}2\left( s^{2}Y\left( s \right) - sy\left( 0 \right) - y'\left( 0 \right) \right) + 3\left( sY\left( s \right) - y\left( 0 \right) \right) - 2Y\left( s \right) &= \frac{1}{\left( s + 2 \right)^{2}}\\ \left( 2s^{2} + 3s - 2 \right)Y\left( s \right) + 4 &= \frac{1}{\left( s + 2 \right)^{2}}\end{aligned}\]

Now, as we did in the last example we’ll go ahead and combine the two terms together as we will have to partial fraction up the first denominator anyway, so we may as well make the numerator a little more complex and just do a single partial fraction. This will give,

The partial fraction decomposition is then,

Setting numerator equal gives,

In this case it’s probably easier to just set coefficients equal and solve the resulting system of equations rather than pick values of \(s\). So, here is the system and its solution.

\[\left. \begin{aligned} s^{3} &: & A + 2B &= 0\\ s^{2} &: & 6A + 7B + 2C &= -4\\ s^{1} &: & 12A + 4B + 3C + 2D &= -16\\ s^{0} &: & 8A - 4B - 2C - D &= -15 \end{aligned} \right\}\hspace{0.25in} \Rightarrow \hspace{0.25in} \begin{aligned} A &= -\frac{192}{125} & B &= \frac{96}{125}\\ C &= -\frac{2}{25} & D &= -\frac{1}{5} \end{aligned}\]
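Solving a 4x4 system like this by hand is error-prone; with exact rational arithmetic it can be checked in a few lines. This is a sketch using plain Gauss-Jordan elimination on the four coefficient equations (rows ordered s³, s², s¹, s⁰).

```python
from fractions import Fraction as Fr

# Augmented matrix for the system above, one row per power of s.
M = [[Fr(1),  Fr(2),  Fr(0),  Fr(0),  Fr(0)],
     [Fr(6),  Fr(7),  Fr(2),  Fr(0),  Fr(-4)],
     [Fr(12), Fr(4),  Fr(3),  Fr(2),  Fr(-16)],
     [Fr(8),  Fr(-4), Fr(-2), Fr(-1), Fr(-15)]]

n = 4
for i in range(n):                      # Gauss-Jordan elimination
    piv = next(r for r in range(i, n) if M[r][i] != 0)
    M[i], M[piv] = M[piv], M[i]         # bring a nonzero pivot into place
    for r in range(n):
        if r != i and M[r][i] != 0:     # zero out column i in every other row
            f = M[r][i] / M[i][i]
            M[r] = [a - f*b for a, b in zip(M[r], M[i])]

A, B, C, D = (M[i][4] / M[i][i] for i in range(n))
assert (A, B, C, D) == (Fr(-192, 125), Fr(96, 125), Fr(-2, 25), Fr(-1, 5))
```

Because every entry stays a Fraction, the result is the exact -192/125, 96/125, -2/25, -1/5 quoted below rather than a floating-point approximation.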

We will get a common denominator of 125 on all these coefficients and factor that out when we go to plug them back into the transform. Doing this gives,

Notice that we also had to factor a 2 out of the denominator of the first term and fix up the numerator of the last term in order to get them to match up to the correct entries in our table of transforms.

Taking the inverse transform then gives,

Take the Laplace transform of everything and plug in the initial conditions.

\[\begin{aligned} s^{2}Y\left( s \right) - sy\left( 0 \right) - y'\left( 0 \right) - 6\left( sY\left( s \right) - y\left( 0 \right) \right) + 15Y\left( s \right) &= 2\,\frac{3}{s^{2} + 9}\\ \left( s^{2} - 6s + 15 \right)Y\left( s \right) + s - 2 &= \frac{6}{s^{2} + 9} \end{aligned}\]

Now solve for (Y(s)) and combine into a single term as we did in the previous two examples.

\[Y\left( s \right) = \frac{-s^{3} + 2s^{2} - 9s + 24}{\left( s^{2} + 9 \right)\left( s^{2} - 6s + 15 \right)}\]

Now, do the partial fractions on this. First let's get the partial fraction decomposition.

\[Y\left( s \right) = \frac{As + B}{s^{2} + 9} + \frac{Cs + D}{s^{2} - 6s + 15}\]

Now, setting numerators equal gives,

\[-s^{3} + 2s^{2} - 9s + 24 = \left( As + B \right)\left( s^{2} - 6s + 15 \right) + \left( Cs + D \right)\left( s^{2} + 9 \right)\]

Setting coefficients equal and solving for the constants gives,

\[\left. \begin{aligned} s^{3} &: & A + C &= -1\\ s^{2} &: & -6A + B + D &= 2\\ s^{1} &: & 15A - 6B + 9C &= -9\\ s^{0} &: & 15B + 9D &= 24 \end{aligned} \right\}\hspace{0.25in} \Rightarrow \hspace{0.25in} \begin{aligned} A &= \frac{1}{10} & B &= \frac{1}{10}\\ C &= -\frac{11}{10} & D &= \frac{5}{2} \end{aligned}\]

Now, plug these into the decomposition, complete the square on the denominator of the second term and then fix up the numerators for the inverse transform process.

Finally, take the inverse transform.
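The final answer can be sanity-checked numerically. The closed form below is assembled from the decomposition terms (As+B)/(s²+9) and (Cs+D)/((s-3)²+6) with the constants A=1/10, B=1/10, C=-11/10, D=5/2 found above; the ODE y'' - 6y' + 15y = 2 sin(3t) with y(0) = -1, y'(0) = -4 is read off from the transformed equation. This is a verification sketch, not the page's own worked answer.

```python
import math

# Candidate solution from the inverse transform of the decomposition above.
r6 = math.sqrt(6)
def y(t):
    osc = math.cos(3*t)/10 + math.sin(3*t)/30                     # (As+B)/(s^2+9)
    dec = math.exp(3*t) * (-1.1*math.cos(r6*t) - (0.8/r6)*math.sin(r6*t))
    return osc + dec                                              # (Cs+D)/((s-3)^2+6)

assert abs(y(0) - (-1)) < 1e-12   # initial condition y(0) = -1

# Finite-difference check of y'' - 6y' + 15y = 2 sin(3t) at a few points.
h = 1e-5
for t in (0.1, 0.5, 1.0):
    yp  = (y(t + h) - y(t - h)) / (2*h)
    ypp = (y(t + h) - 2*y(t) + y(t - h)) / (h*h)
    assert abs(ypp - 6*yp + 15*y(t) - 2*math.sin(3*t)) < 1e-3
```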

To this point we’ve only looked at IVPs in which the initial values were at \(t = 0\). This is because we need the initial values to be at this point in order to take the Laplace transform of the derivatives. The problem with all of this is that there are IVPs out there in the world that have initial values at places other than \(t = 0\). Laplace transforms would not be as useful as they are if we couldn’t use them on these types of IVPs. So, we need to take a look at an example in which the initial conditions are not at \(t = 0\) in order to see how to handle these kinds of problems.

The first thing that we will need to do here is to take care of the fact that initial conditions are not at (t = 0). The only way that we can take the Laplace transform of the derivatives is to have the initial conditions at (t = 0).

This means that we will need to formulate the IVP in such a way that the initial conditions are at (t = 0). This is actually fairly simple to do, however we will need to do a change of variable to make it work. We are going to define

\[\eta = t - 3\hspace{0.25in} \Rightarrow \hspace{0.25in} t = \eta + 3\]

Let’s start with the original differential equation.

\[y''\left( t \right) + 4y'\left( t \right) = \cos\left( t - 3 \right) + 4t\]

Notice that we put in the \(\left( t \right)\) part on the derivatives to make sure that we get things correct here. We will next substitute in for \(t\).

\[y''\left( \eta + 3 \right) + 4y'\left( \eta + 3 \right) = \cos\left( \eta \right) + 4\left( \eta + 3 \right)\]

Now, to simplify life a little let’s define,

\[u\left( \eta \right) = y\left( \eta + 3 \right)\]

Then, by the chain rule, we get the following for the first derivative.

\[u'\left( \eta \right) = \frac{du}{d\eta} = \frac{dy}{dt}\,\frac{dt}{d\eta} = y'\left( \eta + 3 \right)\]

By a similar argument we get the following for the second derivative.

\[u''\left( \eta \right) = y''\left( \eta + 3 \right)\]

The initial conditions for \(u\left( \eta \right)\) are,

\[\begin{aligned} u\left( 0 \right) &= y\left( 0 + 3 \right) = y\left( 3 \right) = 0\\ u'\left( 0 \right) &= y'\left( 0 + 3 \right) = y'\left( 3 \right) = 7 \end{aligned}\]

The IVP under these new variables is then,

\[u'' + 4u' = \cos\left( \eta \right) + 4\eta + 12,\hspace{0.25in} u\left( 0 \right) = 0,\hspace{0.25in} u'\left( 0 \right) = 7\]
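The substitution step is easy to fumble, so here is a quick numerical check that t = η + 3 really does turn the forcing term cos(t - 3) + 4t into cos(η) + 4η + 12:

```python
import math

# Verify cos(t-3) + 4t == cos(eta) + 4*eta + 12 when t = eta + 3.
for eta in (0.0, 0.7, 2.5, 10.0):
    t = eta + 3
    lhs = math.cos(t - 3) + 4*t
    rhs = math.cos(eta) + 4*eta + 12
    assert abs(lhs - rhs) < 1e-12
```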

This is an IVP that we can use Laplace transforms on provided we replace all the \(t\)'s in our table with \(\eta\)'s. So, taking the Laplace transform of this new differential equation and plugging in the new initial conditions gives,

Note that unlike the previous examples we did not completely combine all the terms this time. In all the previous examples we did this because the denominator of one of the terms was the common denominator for all the terms. Therefore, upon combining, all we did was make the numerator a little messier and reduced the number of partial fractions required down from two to one. Note that all the terms in this transform that had only powers of (s) in the denominator were combined for exactly this reason.

In this transform however, if we combined both of the remaining terms into a single term we would be left with a fairly involved partial fraction problem. Therefore, in this case, it would probably be easier to just do partial fractions twice. We’ve done several partial fractions problems in this section and many partial fraction problems in the previous couple of sections so we’re going to leave the details of the partial fractioning to you to check. Partial fractioning each of the terms in our transform gives us the following.

Plugging these into our transform and combining like terms gives us

Now, taking the inverse transform will give the solution to our new IVP. Don't forget to use \(\eta\)'s instead of \(t\)'s!

This is not the solution that we are after of course. We are after (y(t)). However, we can get this by noticing that

\[y\left( t \right) = y\left( \eta + 3 \right) = u\left( \eta \right) = u\left( t - 3 \right)\]

So, the solution to the original IVP is,

So, we can now do IVPs that don’t have initial conditions at \(t = 0\). We also saw in the last example that it isn’t always best to combine all the terms into a single partial fraction problem as we had been doing prior to this example.

The examples worked in this section would have been just as easy, if not easier, if we had used techniques from the previous chapter. They were worked here using Laplace transforms to illustrate the technique and method.


Inverse Laplace Transform using contour integration

So math stack exchange isn't really helping much with this. So initially, I'm proving the inverse Laplace transform using contour integration. This is a good starting point for my research, for when I eventually need to find the inverse Laplace transform of functions that cannot be found in tables. I need to prove that: $$\mathscr{L}^{-1}\left[ \frac{1}{s}\exp\left(-\sqrt{s}\,x\right) \right] = \operatorname{erfc}\left(\frac{x}{2\sqrt{t}}\right)$$ This inverse Laplace transform can be found using a table of Laplace transforms. Using the following contour:

Source: https://tex.stackexchange.com/questions/269684/hankel-bromwich-contour-problem

Then, after considering all contributions of this contour, we get: $$\mathscr{L}^{-1}\left[ \frac{1}{s}\exp\left(-\sqrt{s}\,x\right) \right] = 1 - \frac{1}{\pi} \int_{0}^{\infty} \exp(-ut) \sin\left(\sqrt{u}\,x\right) \,\frac{du}{u}$$ Here, we can simplify the integral by letting $v^{2} = ut$ and $y = x/\sqrt{t}$ to get: $$\mathscr{L}^{-1}\left[ \frac{1}{s}\exp\left(-\sqrt{s}\,x\right) \right] = 1 - \frac{2}{\pi} \int_{0}^{\infty} \exp(-v^{2}) \sin(yv) \,\frac{dv}{v}.$$ How do I continue from here to eventually get to: $$1 - \operatorname{erf}\left(\frac{y}{2}\right) = 1 - \operatorname{erf}\left(\frac{x}{2\sqrt{t}}\right)?$$
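One standard way to finish (a sketch, not from the original thread) is to differentiate under the integral sign with respect to $y$, which removes the awkward $1/v$ factor and leaves a known Gaussian integral:

```latex
I(y) := \int_0^\infty e^{-v^2}\,\sin(yv)\,\frac{dv}{v},
\qquad
I'(y) = \int_0^\infty e^{-v^2}\cos(yv)\,dv = \frac{\sqrt{\pi}}{2}\,e^{-y^2/4}.
% Since I(0) = 0, integrate back up from 0 to y:
I(y) = \frac{\sqrt{\pi}}{2}\int_0^y e^{-s^2/4}\,ds
     \;\overset{s = 2w}{=}\; \sqrt{\pi}\int_0^{y/2} e^{-w^2}\,dw
     = \frac{\pi}{2}\,\operatorname{erf}\!\left(\frac{y}{2}\right).
```

Substituting back then gives $1 - \frac{2}{\pi}I(y) = 1 - \operatorname{erf}(y/2) = 1 - \operatorname{erf}\big(x/(2\sqrt{t})\big) = \operatorname{erfc}\big(x/(2\sqrt{t})\big)$, as required.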


Aktosun, Spring 2018, Math 3318

Textbook: C. H. Edwards, D. E. Penney, and D. T. Calvis, Differential Equations and Boundary Value Problems, 5th ed., Pearson, Boston, 2015.

Coverage: We will not rely on the textbook much. The textbook is for you to read the material from another viewpoint. Roughly, we will cover some materials in Chapters 1, 2, 3, 4, 5, and 7. The details of the coverage of the material are given in the course outline.

Grading: Letter grades will be assigned based on the three exam grades, with the lowest exam score contributing 20% and the other two exam scores each contributing 40% to the course grade. The grade out of 100 on the first exam is equal to 2.50433X+25.611, where X is the number of correct answers on the first exam. The grade out of 100 on the second exam is equal to 2.625X+23.742, where X is the number of correct answers on the second exam. The grade out of 100 on the third exam is equal to 2.56X+23.2, where X is the number of correct answers on the third exam. All grades out of 100 will correspond to the following scale: 0 < F< 60, 60 < D < 70, 70 < C < 80, 80 < B < 90, 90 < A < 100.

Exams: Exam 1: Thursday, February 22 (during class) Exam 2: Thursday, April 12 (during class) Exam 3 (Final Exam): Thursday, May 10 during 8:00-10:30 am in PKH 107.

Information on each exam: Each exam will contain 30 questions (the first 15 are true/false questions and the remaining 15 are multiple-choice questions with 4 options to choose from). No materials are allowed during the exams besides a pencil or a pen; blank sheets are provided on each exam; an answer sheet to put the answers on will be provided and will be collected at the end of each exam. Separate information related to the coverage will be provided for each exam.

Prerequisites: A grade of C or better in Math 2326 or concurrent enrollment.

Math clinic: Free help is available for this course at the Math Clinic located in PKH 325 (on the third floor of Pickard Hall just across from the elevator). One of the doctoral mathematics students, Ms. Niloofar Ghorbani, is available at the Math Clinic during 8-9 am on Tuesdays and Wednesdays to provide help to students enrolled in Math 3318. Those of you who need mathematical help for Math 3318 can get it from Ms. Ghorbani during those times (Tuesdays and Wednesdays, 8-9 am) in the Math Clinic (PKH 325).

  • ODE: general solution, particular solution, explicit solution, implicit solution arbitrary constants, initial conditions
  • linear ODEs, nonlinear ODEs, linear homogeneous ODEs
  • First-order ODEs: linear, separable, exact, homogeneous, Bernoulli methods to solve such ODEs
  • First-order linear ODEs: standard form, an integrating factor
  • Differential, exact differential, total differential criteria for exactness
  • Substitution for Bernoulli equations, substitution for homogeneous equations
  • Linear ODEs: nonhomogeneous term, homogeneous linear ODE, superposition principle, general solution, particular solution
  • Linear nth-order ODEs: with constant coefficients, Cauchy-Euler equations functions satisfying such homogeneous linear ODEs
  • higher-order linear ODEs: general solution, particular solution, arbitrary constants, initial conditions
  • linear ODEs: the corresponding homogeneous ODE, general solution to the corresponding ODE, linearly independent solutions
  • linear homogeneous ODEs with constant coefficients, the corresponding auxiliary equation, the operator notation D=d/dx
  • linear homogeneous ODEs with constant coefficients: y=e^(rx), y=xe^(rx) with a repeated root, y=e^(ax)cos(bx) and y=e^(ax)sin(bx) with complex roots r=a±ib
  • Cauchy-Euler equations: y=x^r, y=(ln x)x^r with a repeated root, y=x^a cos(b ln x) and y=x^a sin(b ln x) with complex roots r=a±ib
  • method of undetermined coefficients to find a particular solution
  • given the general solution, find the corresponding ODE
  • method of reduction of order
  • Laplace transform, Laplace transform formulas
  • inverse Laplace transform, inverse Laplace transform formulas
  • unit step function
  • solving initial value problems by using Laplace transform

Supplementary problems 6
Brief answers to supplementary problems 6
Solution to supplementary problems 6
Practice problems from the textbook: 1.1 Differential equations: 1-26, 37-42; 1.2 General and particular solutions: 1-18; 1.4 Separable equations: 1-28; 1.5 Linear first-order equations: 1-28; 1.6 Exact equations: 1-40, 43-54; 3.1 Second-order linear equations: 1-28, 36-42; 3.2 General solutions to linear equations: 1-24; 3.3 Homogeneous equations with constant coefficients: 1-32; 3.5 Method of undetermined coefficients: 1-40; 7.1 Laplace transform: 1-32; 7.2 Solving initial-value problems: 1-24; 7.3 Further properties for Laplace transform: 1-24