Articles

3.4: Rates of Change and Behavior of Graphs


Learning Objectives

  • Find the average rate of change of a function.
  • Use a graph to determine where a function is increasing, decreasing, or constant.
  • Use a graph to locate local maxima and local minima.
  • Use a graph to locate the absolute maximum and absolute minimum.

Gasoline costs have experienced some wild fluctuations over the last several decades. Table \(\PageIndex{1}\) lists the average cost, in dollars, of a gallon of gasoline for the years 2005–2012. The cost of gasoline can be considered as a function of year.

Table \(\PageIndex{1}\)

| \(y\) | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 |
|---|---|---|---|---|---|---|---|---|
| \(C(y)\) | 2.31 | 2.62 | 2.84 | 3.30 | 2.41 | 2.84 | 3.58 | 3.68 |

If we were interested only in how the gasoline prices changed between 2005 and 2012, we could compute that the cost per gallon had increased from $2.31 to $3.68, an increase of $1.37. While this is interesting, it might be more useful to look at how much the price changed per year. In this section, we will investigate changes such as these.

Finding the Average Rate of Change of a Function

The price change per year is a rate of change because it describes how an output quantity changes relative to the change in the input quantity. We can see that the price of gasoline in Table \(\PageIndex{1}\) did not change by the same amount each year, so the rate of change was not constant. If we use only the beginning and ending data, we would be finding the average rate of change over the specified period of time. To find the average rate of change, we divide the change in the output value by the change in the input value.

\[\begin{align*} \text{Average rate of change}&=\dfrac{\text{Change in output}}{\text{Change in input}} \\[4pt] &=\dfrac{\Delta y}{\Delta x} \\[4pt] &=\dfrac{y_2-y_1}{x_2-x_1} \\[4pt] &=\dfrac{f(x_2)-f(x_1)}{x_2-x_1} \end{align*} \label{1.3.1}\]

The Greek letter \(\Delta\) (delta) signifies the change in a quantity; we read the ratio as “delta-\(y\) over delta-\(x\)” or “the change in \(y\) divided by the change in \(x\).” Occasionally we write \(\Delta f\) instead of \(\Delta y\), which still represents the change in the function’s output value resulting from a change to its input value. It does not mean we are changing the function into some other function.

In our example, the gasoline price increased by $1.37 from 2005 to 2012. Over 7 years, the average rate of change was

\[\dfrac{\Delta y}{\Delta x}=\dfrac{\$1.37}{7 \text{ years}}\approx 0.196 \text{ dollars per year.} \label{1.3.2}\]

On average, the price of gas increased by about 19.6¢ each year. Other examples of rates of change include:

  • A population of rats increasing by 40 rats per week
  • A car traveling 68 miles per hour (distance traveled changes by 68 miles each hour as time passes)
  • A car driving 27 miles per gallon (distance traveled changes by 27 miles for each gallon)
  • The current through an electrical circuit increasing by 0.125 amperes for every volt of increased voltage
  • The amount of money in a college account decreasing by $4,000 per quarter

Definition: Rate of Change

A rate of change describes how an output quantity changes relative to the change in the input quantity. The units on a rate of change are “output units per input units.”

The average rate of change between two input values is the total change of the function values (output values) divided by the change in the input values.

\[\dfrac{\Delta y}{\Delta x}=\dfrac{f(x_2)-f(x_1)}{x_2-x_1}\]

How To...

Given the value of a function at different points, calculate the average rate of change of a function for the interval between two values \(x_1\) and \(x_2\).

  1. Calculate the difference \(y_2−y_1=\Delta y\).
  2. Calculate the difference \(x_2−x_1=\Delta x\).
  3. Find the ratio \(\dfrac{\Delta y}{\Delta x}\).
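The steps above translate directly into a short computation. The following Python sketch (our illustration, not part of the original text) packages them in a small helper function and applies it to the gasoline data in Table \(\PageIndex{1}\).

```python
# A sketch (not from the original text) that follows the three steps above.
def average_rate_of_change(x1, y1, x2, y2):
    """Return delta-y over delta-x, i.e. (f(x2) - f(x1)) / (x2 - x1)."""
    delta_y = y2 - y1          # Step 1: change in output
    delta_x = x2 - x1          # Step 2: change in input
    return delta_y / delta_x   # Step 3: the ratio

# Average rate of change of the gasoline price between 2005 and 2012 (Table 1):
rate = average_rate_of_change(2005, 2.31, 2012, 3.68)
print(round(rate, 3))  # 0.196 dollars per year, matching the text
```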

Example \(\PageIndex{1}\): Computing an Average Rate of Change

Using the data in Table \(\PageIndex{1}\), find the average rate of change of the price of gasoline between 2007 and 2009.

Solution

In 2007, the price of gasoline was $2.84. In 2009, the cost was $2.41. The average rate of change is

\[\begin{align*} \dfrac{\Delta y}{\Delta x}&=\dfrac{y_2−y_1}{x_2−x_1} \\[4pt] &=\dfrac{\$2.41−\$2.84}{2009−2007} \\[4pt] &=\dfrac{−\$0.43}{2 \text{ years}} \\[4pt] &\approx −\$0.22 \text{ per year} \end{align*}\]

Analysis

Note that a decrease is expressed by a negative change or “negative increase.” A rate of change is negative when the output decreases as the input increases or when the output increases as the input decreases.

Exercise \(\PageIndex{1}\)

Using the data in Table \(\PageIndex{1}\), find the average rate of change between 2005 and 2010.

Solution

\(\dfrac{\$2.84−\$2.31}{5 \text{ years}} =\dfrac{\$0.53}{5 \text{ years}} =\$0.106\) per year.

Example \(\PageIndex{2}\): Computing Average Rate of Change from a Graph

Given the function \(g(t)\) shown in Figure \(\PageIndex{1}\), find the average rate of change on the interval \([−1,2]\).

Solution

At \(t=−1\), Figure \(\PageIndex{2}\) shows \(g(−1)=4\). At \(t=2\), the graph shows \(g(2)=1\).

The horizontal change \(\Delta t=3\) is shown by the red arrow, and the vertical change \(\Delta g(t)=−3\) is shown by the turquoise arrow. The output changes by –3 while the input changes by 3, giving an average rate of change of

\[\dfrac{1−4}{2−(−1)}=\dfrac{−3}{3}=−1\]

Analysis

Note that the order we choose is very important. If, for example, we use \(\dfrac{y_2−y_1}{x_1−x_2}\), we will not get the correct answer. Decide which point will be 1 and which point will be 2, and keep the coordinates fixed as \((x_1,y_1)\) and \((x_2,y_2)\).

Example \(\PageIndex{3}\): Computing Average Rate of Change from a Table

After picking up a friend who lives 10 miles away, Anna records her distance from home over time. The values are shown in Table \(\PageIndex{2}\). Find her average speed over the first 6 hours.

Table \(\PageIndex{2}\)

| \(t\) (hours) | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|---|
| \(D(t)\) (miles) | 10 | 55 | 90 | 153 | 214 | 240 | 282 | 300 |

Solution

Here, the average speed is the average rate of change. She traveled 272 miles (from 10 miles to 282 miles from home) in 6 hours, for an average speed of

\[\begin{align*}\dfrac{282−10}{6−0}&=\dfrac{272}{6} \\[4pt] &\approx 45.3\end{align*}\]

The average speed is about 45.3 miles per hour.

Analysis

Because the speed is not constant, the average speed depends on the interval chosen. For the interval \([2,3]\), the average speed is 63 miles per hour.

Example \(\PageIndex{4}\): Computing Average Rate of Change for a Function Expressed as a Formula

Compute the average rate of change of \(f(x)=x^2−\frac{1}{x}\) on the interval \([2, 4]\).

Solution

We can start by computing the function values at each endpoint of the interval.

\[\begin{align*}f(2)&=2^2−\frac{1}{2} & f(4)&=4^2−\frac{1}{4} \\[4pt] &=4−\frac{1}{2} & &=16−\frac{1}{4} \\[4pt] &=\frac{7}{2} & &=\frac{63}{4}\end{align*}\]

Now we compute the average rate of change.

\[\begin{align*} \text{Average rate of change} &=\dfrac{f(4)−f(2)}{4−2} \\[4pt] &=\dfrac{\frac{63}{4}-\frac{7}{2}}{4-2} \\[4pt] &=\dfrac{\frac{49}{4}}{2} \\[4pt] &= \dfrac{49}{8}\end{align*}\]
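For readers who like to check such results with software, here is a brief Python verification (our own sketch, not part of the original example) that uses exact fractions to reproduce \(\frac{49}{8}\).

```python
# Exact-arithmetic check of Example 4 (a sketch added here, not part of the source text).
from fractions import Fraction

def f(x):
    # f(x) = x^2 - 1/x, computed with exact fractions to avoid rounding
    return Fraction(x)**2 - Fraction(1, x)

rate = (f(4) - f(2)) / (4 - 2)
print(rate)         # 49/8
print(float(rate))  # 6.125
```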

Exercise \(\PageIndex{2}\)

Find the average rate of change of \(f(x)=x−2\sqrt{x}\) on the interval \([1, 9]\).

Solution

\(\frac{1}{2}\)

Example \(\PageIndex{5}\): Finding the Average Rate of Change of a Force

The electrostatic force \(F\), measured in newtons, between two charged particles can be related to the distance between the particles \(d\), in centimeters, by the formula \(F(d)=\frac{2}{d^2}\). Find the average rate of change of force if the distance between the particles is increased from 2 cm to 6 cm.

Solution

We are computing the average rate of change of \(F(d)=\dfrac{2}{d^2}\) on the interval \([2,6]\).

\[\begin{align*} \text{Average rate of change }&=\dfrac{F(6)−F(2)}{6−2} \\[4pt] &=\dfrac{\frac{2}{6^2}-\frac{2}{2^2}}{6-2} && \text{Simplify.} \\[4pt] &=\dfrac{\frac{2}{36}-\frac{2}{4}}{4} \\[4pt] &=\dfrac{-\frac{16}{36}}{4} && \text{Combine numerator terms.} \\[4pt] &=−\dfrac{1}{9} && \text{Simplify.}\end{align*}\]

The average rate of change is \(−\frac{1}{9}\) newton per centimeter.

Example \(\PageIndex{6}\): Finding an Average Rate of Change as an Expression

Find the average rate of change of \(g(t)=t^2+3t+1\) on the interval \([0, a]\). The answer will be an expression involving \(a\).

Solution

We use the average rate of change formula.

\[\begin{align*} \text{Average rate of change} &=\dfrac{g(a)−g(0)}{a−0} && \text{Evaluate.} \\[4pt] &=\dfrac{(a^2+3a+1)−(0^2+3(0)+1)}{a−0} && \text{Simplify.} \\[4pt] &=\dfrac{a^2+3a+1−1}{a} && \text{Simplify and factor.} \\[4pt] &= \dfrac{a(a+3)}{a} && \text{Divide by the common factor } a. \\[4pt] &= a+3 \end{align*}\]

This result tells us the average rate of change in terms of \(a\) between \(t=0\) and any other point \(t=a\). For example, on the interval \([0,5]\), the average rate of change would be \(5+3=8\).
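The same simplification can be reproduced with a computer algebra system. The sketch below (ours, assuming the SymPy library is available) builds the difference quotient for \(g(t)\) on \([0,a]\) and simplifies it to \(a+3\).

```python
# Symbolic check of Example 6 (a sketch, assuming SymPy is installed).
import sympy as sp

a, t = sp.symbols('a t', positive=True)
g = t**2 + 3*t + 1

# Average rate of change of g on [0, a]: (g(a) - g(0)) / (a - 0)
rate = sp.simplify((g.subs(t, a) - g.subs(t, 0)) / (a - 0))
print(rate)             # a + 3
print(rate.subs(a, 5))  # 8, the average rate of change on [0, 5]
```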

Exercise \(\PageIndex{3}\)

Find the average rate of change of \(f(x)=x^2+2x−8\) on the interval \([5, a]\).

Solution

\(a+7\)

Using a Graph to Determine Where a Function is Increasing, Decreasing, or Constant

As part of exploring how functions change, we can identify intervals over which the function is changing in specific ways. We say that a function is increasing on an interval if the function values increase as the input values increase within that interval. Similarly, a function is decreasing on an interval if the function values decrease as the input values increase over that interval. The average rate of change of an increasing function is positive, and the average rate of change of a decreasing function is negative. Figure \(\PageIndex{3}\) shows examples of increasing and decreasing intervals on a function.

Figure \(\PageIndex{3}\): The function \(f(x)=x^3−12x\) is increasing on \((−\infty,−2)\cup(2,\infty)\) and is decreasing on \((−2,2)\).

While some functions are increasing (or decreasing) over their entire domain, many others are not. A value of the input where a function changes from increasing to decreasing (as we go from left to right, that is, as the input variable increases) is called a local maximum. If a function has more than one, we say it has local maxima. Similarly, a value of the input where a function changes from decreasing to increasing as the input variable increases is called a local minimum. The plural form is “local minima.” Together, local maxima and minima are called local extrema, or local extreme values, of the function. (The singular form is “extremum.”) Often, the term local is replaced by the term relative. In this text, we will use the term local.

Clearly, a function is neither increasing nor decreasing on an interval where it is constant. A function is also neither increasing nor decreasing at extrema. Note that we have to speak of local extrema, because any given local extremum as defined here is not necessarily the highest maximum or lowest minimum in the function’s entire domain.

For the function whose graph is shown in Figure \(\PageIndex{4}\), the local maximum is 16, and it occurs at \(x=−2\). The local minimum is \(−16\) and it occurs at \(x=2\).

To locate the local maxima and minima from a graph, we need to observe the graph to determine where the graph attains its highest and lowest points, respectively, within an open interval. Like the summit of a roller coaster, the graph of a function is higher at a local maximum than at nearby points on both sides. The graph will also be lower at a local minimum than at neighboring points. Figure \(\PageIndex{5}\) illustrates these ideas for a local maximum.

These observations lead us to a formal definition of local extrema.

Local Minima and Local Maxima

  • A function \(f\) is an increasing function on an open interval if \(f(b)>f(a)\) for every \(a\), \(b\) in the interval where \(b>a\).
  • A function \(f\) is a decreasing function on an open interval if \(f(b)<f(a)\) for every \(a\), \(b\) in the interval where \(b>a\).

A function \(f\) has a local maximum at a point \(b\) in an open interval \((a,c)\) if \(f(b)\) is greater than or equal to \(f(x)\) for every point \(x\) (\(x\) does not equal \(b\)) in the interval. Likewise, \(f\) has a local minimum at a point \(b\) in \((a,c)\) if \(f(b)\) is less than or equal to \(f(x)\) for every \(x\) (\(x\) does not equal \(b\)) in the interval.

Example \(\PageIndex{7}\): Finding Increasing and Decreasing Intervals on a Graph

Given the function \(p(t)\) in Figure \(\PageIndex{6}\), identify the intervals on which the function appears to be increasing.

Solution

We see that the function is not constant on any interval. The function is increasing where it slants upward as we move to the right and decreasing where it slants downward as we move to the right. The function appears to be increasing from \(t=1\) to \(t=3\) and from \(t=4\) on.

In interval notation, we would say the function appears to be increasing on the interval \((1,3)\) and the interval \((4,\infty)\).

Analysis

Notice in this example that we used open intervals (intervals that do not include the endpoints), because the function is neither increasing nor decreasing at \(t=1\), \(t=3\), and \(t=4\). These points are the local extrema (two minima and a maximum).

Example \(\PageIndex{8}\): Finding Local Extrema from a Graph

Graph the function \(f(x)=\frac{2}{x}+\frac{x}{3}\). Then use the graph to estimate the local extrema of the function and to determine the intervals on which the function is increasing.

Solution

Using technology, we find that the graph of the function looks like that in Figure \(\PageIndex{7}\). It appears there is a low point, or local minimum, between \(x=2\) and \(x=3\), and a mirror-image high point, or local maximum, somewhere between \(x=−3\) and \(x=−2\).

Analysis

Most graphing calculators and graphing utilities can estimate the location of maxima and minima. Figure \(\PageIndex{8}\) provides screen images from two different technologies, showing the estimate for the local maximum and minimum.

Based on these estimates, the function is increasing on the interval \((−\infty,−2.449)\) and \((2.449,\infty)\). Notice that, while we expect the extrema to be symmetric, the two different technologies agree only up to four decimals due to the differing approximation algorithms used by each. (The exact location of the extrema is at \(\pm\sqrt{6}\), but determining this requires calculus.)
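The estimate a graphing utility produces can also be reproduced with a numerical minimizer. The sketch below (our own, assuming SciPy is available) finds the low point of \(f(x)=\frac{2}{x}+\frac{x}{3}\) on the positive branch; by symmetry, the local maximum occurs at the opposite input value.

```python
# Numerical estimate of the local minimum of f(x) = 2/x + x/3 (a sketch assuming SciPy).
from scipy.optimize import minimize_scalar

def f(x):
    return 2 / x + x / 3

# Search the positive branch, where the graph shows a low point between x = 2 and x = 3.
result = minimize_scalar(f, bounds=(0.1, 10), method='bounded')
print(round(result.x, 3))    # about 2.449, i.e. sqrt(6)
print(round(result.fun, 3))  # about 1.633, the local minimum value
```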

Exercise \(\PageIndex{8}\)

Graph the function \(f(x)=x^3−6x^2−15x+20\) to estimate the local extrema of the function. Use these to determine the intervals on which the function is increasing and decreasing.

Solution

The local maximum appears to occur at \((−1,28)\), and the local minimum occurs at \((5,−80)\). The function is increasing on \((−\infty,−1)\cup(5,\infty)\) and decreasing on \((−1,5)\).

Graph of a polynomial with a local maximum at (-1, 28) and local minimum at (5, -80).

Example \(\PageIndex{9}\): Finding Local Maxima and Minima from a Graph

For the function \(f\) whose graph is shown in Figure \(\PageIndex{9}\), find all local maxima and minima.

Solution

Observe the graph of \(f\). The graph attains a local maximum at \(x=1\) because it is the highest point in an open interval around \(x=1\). The local maximum is the \(y\)-coordinate at \(x=1\), which is 2.

The graph attains a local minimum at \(x=−1\) because it is the lowest point in an open interval around \(x=−1\). The local minimum is the \(y\)-coordinate at \(x=−1\), which is \(−2\).

We will now return to our toolkit functions and discuss their graphical behavior in Figure \(\PageIndex{10}\), Figure \(\PageIndex{11}\), and Figure \(\PageIndex{12}\).



Figure \(\PageIndex{12}\)

Using a Graph to Locate the Absolute Maximum and Absolute Minimum

There is a difference between locating the highest and lowest points on a graph in a region around an open interval (locally) and locating the highest and lowest points on the graph for the entire domain. The \(y\)-coordinates (output) at the highest and lowest points are called the absolute maximum and absolute minimum, respectively. To locate absolute maxima and minima from a graph, we need to observe the graph to determine where the graph attains its highest and lowest points on the domain of the function (Figure \(\PageIndex{13}\)).

Not every function has an absolute maximum or minimum value. The toolkit function \(f(x)=x^3\) is one such function.

Absolute Maxima and Minima

  • The absolute maximum of \(f\) at \(x=c\) is \(f(c)\) where \(f(c)≥f(x)\) for all \(x\) in the domain of \(f\).
  • The absolute minimum of \(f\) at \(x=d\) is \(f(d)\) where \(f(d)≤f(x)\) for all \(x\) in the domain of \(f\).

Example \(\PageIndex{10}\): Finding Absolute Maxima and Minima from a Graph

For the function \(f\) shown in Figure \(\PageIndex{14}\), find all absolute maxima and minima.

Solution

Observe the graph of \(f\). The graph attains an absolute maximum in two locations, \(x=−2\) and \(x=2\), because at these locations, the graph attains its highest point on the domain of the function. The absolute maximum is the \(y\)-coordinate at \(x=−2\) and \(x=2\), which is 16.

The graph attains an absolute minimum at \(x=3\), because it is the lowest point on the domain of the function’s graph. The absolute minimum is the \(y\)-coordinate at \(x=3\), which is \(−10\).

Key Equations

  • Average rate of change: \(\dfrac{\Delta y}{\Delta x}=\dfrac{f(x_2)-f(x_1)}{x_2-x_1}\)

Key Concepts

  • A rate of change relates a change in an output quantity to a change in an input quantity. The average rate of change is determined using only the beginning and ending data. See Example.
  • Identifying points that mark the interval on a graph can be used to find the average rate of change. See Example.
  • Comparing pairs of input and output values in a table can also be used to find the average rate of change. See Example.
  • An average rate of change can also be computed by determining the function values at the endpoints of an interval described by a formula. See Example and Example.
  • The average rate of change can sometimes be determined as an expression. See Example.
  • A function is increasing where its rate of change is positive and decreasing where its rate of change is negative. See Example.
  • A local maximum is where a function changes from increasing to decreasing and has an output value larger (more positive or less negative) than output values at neighboring input values.
  • A local minimum is where the function changes from decreasing to increasing (as the input increases) and has an output value smaller (more negative or less positive) than output values at neighboring input values.
  • Minima and maxima are also called extrema.
  • We can find local extrema from a graph. See Example and Example.
  • The highest and lowest points on a graph indicate the maxima and minima. See Example.

6.2.3.4: The Arrhenius Law - Arrhenius Plots

In 1889, Svante Arrhenius proposed the Arrhenius equation from his direct observations of the plots of rate constants vs. temperatures:

\[k = A e^{-E_a/RT}\]

The activation energy, Ea, is the minimum energy molecules must possess in order to react to form a product. The slope of the Arrhenius plot can be used to find the activation energy. The Arrhenius plot can also be used by extrapolating the line back to the y-intercept to obtain the pre-exponential factor, A. This factor is significant because A = p×Z, where p is a steric factor and Z is the collision frequency. The pre-exponential, or frequency, factor is related to the number of times molecules collide in the orientation necessary to cause a reaction. It is important to note that the Arrhenius equation is based on collision theory, which states that particles must collide with proper orientation and with enough energy. Now that we have obtained the activation energy and pre-exponential factor from the Arrhenius plot, we can solve for the rate constant at any temperature using the Arrhenius equation.

The Arrhenius plot is obtained by plotting the logarithm of the rate constant, k, versus the inverse temperature, 1/T. The resulting negatively sloped line is useful in finding the missing components of the Arrhenius equation. Extrapolation of the line back to the y-intercept yields the value for ln A. The slope of the line is equal to the negative activation energy divided by the gas constant, R. As a rule of thumb in most biological and chemical reactions, the reaction rate roughly doubles for every 10 degrees Celsius increase in temperature.

Looking at the Arrhenius equation, the denominator of the exponential function contains the gas constant, R, and the temperature, T. This is only the case when dealing with moles of a substance, because R has the units of J/(mol K). When dealing with molecules of a substance, the gas constant in the denominator of the exponential function of the Arrhenius equation is replaced by the Boltzmann constant, kB, which has the units J/K. At room temperature (25 °C, or about 298 K), kBT is the thermal energy available to a molecule and is equal to approximately 200 wavenumbers.

It is important to note that the decision to use the gas constant or the Boltzmann constant in the Arrhenius equation depends primarily on the canceling of the units. The argument of the exponential must be unitless, so all the units in the exponential factor must cancel out. If the activation energy is in units of joules per mole, then the gas constant, R, should be used in the denominator. However, if the activation energy is in units of joules per molecule, then the Boltzmann constant, kB, should be used.
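To make the procedure concrete, here is a minimal Python sketch of that linear fit. The temperature and rate-constant values are invented for illustration; only the relationships slope = −Ea/R and y-intercept = ln A come from the discussion above.

```python
# Sketch: extract Ea and A from an Arrhenius plot by fitting ln k vs. 1/T.
# The data below are hypothetical; only the slope/intercept interpretation is from the text.
import numpy as np

R = 8.314  # gas constant, J/(mol K)

T = np.array([300.0, 320.0, 340.0, 360.0])      # temperatures, K (hypothetical)
k = np.array([1.0e-4, 5.2e-4, 2.1e-3, 7.0e-3])  # rate constants (hypothetical)

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)  # straight-line fit of ln k vs. 1/T

Ea = -slope * R          # activation energy, J/mol (slope = -Ea/R)
A = np.exp(intercept)    # pre-exponential factor (intercept = ln A)

print(f"Ea = {Ea / 1000:.1f} kJ/mol, A = {A:.3g}")
```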




Climate Change Indicators: Tropical Cyclone Activity

This indicator examines the frequency, intensity, and duration of hurricanes and other tropical storms in the Atlantic Ocean, Caribbean, and Gulf of Mexico.

  • Figure 1. Number of Hurricanes in the North Atlantic, 1878–2020

This graph shows the number of hurricanes that formed in the North Atlantic Ocean each year from 1878 to 2020, along with the number that made landfall in the United States. The orange curve shows how the total count in the green curve can be adjusted to attempt to account for the lack of aircraft and satellite observations in early years. All three curves have been smoothed using a five-year average, plotted at the middle year. The most recent average (2016–2020) is plotted at 2018.

Data source: NOAA, 2021 4 Vecchi and Knutson, 2011 5
Web update: April 2021

This figure shows total annual Accumulated Cyclone Energy (ACE) Index values, which account for cyclone strength, duration, and frequency, from 1950 through 2020. The National Oceanic and Atmospheric Administration has defined “near normal,” “above normal,” and “below normal” ranges based on the distribution of ACE Index values over the 30 years from 1981 to 2010.

Data source: NOAA, 2021 6
Web update: April 2021

This figure presents annual values of the Power Dissipation Index (PDI), which accounts for cyclone strength, duration, and frequency. Tropical North Atlantic sea surface temperature trends are provided for reference. Note that sea surface temperature is measured in different units, but the values have been plotted alongside the PDI to show how they compare. The lines have been smoothed using a five-year weighted average, plotted at the middle year. The most recent average (2015–2019) is plotted at 2017.

Data source: Emanuel, 2021 7
Web update: April 2021

Key Points

  • Since 1878, about six to seven hurricanes have formed in the North Atlantic every year. Roughly two per year make landfall in the United States. The total number of hurricanes (particularly after being adjusted for improvements in observation methods) and the number reaching the United States do not indicate a clear overall trend since 1878 (see Figure 1).
  • According to the total annual ACE Index, cyclone intensity has risen noticeably over the past 20 years, and eight of the 10 most active years since 1950 have occurred since the mid-1990s (see Figure 2). Relatively high levels of cyclone activity were also seen during the 1950s and 1960s.
  • The PDI (see Figure 3) shows fluctuating cyclone intensity for most of the mid- to late 20th century, followed by a noticeable increase since 1995 (similar to the ACE Index). These trends are shown with associated variations in sea surface temperature in the tropical North Atlantic for comparison (see Figure 3).
  • Despite the apparent increases in tropical cyclone activity in recent years, shown in Figures 2 and 3, changes in observation methods over time make it difficult to know whether tropical storm activity has actually shown an increase over time. 3

Background

Hurricanes, tropical storms, and other intense rotating storms fall into a general category called cyclones. There are two main types of cyclones: tropical and extratropical (those that form outside the tropics). Tropical cyclones get their energy from warm tropical oceans. Extratropical cyclones get their energy from the jet stream and from temperature differences between cold, dry air masses from higher latitudes and warm, moist air masses from lower latitudes.

This indicator focuses on tropical cyclones in the Atlantic Ocean, Caribbean, and Gulf of Mexico. Tropical cyclones are most common during the “hurricane season,” which runs from June through November. The effects of tropical cyclones are numerous and well known. At sea, storms disrupt and endanger shipping traffic. When cyclones encounter land, their intense rains and high winds can cause severe property damage, loss of life, soil erosion, and flooding. The associated storm surge—the large volume of ocean water pushed toward shore by the cyclone’s strong winds—can cause severe flooding, erosion, and destruction.

Climate change is expected to affect tropical cyclones by increasing sea surface temperatures, a key factor that influences cyclone formation and behavior. The U.S. Global Change Research Program and the Intergovernmental Panel on Climate Change project that tropical cyclones will become more intense over the 21st century, with higher wind speeds and heavier rains. 1,2

About the Indicator

Records of tropical cyclones in the Atlantic Ocean have been collected since the 1800s. The most reliable long-term records focus on hurricanes, which are the strongest category of tropical cyclones in the Atlantic, with wind speeds of at least 74 miles per hour. This indicator uses historical data from the National Oceanic and Atmospheric Administration to track the number of hurricanes per year in the North Atlantic (north of the equator) and the number reaching the United States since 1878. Some hurricanes over the ocean might have been missed before the start of aircraft and satellite observation, so scientists have used other evidence, such as ship traffic records, to estimate the actual number of hurricanes that might have formed in earlier years.

This indicator also looks at the Accumulated Cyclone Energy (ACE) Index and the Power Dissipation Index (PDI), which are two ways of monitoring the frequency, strength, and duration of tropical cyclones based on wind speed measurements.

Every cyclone has an ACE Index value, which is a number based on the maximum wind speed measured at six-hour intervals over the entire time that the cyclone is classified as at least a tropical storm (wind speed of at least 39 miles per hour). Therefore, a storm’s ACE Index value accounts for both strength and duration. The National Oceanic and Atmospheric Administration calculates the total ACE Index value for an entire hurricane season by adding the values for all named storms, including subtropical storms, tropical storms, and hurricanes. The resulting annual total accounts for cyclone strength, duration, and frequency. For this indicator, the index has been converted to a scale where 100 equals the median value (the midpoint) over a base period from 1981 to 2010. The thresholds in Figure 2 define whether the ACE Index for a given year is close to normal, significantly above normal, or significantly below normal.
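The text describes the ACE Index qualitatively. As an illustration only, the sketch below applies the conventional calculation (the sum of the squares of the 6-hourly maximum sustained winds, in knots, scaled by 10^-4, counted while the storm is at least tropical-storm strength); the wind records are hypothetical, and the exact operational procedure is NOAA's, not ours.

```python
# Illustrative sketch of the conventional ACE calculation (hypothetical wind data).
def ace_for_storm(winds_kt):
    """ACE contribution of one storm from its 6-hourly maximum sustained winds (knots).

    Only periods at tropical-storm strength or stronger (>= 34 knots, about 39 mph) count.
    """
    return 1e-4 * sum(v**2 for v in winds_kt if v >= 34)

storm_a = [30, 40, 55, 70, 65, 45, 30]   # hypothetical 6-hourly winds for one storm
storm_b = [35, 50, 60, 50, 35]           # a second hypothetical storm

season_ace = ace_for_storm(storm_a) + ace_for_storm(storm_b)
print(round(season_ace, 2))  # the annual total is the sum over all named storms
```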

Like the ACE Index, the PDI is based on measurements of wind speed, but it uses a different calculation method that places more emphasis on storm intensity. This indicator shows the annual PDI value, which represents the sum of PDI values for all named storms during the year.

About the Data

Indicator Notes

Over time, data collection methods have changed as technology has improved. For example, wind speed collection methods have evolved substantially over the past 60 years, while aircraft reconnaissance began in 1944 and satellite tracking around 1966. Figure 1 shows how older hurricane counts have been adjusted to attempt to account for the lack of aircraft and satellite observations. Changes in data gathering technologies could substantially influence the overall patterns in Figures 2 and 3. The effects of these changes on data consistency over the life of the indicator would benefit from additional research.

While Figures 2 and 3 cover several different aspects of tropical cyclones, there are other important factors not covered here, including the size of each storm, the amount of rain, and the height of the storm surge. The reason for the recent divergence between cyclone activity and sea surface temperature in Figure 3 has not been identified conclusively, but it may relate to other factors that influence the formation of storms, such as the difference in wind speeds at different levels in the atmosphere (called vertical wind shear). 8


Relative Economic Strength

As the name may suggest, the relative economic strength approach looks at the strength of economic growth in different countries in order to forecast the direction of exchange rates. The rationale behind this approach is based on the idea that a strong economic environment and potentially high growth are more likely to attract investments from foreign investors. And, in order to purchase investments in the desired country, an investor would have to purchase the country's currency—creating increased demand that should cause the currency to appreciate.

This approach doesn't just look at the relative economic strength between countries. It takes a more general view and looks at all investment flows. For instance, another factor that can draw investors to a certain country is interest rates. High interest rates will attract investors looking for the highest yield on their investments, causing demand for the currency to increase, which again would result in an appreciation of the currency.

Conversely, low interest rates can also sometimes induce investors to avoid investing in a particular country or even borrow that country's currency at low interest rates to fund other investments. Many investors did this with the Japanese yen when the interest rates in Japan were at extreme lows. This strategy is commonly known as the carry trade.

The relative economic strength method doesn't forecast what the exchange rate should be, unlike the PPP approach. Rather, this approach gives the investor a general sense of whether a currency is going to appreciate or depreciate and an overall feel for the strength of the movement. It is typically used in combination with other forecasting methods to produce a complete result.


Graph Key Trends

Once you’ve decided the general boundary of your system—what you’ll focus on, and what you won’t—it is useful to draw simple graphs to describe the changing behaviors of the key variables in the system.

These graphs, called behavior-over-time graphs, encourage dynamic rather than static thinking, shifting focus from single events to changing patterns of behavior. This process encourages deeper thinking about what is changing and over what time frame. For students, graphs also act as another, visual way to communicate their thoughts and ideas.

When creating a behavior-over-time graph, remember that the focus is on behavior changing over time; therefore, the x or horizontal axis must represent time. You can use any meaningful measurement: seconds, days, weeks, months, decades, and so on.

The behavior that is changing is shown on the y or vertical axis. You can plot any variable that increases or decreases. Often there's a standard measure you can use, although you may want to plot a variable that is not easily measured, such as a character's happiness or degree of team spirit. This will require the use of a scale, such as 0-10, or you may label the y-axis with appropriate adjectives, such as "poor" at the bottom, "average" in the middle, and "superior" at the top.

As you look at behavior over time, you may find that the change is linear or exponential (upward or downward), or the pattern may oscillate. The question you would next consider is what set of interrelationships may be driving the behavior you've described in the graph.
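As a concrete illustration (not part of the original text), the following Python sketch draws a simple behavior-over-time graph with matplotlib, using an invented "team spirit" variable rated on a 0-10 scale.

```python
# A behavior-over-time graph sketch with hypothetical data.
import matplotlib.pyplot as plt

weeks = list(range(1, 9))                 # x-axis: time, in weeks
team_spirit = [2, 3, 3, 5, 6, 6, 8, 9]    # y-axis: the changing behavior, on a 0-10 scale

plt.plot(weeks, team_spirit, marker='o')
plt.xlabel('Time (weeks)')
plt.ylabel('Team spirit (0 = poor, 10 = superior)')
plt.ylim(0, 10)
plt.title('Behavior-over-time graph')
plt.show()
```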


Interest Rates And Market Behavior: 5, 10, And 20 Years Ago

Yesterday’s post on jobs made some interesting points about the relative performance of the economy today and in previous decades, highlighting both strengths and weaknesses of the current recovery.

A look at financial figures over the same time periods offers a different but equally interesting set of observations.


I lined up interest rates and stock values and valuations because financial assets are inextricably linked. The changes can provide some useful information about how normal (or not) current figures are, and what that might mean over the next couple of decades. Mortgage rates and housing offer a similar and supporting look at interest rates and asset prices.

The relationship between interest rates and stocks

U.S. Treasury rates have dropped consistently, despite some volatility, for 25 years. Think about that: very few trends last so long. Rates today are less than a quarter of rates in 1990. Mortgage rates have dropped almost as much, in terms of percentage points, but only to just over a third, rather than less than a quarter, of 1990 levels.

Stocks are typically valued with respect to interest rates: at lower interest rates, stocks are worth more. We can conclude that the decline in interest rates should have helped push stock prices higher, and we see exactly that.

The difference is explained by the rise in stock valuations, from about 16 to around 20 times earnings. Much of that rise in valuations can be attributed to the decline in interest rates. You can see the consistency of this change in the 2005–2015 numbers as compared with the 1990 and 1995 numbers.

The reason is simple: the implied return, from earnings, on a stock purchase is the reciprocal of the P/E ratio, or E/P. The P/E ratio of 15.69 in 1990 represents an earnings return of 6.3 percent, which is actually below the Treasury rate at that time. The current P/E ratio of 20.21 equates to a return of 4.95 percent on earnings, which is well above the current Treasury rate. The significantly lower interest rates of the past 10 years should result in higher valuation levels than in the 1990s, and that is what we see.
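A quick computation (our own check, not from the article) confirms the reciprocal relationship described here; the article rounds these figures to 6.3 and 4.95 percent.

```python
# Implied earnings return (E/P) as the reciprocal of the P/E ratio.
for label, pe in [("1990", 15.69), ("current", 20.21)]:
    print(f"P/E {pe} -> earnings return {100 / pe:.2f}%")
# P/E 15.69 -> earnings return 6.37%
# P/E 20.21 -> earnings return 4.95%
```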

Arguably, with interest rates dropping by about 6.5 percent, you might even have expected valuations to increase by much more, and we saw just that in 2000. Even there, though, the implied return of 3.4 percent is about 3 percent less than the 6.3 percent of 1990. We saw the same behavior in mortgage rates and housing prices, with declining rates pushing up prices to very high levels, and then to a collapse, and then back up.

The fact that two very different asset classes showed the same behavior, for essentially the same reasons, indicates that interest rates are indeed a fundamental determinant of market behavior, in accord with what theory would suggest.

What does this mean for the future?

If rates were to increase—which at some point is very probable, verging on certain depending on the time frame—valuations could reasonably be expected to adjust back down to make earnings-based returns more consistent with the higher rates.

There is no reason to expect this will happen immediately, as rates may remain low for some time. Equally, if rates adjust while earnings continue to rise, we may see the valuation adjustment occur without a stock market correction. Either way, this is another pending headwind for the market, highlighted by the still very favorable interest rate conditions we now have.


Using Data Analytics to Change Behavior

The remotely hosted, advanced data-analytics application played a critical role in the IV Medication Safety Improvement Initiative (Box 1). The application reported DERS usage for each infusion, which made compliance with selecting the medication from the drug library easier to see and allowed the nursing staff to more easily monitor their performance. Authorized staff could access the smart pump data anywhere, anytime from an appropriate digital device. Retrospective data aggregated from the hospital’s smart pump system could be easily viewed on the application’s “dashboard” display. Staff could easily review, report, create graphs and slides, and share important information with nursing directors, educators, senior management, and—most importantly—bedside clinicians.

Reporting DERS usage rates by unit was challenging, because the pumps were mobile and moved regularly around the hospital. However, rates could be tracked by patient profile. Medical/surgical nurses, for example, could easily see how they compared to critical care nurses. Having the nurses see the data on a regular basis also sparked a spirit of competition between departments, which further increased motivation and nursing engagement in the initiative.

Provide frequent, fresh communication

Report distribution started out weekly, then changed first to biweekly and later to monthly as nurses’ use of the DERS drug library began to increase (Table 1). At first, IV Safety Improvement bulletins were distributed to the chief nursing officer, nursing directors, and nursing educators, but not to individual nurses (who were already receiving a great deal of email). As awareness increased and nurses became more interested and involved, individuals were added to the email distribution list.

Continuously updating the before-and-after data kept the information fresh, kept nurses interested, showed their successes, and fostered a spirit of competition. As nurses saw improvements reflected in the changing data, they became increasingly involved in pharmacy-nursing collaboration (Box 2).

IV medication safety improvements

Using data to help drive improvement resulted in steady increases in the use of the DERS drug library to select the medication to be infused (Figures 1, 3, and 4).

The IV Medication Safety Improvement Initiative also:

  • Provided nursing and physician education regarding the importance of using DERS
  • Provided a mechanism for nursing to inform pharmacy of any discrepancies between the drug library and actual practice
  • Eliminated reported discrepancies between the drug library and actual practice
  • Increased pharmacy-nursing collaboration
  • Identified drug library entries that needed to be added or updated
  • Increased nursing engagement
  • Increased use of DERS safeguards by selecting the medication to be infused from the drug library
  • Helped strengthen ORMC’s culture of safety

Every year, ORMC departments can submit one or more projects for an enterprisewide quality and patient safety award. In 2015, the IV Safety Improvement initiative was awarded top honors at the GHVHS Quality and Patient Safety Awards, an important validation of the staff’s concerted, innovative efforts and results.

Considerable progress has been made in putting DERS and continuous quality improvement at the forefront of medication safety awareness. Nonetheless, there is more work to be done. Pharmacy presentations at nursing education sessions and monthly reports on DERS usage continue. Multiple communication channels are still available. Email has emerged as the most commonly used means of communication.

In the future, staff will also analyze data on “good catches,” alerts, and overrides as another way to identify needed improvement in nursing practice and/or smart pump drug libraries. The smart pump data and remotely hosted data-analytics application can also be used to help staff demonstrate cost avoidance, return on investment achieved through a reduction in adverse drug events, and the hospital’s compliance with certain requirements of The Joint Commission. Finally, optimizing the drug libraries helps pave the way for future implementations of smart pump-EMR interoperability.

  • Eliminating mismatches in the smart pump drug library helps drive use of DERS.
  • Education, easy-to-use reporting systems, face-to-face discussions, and ongoing communication with frontline nursing are effective ways to educate the nursing staff and discover inadequacies in the drug library.
  • Responding to nursing feedback in a timely manner keeps staff engaged.
  • Frequent communication of data showing progress is essential.
  • Data reports help celebrate nurse accomplishments and further strengthen nursing engagement in the IV medication safety improvement efforts.

The ISMP points out that, like seat belts, the safety features of any safety technology can be bypassed, despite various mandates requiring their use. “Thus, it is not enough to purchase smart pumps, program the library to enable the technology, distribute the pumps, educate users, and hope that the dose-checking feature will always be used. A culture of safety must exist that drives clinicians to avoid bypassing such a safety feature, or to report conditions that encourage workarounds so they can be remedied” (ISMP, 2007).

At ORMC, before-and-after data from the analytics reports provided “something new” that helped to increase staff awareness of the need to always select the medication to be infused from the DERS drug library, motivate staff to report conditions that could encourage workarounds, and increase nurses’ engagement in the medication safety improvement process. Nurses continue to work with pharmacy to keep the drug library up to date. They know that their feedback is important and have become increasingly engaged in the compliance-improvement process. They recognize more fully that smart pump safety features can save lives and have to be used. Continuously fine-tuning the drug library in the continuing quality loop helps ensure that bedside clinicians have the latest data set for the safest clinical practice—and that the right thing to do is the easy thing to do.

Note: We would like to express our appreciation to BD and Sally Graver for helping to ensure accuracy and completeness during manuscript development.

  1. a. Alaris System with Guardrails Suite MX software, BD, Franklin Lakes, NJ.
  2. b. Knowledge Portal for Infusion Technologies, BD, Franklin Lakes, NJ.

Nicole Karchner, PharmD, was the clinical pharmacy manager at Orange Regional Medical Center in Middletown, New York, from 2009 to 2016. She is now the director of pharmacy management for Crystal Run Health Plans. She can be contacted at [email protected]

REFERENCES

American Society of Health-System Pharmacists. (2008). Proceedings of a summit on preventing patient harm and death from IV medication errors. Am J Health-Syst Pharm, 65(24), 2367–2379.

Fields, M., & Peterman, J. (2005). Intravenous medication safety system averts high-risk medication errors and provides actionable data. Nurs Admin Quar, 29(1), 78–87.

Institute for Safe Medication Practices. (2007, April 19). Smart pumps are not smart on their own. ISMP Medication Safety Alert! Retrieved March 2, 2017 from http://www.ismp.org/newsletters/acutecare/articles/20070419.asp.

Maddox, R., Danello, S., Williams, G. K., & Fields, M. (2008). Intravenous infusion safety initiative: Collaboration, evidence-based best practices, and “smart” technology help avert high-risk adverse drug events and improve patient outcomes. In K. Henriksen, J. B. Battles, M. A. Keyes, & M. L. Grady (Eds.), Advances in Patient Safety: New Directions and Alternative Approaches, Vol. 4 (pp. 143–156). Rockville, MD: Agency for Healthcare Research and Quality.

The National Institute for Occupational Safety and Health (NIOSH). (2016, July 16). Hierarchy of Controls. Retrieved March 2, 2017 from https://www.cdc.gov/niosh/topics/hierarchy/default.html.

Orange Regional Medical Center (ORMC). (2015). IV Medication Safety Improvement Initiative, data on file.

Williams, C. K., & Maddox, R.R. (2005). Implementation of an I.V. medication safety system. Am J Health-Syst Pharm, 62(5), 530–536.

Wilson, K., & Sullivan, M. (2004). Preventing medication errors with smart infusion technology. Am J Health-Syst Pharm, 61(2), 177–183.


Why is it important to use a graph?

Once you have collected data from observation sessions, it is important to organize the information in such a way that it is easy to interpret. It can be difficult to see patterns by simply looking at long lists of numbers or reading data collection sheets across different days. Graphs can provide quick and easy visual summaries that allow teachers to determine patterns of behavior, evaluate the results of new teaching strategies, and establish whether or not interventions are having the desired effects. This information can then be used to provide students with feedback on their performance.

What type of graph should be used?

There are several different types of graphs that can be used to represent data, including line graphs, bar graphs, pie charts, or scatter plots. The most common type of graph used to evaluate behavioral data is the line graph. A line graph shows individual data points connected by lines, creating a path. Over time, this path can show a visual pattern that helps you evaluate the overall direction of a behavior.

Another common graph used is referred to as a bar graph. A bar graph is often used when portions of a whole are being represented or when reporting a percentage. The bar graph focuses on the height of the data rather than the trend in the data, and is most often used when nonconsecutive data points are being evaluated. This is a particularly useful method when comparing information across individuals, settings, or situations.

Pie charts may be useful when representing portions of a whole. For instance, it might be helpful to create a pie chart indicating the amount of time a student spends actively engaged in activities.

Finally, scatter plots are used when a variety of observations or measures have been taken that are not necessarily collected consecutively. For example, a scatter plot may be used to represent the scores obtained by a class on a standardized achievement test. In this type of graph, each data point is independent. However, depicting the data in this fashion may allow one to see the performance of each person compared to the rest of the group.

Example of a scatter plot showing Mrs. Jones's class grades on a standardized academic achievement test.

What are the important elements of a line graph?

It is important to know the basic elements of a line graph because it is the most common type of graph used to evaluate behavioral data.

The Horizontal Axis (X-Axis) and Vertical Axis (Y-Axis)

Data are presented in a graph within a boundary containing a horizontal line and a vertical line that are referred to as axes. The horizontal axis is called the x-axis, and the vertical axis is referred to as the y-axis. These two axes meet at the bottom left side of the page. The horizontal axis represents the passage of time. The vertical axis represents the numerical property of the behavior being measured. The numbers on both axes are usually divided into equal intervals. The scale of the y-axis can be an important variable when interpreting graphs. If the scale is set too high or too low, the changes in behavior will look much bigger or smaller in appearance, and this might be misleading. In most graphs, the x-axis (representing time) is longer than the y-axis, especially if repeated observations of the behavior have been made.

Points are usually plotted on a graph by placing a mark where the behavior's value (y-axis) and the time of the observation (x-axis) intersect. Each time an observation is conducted, a point can be plotted on the graph. Points are often connected to each other by lines.

Each time there is a change that may have an impact on behavior, a vertical line is drawn beginning on the x-axis, passing between the data points represented on the graph. Data points on either side of the condition line are not connected to each other. A condition change line can denote the move from baseline to intervention or from one intervention to another. Condition lines can also be used to denote other changes that may impact the behavior (e.g., sickness, a change in classroom, a change in teacher or supervisor). However, if the changes are temporary (e.g., presence of a substitute teacher, illness, father gone on a trip), arrows, rather than condition lines, may be used to mark the beginning and end of these temporary factors.

Each condition in a graph must be labeled with a short descriptive phrase or word placed at the top of the graph above the data. This descriptive phrase or word represents a condition (for instance, the baseline or intervention) that is implemented during the time period represented in the graph.
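A minimal matplotlib sketch (ours, with invented data) of the elements just described: time on the x-axis, the measured behavior on the y-axis, a dashed condition change line, data points that are not connected across that line, and a condition label above each phase.

```python
# Line graph with a condition change line between baseline and intervention (hypothetical data).
import matplotlib.pyplot as plt

days = list(range(1, 11))
tantrums_per_day = [7, 8, 6, 7, 8, 4, 3, 3, 2, 2]   # baseline: days 1-5, intervention: days 6-10

# Plot the two conditions separately so points are not connected across the condition line.
plt.plot(days[:5], tantrums_per_day[:5], marker='o', color='black')
plt.plot(days[5:], tantrums_per_day[5:], marker='o', color='black')

plt.axvline(x=5.5, linestyle='--', color='gray')     # condition change line
plt.text(2.5, 9, 'Baseline')                         # condition labels above the data
plt.text(7.5, 9, 'Intervention')

plt.xlabel('School day')
plt.ylabel('Number of tantrums')
plt.ylim(0, 10)
plt.show()
```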

How do you use a graph to inspect the data gathered?

A visual analysis of the data in a line graph helps to answer two types of questions:

  • Are there meaningful changes in the behavior over time?
  • To what extent can that change in behavior be attributed to the teaching strategy or behavioral intervention that was introduced?

Although there are no formal rules for the visual analysis of graphs, there are certain properties that are common to all behavioral data. The properties within and across conditions that are examined visually include variability, level, and trends in the data.

Variability is the extent to which a behavior changes from one data point to the next. If the behavior does not show much variability, it may not be necessary to collect as much data, since the behavior is considered more stable and the chance that it will remain at this level is high. On the other hand, if a behavior shows a lot of variability, additional data should be collected before making any changes. This will allow one to better determine whether or not the changes in behavior are due to the intervention.

The level of a behavior is the increase or decrease in a behavior from the beginning to the end of a condition. The bigger the change in level, the more powerful the effect of the intervention. For instance, the greater the magnitude and direction of change that has occurred from baseline to intervention, the more likely that the intervention is effective. Sometimes a line representing the average of the data points within a condition is drawn on the graph to help show the change in level. This mean line can be useful when the data are somewhat variable. In the figure below, the mean level line for the duration of tantrums shows that there isn't much difference between baseline and treatment, indicating that the treatment may not be very effective.

Trend refers to the direction the data points on a graph are heading. A steep slant upwards shows a strong increasing trend, while a slant downward indicates the behavior is decreasing. Looking at the steepness and direction of the data points can also help you make decisions about the effectiveness of an intervention. Before moving to a new condition, the trend in each phase is evaluated. It is important to make sure that the trend is stable before moving from baseline to intervention or from intervention to a new intervention. For example, if the baseline trend is steadily decreasing or increasing, it is considered to be in the process of changing. If the intervention is begun during an increasing or decreasing trend, it is more difficult to know whether the change in behavior is due to the intervention, since the behavior was in the process of changing prior to the intervention.
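The sketch below (our own illustration, with hypothetical baseline and intervention data) turns these three properties into rough numbers: the range of each condition as a simple stand-in for variability, the condition mean for level, and a fitted slope for trend.

```python
# Rough numeric summary of variability, level, and trend for each condition (hypothetical data).
import numpy as np

baseline = np.array([7, 8, 6, 7, 8])
intervention = np.array([4, 3, 3, 2, 2])

for name, data in [('baseline', baseline), ('intervention', intervention)]:
    variability = data.max() - data.min()                 # range as a rough spread measure
    level = data.mean()                                   # mean level line for the condition
    trend = np.polyfit(np.arange(len(data)), data, 1)[0]  # slope: + increasing, - decreasing
    print(f"{name}: variability={variability}, mean level={level:.1f}, trend={trend:+.2f}")
```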

