2.4. Stability and Stiffness#

In the past when you’ve talked about stability, it has likely been regarding the stability of a system. Stable systems are those with well-behaved exact solutions, meaning the solutions do not grow without bound. In engineering we mostly focus on (and want!) stable systems, although there are some interesting unstable systems, such as those involving resonance, nonlinear dynamics, or chaos; generally we want to know when instability occurs so we can prevent it.

We can also define the stability of a numerical scheme: a scheme is unstable when the numerical solution exhibits unphysical behavior. In other words, it blows up.

For example, let’s consider the relatively simple 1st-order ODE

(2.19)#\[\begin{equation} \frac{dy}{dt} = -3 y \end{equation}\]

with the initial condition \(y(0) = 1\). As we will see, this ODE can cause explicit numerical schemes to become unstable, and thus it is a stiff ODE. (Note that we can easily obtain the exact solution for this problem, which is \(y(t) = e^{-3 t}\).)

Let’s try solving this with the Forward Euler method, integrating over \(0 \leq t \leq 20\), for a range of time-step size values: \(\Delta t = 0.1, 0.25, 0.5, 0.75\):

# import libraries for numerical functions and plotting
import numpy as np
import matplotlib.pyplot as plt
# these lines are only for helping improve the display
import matplotlib_inline.backend_inline
matplotlib_inline.backend_inline.set_matplotlib_formats('pdf', 'png')
plt.rcParams['figure.dpi']= 300
plt.rcParams['savefig.dpi'] = 300
# dydt to integrate
f = lambda t,y: -3*y

# we'll create a simple function to do forward Euler
def forward_euler(t_end, y0, dt):
    '''Simple function to perform Forward Euler iteration'''
    time = np.arange(0, t_end+0.0001, dt)
    y = np.zeros_like(time)
    y[0] = y0
    for idx, t in enumerate(time[:-1]):
        y[idx+1] = y[idx] + dt * f(t, y[idx])
    return time, y

# 4 rows, 1 column
fig, axes = plt.subplots(4, 1)

# initial condition
y0 = 1
t_end = 20

for ax, dt in zip(axes, [0.1, 0.25, 0.5, 0.75]):
    time, y = forward_euler(t_end, y0, dt)
    ax.plot(time, y)
    ax.grid(True)
    ax.set_title(f'dt = {dt:0.2f}')

plt.tight_layout()
plt.show()
[Figure: Forward Euler solutions of \(dy/dt = -3y\) for \(\Delta t = 0.1, 0.25, 0.5, 0.75\)]

At the smaller step sizes, \(\Delta t = 0.1\) and \(\Delta t = 0.25\), we see that the solution is well-behaved. But, when we increase \(\Delta t\) to 0.5, the solution oscillates, although the oscillations decay with time. Then, when we increase \(\Delta t\) to 0.75, the solution eventually blows up, leading to error much larger than what we should expect based on the method’s order of accuracy (first) and the step-size value.
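We can quantify this blow-up by comparing the numerical result against the exact solution \(y(t) = e^{-3t}\) at the final time. A quick check, reusing the same derivative function and Forward Euler loop defined above:

```python
import numpy as np

# derivative function and Forward Euler iteration, as defined above
f = lambda t, y: -3*y

def forward_euler(t_end, y0, dt):
    '''Simple function to perform Forward Euler iteration'''
    time = np.arange(0, t_end + 0.0001, dt)
    y = np.zeros_like(time)
    y[0] = y0
    for idx, t in enumerate(time[:-1]):
        y[idx+1] = y[idx] + dt * f(t, y[idx])
    return time, y

# final-time error versus the exact solution y(t) = exp(-3t)
errors = {}
for dt in [0.1, 0.25, 0.5, 0.75]:
    time, y = forward_euler(20, 1, dt)
    errors[dt] = abs(y[-1] - np.exp(-3*time[-1]))
    print(f'dt = {dt:0.2f}: final error = {errors[dt]:.2e}')
```

The error stays tiny for the three smaller step sizes, but is enormous for \(\Delta t = 0.75\), far beyond anything a first-order error estimate would predict.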

Compare this behavior to that for the ODE

(2.20)#\[\begin{equation} \frac{dy}{dt} = e^{-t} \end{equation}\]

which is non-stiff:

# dydt to integrate
f = lambda t,y: np.exp(-t)

# 4 rows, 1 column
fig, axes = plt.subplots(4, 1)

# initial condition
y0 = 1
t_end = 10

for ax, dt in zip(axes, [0.1, 0.25, 0.5, 0.75]):
    time, y = forward_euler(t_end, y0, dt)
    ax.plot(time, y)
    ax.grid(True)
    ax.set_title(f'dt = {dt:0.2f}')

plt.tight_layout()
plt.show()
[Figure: Forward Euler solutions of \(dy/dt = e^{-t}\) for \(\Delta t = 0.1, 0.25, 0.5, 0.75\)]

In this case, we see that the solution remains well-behaved even for larger time-step sizes, and the error matches the expected order based on the method and step-size value.
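We can confirm this by measuring the error against the exact solution, which for \(y(0) = 1\) is \(y(t) = 2 - e^{-t}\): halving the step size roughly halves the maximum error, as expected for a first-order method. A quick sketch, reusing the same `forward_euler` function:

```python
import numpy as np

# derivative function for the non-stiff ODE
f = lambda t, y: np.exp(-t)

def forward_euler(t_end, y0, dt):
    '''Simple function to perform Forward Euler iteration'''
    time = np.arange(0, t_end + 0.0001, dt)
    y = np.zeros_like(time)
    y[0] = y0
    for idx, t in enumerate(time[:-1]):
        y[idx+1] = y[idx] + dt * f(t, y[idx])
    return time, y

# maximum error versus the exact solution y(t) = 2 - exp(-t)
errors = {}
for dt in [0.25, 0.5, 0.75]:
    time, y = forward_euler(10, 1, dt)
    errors[dt] = np.max(np.abs(y - (2 - np.exp(-time))))
    print(f'dt = {dt:0.2f}: max error = {errors[dt]:.4f}')
```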

In general, numerical schemes can be:

  • unstable: the scheme blows up for any choice of parameters

  • conditionally stable: the scheme is stable for a particular choice of parameters (for example, when \(\Delta t\) is less than some threshold)

  • unconditionally stable: the scheme is always stable

A scheme may be stable for one problem or system and unstable for another.

Stability is related to the robustness of a method, and greater robustness generally comes at the price of complexity and computational cost. The choice of method and solution strategy depends on what you want, and how long you can wait for it. In general, we almost always want to use the largest allowable \(\Delta t\).

Rather than reducing \(\Delta t\) to avoid stability problems, we can also use a method that is unconditionally stable, such as the Backward Euler method.

2.4.1. Backward Euler method#

The Backward Euler method is very similar to the Forward Euler method, except in one way: it uses the slope at the next time step:

(2.21)#\[\begin{equation} \left(\frac{dy}{dx}\right)_{i+1} \approx \frac{y_{i+1} - y_i}{\Delta x} \end{equation}\]

Then, the resulting recursion formula is

(2.22)#\[\begin{equation} y_{i+1} = y_i + \Delta x \left(\frac{dy}{dx}\right)_{i+1}, \text{ or} \\ y_{i+1} = y_i + \Delta x \, f(x_{i+1}, y_{i+1}) \end{equation}\]

where \(f(x,y) = dy/dx\).

Notice that this recursion formula cannot be evaluated directly, because \(y_{i+1}\) shows up on both sides. This is an implicit method, whereas all the other methods we have covered (Forward Euler, Heun’s, Midpoint, and 4th-order Runge-Kutta) are explicit. Implicit methods require more work to actually implement.
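For a general \(f\), we usually cannot isolate \(y_{i+1}\) by hand, so each step requires solving an algebraic equation numerically. Here is a minimal sketch of a single Backward Euler step using fixed-point iteration; the function name and tolerance are illustrative, and note that fixed-point iteration only converges when \(|\Delta x \, \partial f/\partial y| < 1\), so very stiff problems typically use Newton's method instead:

```python
def backward_euler_step(f, x_next, y_current, dx, tol=1e-12, max_iter=100):
    '''Solve y_next = y_current + dx * f(x_next, y_next) for y_next
    by fixed-point iteration (illustrative sketch).'''
    y_next = y_current  # initial guess: the current value
    for _ in range(max_iter):
        y_new = y_current + dx * f(x_next, y_next)
        if abs(y_new - y_next) < tol:
            break
        y_next = y_new
    return y_new

# check against dy/dt = -3y, where the step can be solved exactly:
# y_next = y_current / (1 + 3*dt)
f = lambda t, y: -3*y
dt = 0.1
y1 = backward_euler_step(f, dt, 1.0, dt)
```

The iterated result agrees with the closed-form update \(y_i / (1 + 3 \Delta t)\) to within the tolerance.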

2.4.1.1. Backward Euler example#

For example, consider the problem

(2.23)#\[\begin{equation} \frac{dy}{dx} = f(x,y) = 8 e^{-x} (1+x) - 2y \end{equation}\]

To actually solve this problem with the Backward Euler method, we need to incorporate the derivative function \(f(x,y)\) into the recursion formula and solve for \(y_{i+1}\):

(2.24)#\[\begin{align} y_{i+1} &= y_i + \Delta x \, f(x_{i+1}, y_{i+1}) \\ y_{i+1} &= y_i + \Delta x \left[ 8 e^{-x_{i+1}} (1 + x_{i+1}) - 2 y_{i+1} \right] \\ y_{i+1} &= y_i + 8 e^{-x_{i+1}} (1 + x_{i+1}) \Delta x - 2 \Delta x \, y_{i+1} \\ y_{i+1} + 2 \Delta x \, y_{i+1} &= y_i + 8 e^{-x_{i+1}} (1 + x_{i+1}) \Delta x \\ y_{i+1} &= \frac{ y_i + 8 e^{-x_{i+1}} (1 + x_{i+1}) \Delta x }{ 1 + 2 \Delta x } \end{align}\]

Now we have a usable recursion formula for solving this problem. Let’s use the initial condition \(y(0) = 1\), the domain \(0 \leq x \leq 7\), and \(\Delta x = 0.2\).

dx = 0.2

x_vals = np.arange(0, 7.01, dx)
y_vals = np.zeros_like(x_vals)
y_vals[0] = 1

# Backward Euler loop
for idx, x in enumerate(x_vals[:-1]):
    y_vals[idx+1] = (
        y_vals[idx] + 
        8*np.exp(-x_vals[idx+1])*(1 + x_vals[idx+1])*dx
        ) / (1 + 2*dx)

x_exact = np.linspace(0, 7)
y_exact = np.exp(-2*x_exact)*(8*x_exact*np.exp(x_exact) + 1)

plt.plot(x_exact, y_exact, label='Exact solution')
plt.plot(x_vals, y_vals, 'o--', label='Backward Euler solution')
plt.legend()
plt.grid(True)
plt.show()
[Figure: exact solution vs. Backward Euler solution]

This nearly matches what we saw with the Forward Euler method before; Backward Euler is also a first-order method, so the global error should be proportional to \(\Delta x\).
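We can check the first-order behavior by halving \(\Delta x\) and watching the maximum error roughly halve as well. A quick sketch using the recursion formula derived above (the function name here is just for illustration):

```python
import numpy as np

def backward_euler_solve(dx, x_end=7):
    '''Backward Euler for dy/dx = 8 exp(-x)(1 + x) - 2y, y(0) = 1,
    using the closed-form recursion formula derived above.'''
    x = np.arange(0, x_end + 1e-6, dx)
    y = np.zeros_like(x)
    y[0] = 1
    for idx in range(len(x) - 1):
        y[idx+1] = (
            y[idx] + 8*np.exp(-x[idx+1])*(1 + x[idx+1])*dx
            ) / (1 + 2*dx)
    return x, y

# maximum error versus the exact solution for a sequence of halved steps
errors = {}
for dx in [0.2, 0.1, 0.05]:
    x, y = backward_euler_solve(dx)
    y_exact = np.exp(-2*x)*(8*x*np.exp(x) + 1)
    errors[dx] = np.max(np.abs(y - y_exact))
    print(f'dx = {dx:0.2f}: max error = {errors[dx]:.4f}')
```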

Let’s now return to the stiff ODE \(y^{\prime} = -3 y\) and see how the Backward Euler method does. First, we need to obtain our usable recursion formula:

(2.25)#\[\begin{align} y_{i+1} &= y_i + \Delta t \, f(t_{i+1}, y_{i+1}) \\ y_{i+1} &= y_i + \Delta t \, \left( -3 y_{i+1} \right) \\ y_{i+1} + 3 y_{i+1} \Delta t &= y_i \\ y_{i+1} &= \frac{y_i}{1 + 3 \Delta t} \end{align}\]
# 4 rows, 1 column
fig, axes = plt.subplots(4, 1)
y0 = 1

for ax, dt in zip(axes, [0.1, 0.25, 0.5, 0.75]):
    time = np.arange(0, 10.001, dt)
    y = np.zeros_like(time)
    y[0] = y0
    # Backward Euler recursion: y_{i+1} = y_i / (1 + 3*dt)
    for idx in range(len(time) - 1):
        y[idx+1] = y[idx] / (1 + 3*dt)
    ax.plot(time, y)
    ax.grid(True)
    ax.set_title(f'dt = {dt:0.2f}')

plt.tight_layout()
plt.show()
[Figure: Backward Euler solutions of \(dy/dt = -3y\) for \(\Delta t = 0.1, 0.25, 0.5, 0.75\)]

In this case, we see that the solution remains well-behaved for all the step sizes, not showing any of the instability we saw with the Forward Euler method. This is because the Backward Euler method is unconditionally stable.
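In fact, we can push this much further. Even an absurdly large step size like \(\Delta t = 5\) still produces a decaying (though very inaccurate) solution, since each step multiplies the solution by \(1/(1 + 3 \Delta t) = 1/16\):

```python
import numpy as np

# Backward Euler on dy/dt = -3y with a very large step size
dt = 5.0
time = np.arange(0, 50.001, dt)
y = np.zeros_like(time)
y[0] = 1
for idx in range(len(time) - 1):
    y[idx+1] = y[idx] / (1 + 3*dt)  # each step shrinks y by 1/16
```

The solution decays monotonically toward zero, just like the exact solution, even though the step is far too coarse to resolve the transient accurately.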

2.4.2. Stability analysis#

2.4.2.1. Stability analysis of Forward Euler#

We can perform a stability analysis of the stiff problem to identify when the Forward Euler method becomes unstable. Let’s apply the method to the ODE at hand:

(2.26)#\[\begin{align} \frac{dy}{dt} &= -3 y \\ y_{i+1} &= y_i + \Delta t f(t_i, y_i) \\ y_{i+1} &= y_i + \Delta t (-3 y_i) \\ &= y_i (1 - 3 \Delta t) \\ \frac{y_{i+1}}{y_i} &= \sigma = 1 - 3 \Delta t \end{align}\]

where \(\sigma\) is the amplification factor. This determines whether the solution grows or decays with each step; for a stable physical system, we expect the solution to get smaller or remain constant with each step.

Therefore, for the method to remain stable, we must have \(| \sigma | \leq 1\). We can use this stability criterion to find conditions on \(\Delta t\) for stability:

(2.27)#\[\begin{gather} | \sigma | = | 1 - 3 \Delta t | \leq 1 \\ -1 \leq 1 - 3 \Delta t \leq 1 \\ -1 \leq 1 - 3 \Delta t \quad \text{and} \quad 1 - 3 \Delta t \leq 1 \\ \frac{-2}{3} \leq -\Delta t \quad \quad -\Delta t \leq 0 \\ \rightarrow \Delta t \leq \frac{2}{3} \quad \text{and} \quad \Delta t \geq 0 \\ \therefore 0 \leq \Delta t \leq \frac{2}{3} \end{gather}\]

for stability. (In practice, we might use \(\Delta t \leq 1/2\) to stay safely away from the absolute stability limit.)

The Forward Euler method is then conditionally stable.
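This criterion matches what we observed earlier: evaluating \(\sigma = 1 - 3 \Delta t\) for the four step sizes shows why \(\Delta t = 0.5\) produced a decaying oscillation (\(\sigma\) negative but \(|\sigma| < 1\)) and \(\Delta t = 0.75\) blew up (\(|\sigma| > 1\)):

```python
# amplification factor for Forward Euler applied to dy/dt = -3y
sigmas = {dt: 1 - 3*dt for dt in [0.1, 0.25, 0.5, 0.75]}
for dt, sigma in sigmas.items():
    print(f'dt = {dt:0.2f}: sigma = {sigma:5.2f}, stable: {abs(sigma) <= 1}')
```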

As a general rule of thumb, all explicit methods are conditionally stable; these are methods where the recursion formula for \(y_{i+1}\) can be written and calculated explicitly in terms of known quantities.

2.4.2.2. Stability analysis of Backward Euler#

We can also perform a stability analysis on the Backward Euler method to show that its stability does not depend on the step size:

(2.28)#\[\begin{align} \frac{dy}{dt} &= -3 y \\ y_{i+1} &= y_i + \Delta t f(t_{i+1}, y_{i+1}) \\ y_{i+1} &= y_i + \Delta t (-3 y_{i+1}) \\ y_{i+1} (1 + 3 \Delta t) &= y_i \\ \sigma &= \frac{y_{i+1}}{y_i} = \frac{1}{1 + 3 \Delta t} \end{align}\]

For stability, we need \(| \sigma | \leq 1\):

(2.29)#\[\begin{align} | \sigma | &= \left| \frac{1}{1 + 3 \Delta t} \right| \leq 1 \\ \rightarrow \Delta t &> 0 \end{align}\]

Therefore the Backward Euler method is unconditionally stable.
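A quick numerical check confirms this: \(\sigma = 1/(1 + 3 \Delta t)\) stays between 0 and 1 no matter how large the step size gets (the sample step sizes here are arbitrary):

```python
# Backward Euler amplification factor for dy/dt = -3y
sigmas = {dt: 1/(1 + 3*dt) for dt in [0.1, 0.75, 5.0, 100.0]}
for dt, sigma in sigmas.items():
    print(f'dt = {dt:6.2f}: sigma = {sigma:.4f}')
```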