2.3. Numerical Solutions of 1st-order ODEs#

# import libraries for numerical functions and plotting
import numpy as np
import matplotlib.pyplot as plt
# these lines are only for helping improve the display
import matplotlib_inline.backend_inline
matplotlib_inline.backend_inline.set_matplotlib_formats('pdf', 'png')
plt.rcParams['figure.dpi']= 300
plt.rcParams['savefig.dpi'] = 300

For numerically solving definite integrals (\(\int_a^b f(x) dx\)) we have methods like the trapezoidal rule and Simpson’s rule. When we need to solve 1st-order ODEs of the form

(2.5)#\[\begin{equation} y^{\prime} = \frac{dy}{dx} = f(x, y) \end{equation}\]

for \(y(x)\), we need other methods. All of these methods work by starting at the initial condition and then using information provided by the ODE to march forward in the solution, based on an increment (i.e., step size) \(\Delta x\).

For example, let’s say we want to solve

(2.6)#\[\begin{equation} \frac{dy}{dx} = 4 x - \frac{2 y}{x} \;, \quad y(1) = 1 \end{equation}\]

This problem is fairly simple, and we can find the general and particular solutions to compare our numerical results against:

(2.7)#\[\begin{align} \text{general: } y(x) &= x^2 + \frac{C}{x^2} \\ \text{particular: } y(x) &= x^2 \end{align}\]
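As a quick check, substituting the particular solution into the ODE confirms that it satisfies both the equation and the initial condition:

\[\begin{equation} \frac{d}{dx} \left( x^2 \right) = 2x = 4x - \frac{2 x^2}{x} \;, \qquad y(1) = 1^2 = 1 \end{equation}\]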

2.3.1. Forward Euler method#

Recall that the derivative, \(y^{\prime}\), is the same as the slope of the solution curve. At the starting point \((x,y) = (1,1)\), the ODE gives \(y^{\prime} = 4(1) - 2(1)/1 = 2\), which looks like:

# exact solution y = x^2 over the domain of interest
x_vals = np.linspace(1, 3)
y_vals = x_vals**2

plt.plot(x_vals, y_vals, label='Solution')
# dashed line through (1, 1) with slope 2
plt.plot([1, 2], [1, 3], '--', label='Slope at start')
plt.grid(True)
plt.legend()
plt.show()

Remember that the slope, or derivative, is

(2.8)#\[\begin{equation} \text{slope} = \frac{\text{rise}}{\text{run}} = \frac{\Delta y}{\Delta x} \end{equation}\]

Let’s consider the initial condition—the starting point—as \((x_i, y_i)\), and the next point in our numerical solution is \((x_{i+1}, y_{i+1})\), where \(i\) represents an index starting at 1 and ending at the total number of points \(N\). Our step size is then \(\Delta x = x_{i+1} - x_i\).

Based on our (very simple) approximation to the first derivative based on slope, we can relate the derivative to our two points:

(2.9)#\[\begin{equation} \left(\frac{dy}{dx}\right)_{i} = \frac{y_{i+1} - y_i}{x_{i+1} - x_i} = \frac{y_{i+1} - y_i}{\Delta x} \end{equation}\]

Then, solve this for our unknown:

(2.10)#\[\begin{equation} y_{i+1} = y_i + \left(\frac{dy}{dx}\right)_i \Delta x \end{equation}\]

This is the Forward Euler method.

Based on a given step size \(\Delta x\), we’ll use this formula (called a recursion formula) to march forward and obtain the full solution over the given \(x\) domain. That will look something like this:

x_exact = np.linspace(1, 3)
y_exact = x_exact**2

plt.plot(x_exact, y_exact, label='Exact solution')

# our derivative function, dy/dx
dydx = lambda x,y: 4*x - (2*y)/x

dx = 0.5
x_vals = np.arange(1, 3.1, dx)
y_vals = np.zeros_like(x_vals)

# set initial condition
y_vals[0] = 1

# iterate
for idx, x_val in enumerate(x_vals[:-1]):
    y_vals[idx+1] = y_vals[idx] + dydx(x_val, y_vals[idx])*dx

plt.plot(
    x_vals, y_vals, 
    marker='.', markersize=15, 
    linestyle='--', label='Numerical solution'
    )

plt.grid(True)
plt.legend()
plt.show()

Another way to obtain the recursion formula for the Forward Euler method is to use a Taylor series expansion. Recall that for well-behaved functions, the Taylor series expansion says

(2.11)#\[\begin{equation} y(x + \Delta x) = y(x) + \Delta x y^{\prime}(x) + \frac{1}{2} \Delta x^2 y^{\prime\prime}(x) + \frac{1}{3!} \Delta x^3 y^{\prime\prime\prime}(x) \dots \;. \end{equation}\]

This is exact for an infinite series. We can apply this formula to our (unknown) solution \(y_i\) and cut off the terms of order \(\Delta x^2\) and higher; the derivative \(y^{\prime}\) is given by our original ODE. This gives us the same recursion formula as above:

(2.12)#\[\begin{equation} \therefore y_{i+1} \approx y_i + \left( \frac{dy}{dx}\right)_i \Delta x \end{equation}\]

where we can now see that we are introducing some error on the order of \(\Delta x^2\) at each step. This is the local truncation error. The global error is the accumulation of error over all the steps, and is on the order of \(\Delta x\). Thus, the Forward Euler method is a first-order method, because its global error is on the order of the step size to the first power: error \(\sim \mathcal{O}(\Delta x)\).

Forward Euler is also an explicit method, because its recursion formula is explicitly defined for \(y_{i+1}\). (You’ll soon see when that may not be the case.)

In general, for an \(n\)th-order method:

(2.13)#\[\begin{align} \text{local error } &\sim \mathcal{O}(\Delta x^{n+1}) \\ \text{global error } &\sim \mathcal{O}(\Delta x^{n}) \end{align}\]

(This only applies for \(\Delta x < 1\); in cases where you have a \(\Delta x > 1\), you should nondimensionalize the problem based on the domain size such that \(0 \leq x \leq 1\).)
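To see this scaling concretely, here is a minimal sketch that reruns the Forward Euler march from the first example (\(dy/dx = 4x - 2y/x\), \(y(1) = 1\), exact solution \(y = x^2\)) at a few step sizes; halving \(\Delta x\) should roughly halve the maximum error.

dydx = lambda x,y: 4*x - (2*y)/x
y_exact = lambda x: x**2

# rerun the Forward Euler march at several step sizes and
# compare against the exact solution y = x^2
for dx in [0.1, 0.05, 0.025]:
    x_vals = np.arange(1, 3 + dx/2, dx)
    y_vals = np.zeros_like(x_vals)
    y_vals[0] = 1  # initial condition y(1) = 1
    for idx in range(len(x_vals) - 1):
        y_vals[idx+1] = y_vals[idx] + dx * dydx(x_vals[idx], y_vals[idx])
    error = np.max(np.abs(y_exact(x_vals) - y_vals))
    print(f'dx = {dx:5.3f}, max error = {error:.4f}')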

Applying the Forward Euler method then requires:

  1. Have a given first-order ODE: \(\frac{dy}{dx} = y^{\prime} = f(x,y)\). Complex and/or nonlinear problems are fine!

  2. Specify the step size \(\Delta x\) (or \(\Delta t\)).

  3. Specify the domain over which to integrate: \(x_1 \leq x \leq x_n\)

  4. Specify the initial condition: \(y(x=x_1) = y_1\)
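Collecting these four ingredients, a minimal reusable sketch might look like the following (forward_euler is a hypothetical helper name, not a library function):

def forward_euler(f, x_start, x_end, dx, y0):
    # march the Forward Euler recursion formula over x_start <= x <= x_end,
    # where f(x, y) is the derivative function, dx the step size,
    # and y0 the initial condition y(x_start)
    x_vals = np.arange(x_start, x_end + dx/2, dx)
    y_vals = np.zeros_like(x_vals)
    y_vals[0] = y0
    for idx in range(len(x_vals) - 1):
        y_vals[idx+1] = y_vals[idx] + dx * f(x_vals[idx], y_vals[idx])
    return x_vals, y_vals

# usage with the first example: dy/dx = 4x - 2y/x, y(1) = 1
x_vals, y_vals = forward_euler(lambda x,y: 4*x - 2*y/x, 1, 3, 0.5, 1)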

2.3.1.1. Example#

Let’s do another example:

(2.14)#\[\begin{equation} y^{\prime} = 8 e^{-x}(1+x) - 2y \end{equation}\]

with the initial condition \(y(0) = 1\), and the domain \(0 \leq x \leq 7\). This is a linear 1st-order ODE, so we can find the analytical solution for comparison:

(2.15)#\[\begin{equation} y(x) = e^{-2x} (8 x e^x + 1) \end{equation}\]

To solve, we’ll create an anonymous function for the derivative and then incorporate that into our Forward Euler code. We’ll start with \(\Delta x = 0.2\).

dydx = lambda x,y: 8*np.exp(-x)*(1 + x) - 2*y

dx = 0.2
x_vals = np.arange(0, 7.1, dx)
y_vals = np.zeros_like(x_vals)

# initial condition
y_vals[0] = 1

# forward Euler loop
for idx, x_val in enumerate(x_vals[:-1]):
    y_vals[idx+1] = y_vals[idx] + dx * dydx(x_val, y_vals[idx])

x_exact = np.linspace(0, 7)
y_exact = np.exp(-2*x_exact)*(8*x_exact*np.exp(x_exact) + 1)

plt.plot(x_exact, y_exact, label='Exact solution')
plt.plot(x_vals, y_vals, 'o--', label='Forward Euler solution')
plt.legend()
plt.grid(True)
plt.show()

Notice the visible error in that plot, which is between 0.2 and 0.25, or in other words \(\mathcal{O}(\Delta x)\).

How can we reduce the error? Just like with the trapezoidal rule, we have two main options:

  • Reduce the step size \(\Delta x\)

  • Choose a higher-order (i.e., more accurate) method

The downside to reducing \(\Delta x\) is the increased number of steps we then have to take, which may make the solution too computationally expensive. A more accurate method would have less error per step, which might allow us to use the same \(\Delta x\) but get a better solution. Let’s next consider some better methods.

2.3.2. Heun’s method#

Heun’s method is a predictor-corrector method; these work by predicting a solution at some intermediate location and then using that information to get a better overall answer at the next location (correcting). Heun’s method uses the Forward Euler method to predict the solution at \(x_{i+1}\), then uses the average of the slopes at \(y_i\) and the predicted \(y_{i+1}\) to get a better overall answer for \(y_{i+1}\).

(2.16)#\[\begin{align} \text{predictor: } y_{i+1}^p &= y_i + \Delta x f(x_i, y_i) \\ \text{corrector: } y_{i+1} &= y_i + \frac{\Delta x}{2} \left( f(x_i, y_i) + f(x_{i+1}, y_{i+1}^p) \right) \end{align}\]

Heun’s method is explicit and second-order accurate, meaning the global error is \(\mathcal{O}(\Delta x^2)\).

Let’s see this method in action:

dydx = lambda x,y: 8*np.exp(-x)*(1 + x) - 2*y

dx = 0.2
x_vals = np.arange(0, 7.1, dx)
y_vals = np.zeros_like(x_vals)

# initial condition
y_vals[0] = 1

# Heun's method loop
for idx, x_val in enumerate(x_vals[:-1]):
    y_p = y_vals[idx] + dx * dydx(x_val, y_vals[idx])
    y_vals[idx+1] = y_vals[idx] + (dx/2)*(
        dydx(x_val, y_vals[idx]) + 
        dydx(x_vals[idx+1], y_p)
        )

x_exact = np.linspace(0, 7)
y_exact = np.exp(-2*x_exact)*(8*x_exact*np.exp(x_exact) + 1)

plt.plot(x_exact, y_exact, label='Exact solution')
plt.plot(x_vals, y_vals, 'o--', label="Heun's method solution")
plt.legend()
plt.grid(True)
plt.show()

# compare the peak values of the exact and numerical solutions
# as an estimate of the maximum error
print(f'Maximum error: {np.abs(np.max(y_exact) - np.max(y_vals)): .4f}')
Maximum error:  0.0531

Notice how the error is visibly smaller than for the Forward Euler method; the maximum error is around 0.05, which is very close to \(\Delta x^2 = 0.04\).

2.3.3. Midpoint method#

The midpoint method, also known as the modified Euler method, is another predictor-corrector method that instead predicts the solution at the midpoint (\(x + \Delta x/2\)):

(2.17)#\[\begin{align} y_{i + \frac{1}{2}} &= y_i + \frac{\Delta x}{2} f(x_i, y_i) \\ y_{i+1} &= y_i + \Delta x f \left( x_{i+\frac{1}{2}} , y_{i + \frac{1}{2}} \right) \end{align}\]

Like Heun’s method, the midpoint method is explicit and second-order accurate:

dydx = lambda x,y: 8*np.exp(-x)*(1 + x) - 2*y

dx = 0.2
x_vals = np.arange(0, 7.1, dx)
y_vals = np.zeros_like(x_vals)

# initial condition
y_vals[0] = 1

# midpoint method loop
for idx, x_val in enumerate(x_vals[:-1]):
    y_half = y_vals[idx] + 0.5 * dx * dydx(x_val, y_vals[idx])
    y_vals[idx+1] = y_vals[idx] + dx*dydx(x_val + dx/2, y_half)

x_exact = np.linspace(0, 7)
y_exact = np.exp(-2*x_exact)*(8*x_exact*np.exp(x_exact) + 1)

plt.plot(x_exact, y_exact, label='Exact solution')
plt.plot(x_vals, y_vals, 'o--', label="Midpoint method solution")
plt.legend()
plt.grid(True)
plt.show()

# compare the peak values of the exact and numerical solutions
# as an estimate of the maximum error
print(f'Maximum error: {np.abs(np.max(y_exact) - np.max(y_vals)): .4f}')
Maximum error:  0.0481

2.3.4. Fourth-order Runge–Kutta method#

Runge–Kutta methods are a family of methods that use one or more stages; the methods we have discussed so far (Forward Euler, Heun’s, and midpoint) all fall into this family. A particularly popular member is the fourth-order Runge–Kutta method (RK4), which uses four stages to get a more accurate solution:

(2.18)#\[\begin{align} y_{i+1} &= y_i + \frac{\Delta x}{6} (k_1 + 2 k_2 + 2 k_3 + k_4) \\ k_1 &= f(x_i, y_i) \\ k_2 &= f \left( x_i + \frac{\Delta x}{2}, y_i + \frac{\Delta x}{2} k_1 \right) \\ k_3 &= f \left( x_i + \frac{\Delta x}{2}, y_i + \frac{\Delta x}{2} k_2 \right) \\ k_4 &= f \left( x_i + \Delta x, y_i + \Delta x \, k_3 \right) \end{align}\]

This method is explicit and fourth-order accurate: error \(\sim \mathcal{O}(\Delta x^4)\):

dydx = lambda x,y: 8*np.exp(-x)*(1 + x) - 2*y

dx = 0.2
x_vals = np.arange(0, 7.1, dx)
y_vals = np.zeros_like(x_vals)

# initial condition
y_vals[0] = 1

# 4th order Runge-Kutta iteration
for idx, x_val in enumerate(x_vals[:-1]):
    k1 = dydx(x_val, y_vals[idx])
    k2 = dydx(x_val + dx/2, y_vals[idx] + dx*k1/2)
    k3 = dydx(x_val + dx/2, y_vals[idx] + dx*k2/2)
    k4 = dydx(x_val + dx, y_vals[idx] + dx*k3)
    y_vals[idx+1] = y_vals[idx] + dx*(
        k1 + 2*k2 + 2*k3 + k4
        ) / 6

x_exact = np.linspace(0, 7)
y_exact = lambda x: np.exp(-2*x)*(8*x*np.exp(x) + 1)

plt.plot(x_exact, y_exact(x_exact), label='Exact solution')
plt.plot(x_vals, y_vals, 'o--', label="Runge-Kutta solution")
plt.legend()
plt.grid(True)
plt.show()

print(f'Maximum error: {np.max(np.abs(y_exact(x_vals)-y_vals)): .5f}')
Maximum error:  0.00038

The maximum error (0.0004) is actually a bit smaller than \(\Delta x^4 = 0.0016\), but approximately the same order of magnitude.
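As a quick sanity check on that scaling (a sketch reusing the dydx and y_exact definitions above), halving the step size should shrink the maximum error by roughly a factor of \(2^4 = 16\):

# rerun the RK4 loop at two step sizes; error should scale ~ O(dx^4)
for dx in [0.2, 0.1]:
    x_vals = np.arange(0, 7 + dx/2, dx)
    y_vals = np.zeros_like(x_vals)
    y_vals[0] = 1  # initial condition y(0) = 1
    for idx in range(len(x_vals) - 1):
        k1 = dydx(x_vals[idx], y_vals[idx])
        k2 = dydx(x_vals[idx] + dx/2, y_vals[idx] + dx*k1/2)
        k3 = dydx(x_vals[idx] + dx/2, y_vals[idx] + dx*k2/2)
        k4 = dydx(x_vals[idx] + dx, y_vals[idx] + dx*k3)
        y_vals[idx+1] = y_vals[idx] + dx*(k1 + 2*k2 + 2*k3 + k4)/6
    error = np.max(np.abs(y_exact(x_vals) - y_vals))
    print(f'dx = {dx}, max error = {error:.6f}')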

SciPy also offers built-in Runge–Kutta integrators via the solve_ivp function, part of the scipy.integrate module. The default method is RK45, an adaptive Runge–Kutta method that pairs fourth- and fifth-order formulas to estimate the error and adjust the step size; it can also be requested explicitly with the method='RK45' argument.

You can call this function to perform the integration for you with the syntax:

sol = solve_ivp(func, t_span, y0)

where func is the function that evaluates the derivative, t_span provides the domain of integration (\(x_{\text{start}} \leq x \leq x_{\text{end}}\)), and y0 provides the initial condition \(y(x=x_{\text{start}})\), given as an array since solve_ivp also supports systems of ODEs.

The return object, sol, contains both the integrated results and other information about the integration process. The field sol.t contains the automatically generated “\(x\)” points (though called t, since time is most commonly the independent variable in ODE problems). The field sol.y contains the integrated \(y\) values.

For example, let’s use this and compare with our exact solution:

from scipy.integrate import solve_ivp

# derivative function to be integrated
dydx = lambda x,y: 8*np.exp(-x)*(1 + x) - 2*y

sol = solve_ivp(dydx, [0, 7], [1])

x_exact = np.linspace(0, 7)
y_exact = lambda x: np.exp(-2*x)*(8*x*np.exp(x) + 1)

plt.plot(x_exact, y_exact(x_exact), label='Exact solution')
plt.plot(sol.t, sol.y[0,:], 'o--', label='RK45 solution')
plt.legend()
plt.grid(True)
plt.show()

print(f'Maximum error: {np.max(np.abs(y_exact(sol.t) - sol.y[0,:])): .5f}')
Maximum error:  0.00052

You can see that the maximum error is slightly higher than what we saw with our own implementation. This is because solve_ivp used larger step sizes, which its default error tolerances allow.
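One way to reduce this error is to tighten the solver’s relative and absolute error tolerances through the rtol and atol arguments (their defaults are 1e-3 and 1e-6); this is a sketch, and the values below are just illustrative choices:

# tighter tolerances make solve_ivp choose smaller steps automatically
sol = solve_ivp(dydx, [0, 7], [1], rtol=1e-6, atol=1e-9)
print(f'Maximum error: {np.max(np.abs(y_exact(sol.t) - sol.y[0,:])): .6f}')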

We can also specify the maximum step size to match what we used in the first Runge–Kutta example:

sol = solve_ivp(dydx, [0, 7], [1], max_step=0.2)

plt.plot(x_exact, y_exact(x_exact), label='Exact solution')
plt.plot(sol.t, sol.y[0,:], 'o--', label='RK45 solution')
plt.legend()
plt.grid(True)
plt.show()

print(
    'Maximum error: '
    f'{np.max(np.abs(y_exact(sol.t) - sol.y[0,:])): .6f}'
    )
Maximum error:  0.000006