5.1. Elliptic PDEs#
# import libraries for numerical functions and plotting
import numpy as np
import matplotlib.pyplot as plt
# these lines are only for helping improve the display
import matplotlib_inline.backend_inline
matplotlib_inline.backend_inline.set_matplotlib_formats('pdf', 'png')
plt.rcParams['figure.dpi'] = 300
plt.rcParams['savefig.dpi'] = 300
The classic example of an elliptic PDE is Laplace’s equation (yep, the same Laplace that gave us the Laplace transform), which in two dimensions for a variable \(u(x,y)\) is

$$\nabla^2 u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0 \;,$$
where \(\nabla\) is del, or nabla, and represents the gradient operator: \(\nabla = \hat{\mathbf{i}} \frac{\partial}{\partial x} + \hat{\mathbf{j}} \frac{\partial}{\partial y}\), so the Laplacian is \(\nabla^2 = \nabla \cdot \nabla\).
Laplace’s equation shows up in a number of physical problems, including heat transfer, fluid dynamics, and electrostatics. For example, the heat equation for conduction in two dimensions is

$$\frac{\partial u}{\partial t} = \alpha \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right) \;,$$
where \(u(x,y,t)\) is temperature and \(\alpha\) is thermal diffusivity. Steady-state heat transfer (meaning after any initial transient period) is then described by Laplace’s equation.
A related elliptic PDE is Poisson’s equation:

$$\nabla^2 u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = f(x,y) \;,$$
which also appears in multiple physical problems—most notably, when solving for pressure in the Navier–Stokes equations.
To numerically solve these equations, and any elliptic PDE, we can use finite differences, where we replace the continuous \(x,y\) domain with a discrete grid of points. This is similar to what we did with boundary-value problems in one dimension—but now we have two dimensions.
To approximate the second derivatives in Laplace’s equation, we can use central differences in both the \(x\) and \(y\) directions, applied around the \(u_{i,j}\) point:

$$\frac{\partial^2 u}{\partial x^2} \approx \frac{u_{i-1,j} - 2 u_{i,j} + u_{i+1,j}}{\Delta x^2} \quad \text{and} \quad \frac{\partial^2 u}{\partial y^2} \approx \frac{u_{i,j-1} - 2 u_{i,j} + u_{i,j+1}}{\Delta y^2} \;,$$
where \(i\) is the index used in the \(x\) direction, \(j\) is the index in the \(y\) direction, and \(\Delta x\) and \(\Delta y\) are the step sizes in the \(x\) and \(y\) directions. In other words, with the indices starting at zero, \(x_i = i \, \Delta x\) and \(y_j = j \, \Delta y\).
The following figure shows the points necessary to approximate the partial derivatives in the PDE at a location \((x_i, y_j)\), for a general 2D region. This is known as a five-point stencil:
Applying these finite differences gives us an approximation for Laplace’s equation:

$$\frac{u_{i-1,j} - 2 u_{i,j} + u_{i+1,j}}{\Delta x^2} + \frac{u_{i,j-1} - 2 u_{i,j} + u_{i,j+1}}{\Delta y^2} = 0 \;.$$
If we use a uniform grid where \(\Delta x = \Delta y = h\), then we can simplify to

$$u_{i-1,j} + u_{i+1,j} + u_{i,j-1} + u_{i,j+1} - 4 u_{i,j} = 0 \;.$$
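Before moving on, here’s a quick sanity check (our own, not part of the original notes): the function \(u(x,y) = x^2 - y^2\) satisfies Laplace’s equation exactly, so the five-point stencil applied to it should evaluate to (nearly) zero at any point:

# sanity check: u = x^2 - y^2 is an exact solution of Laplace's equation,
# so the five-point stencil should return ~0 at any grid point
h = 0.1
x, y = 0.5, 0.3  # an arbitrary interior point

def u_exact(x, y):
    '''A harmonic function, satisfying Laplace's equation exactly.'''
    return x**2 - y**2

stencil = (
    u_exact(x-h, y) + u_exact(x+h, y) +
    u_exact(x, y-h) + u_exact(x, y+h) -
    4*u_exact(x, y)
    )
print(stencil)  # ~0, up to roundoff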
5.1.1. Example: heat transfer in a square plate#
As an example, let’s consider the problem of steady-state heat transfer in a square solid object. If \(u(x,y)\) is temperature, then this is described by Laplace’s equation:

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0 \;,$$
and we can solve this using finite differences. Using a uniform grid where \(\Delta x = \Delta y = h\), Laplace’s equation gives us a recursion formula that relates the values at neighboring points:

$$u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4 u_{i,j} = 0 \;.$$
Consider a case where the square has sides of length \(L\), and the boundary conditions are that the temperature is fixed at 100 on the left, right, and bottom sides, and fixed at 0 on the top. For now, we’ll use two segments to discretize the domain in each direction, giving us nine total points in the grid. The following figures show the example problem, and the grid of points we’ll use.
Using the above recursion formula, we can write an equation for each of the nine points; the eight boundary points get trivial equations that fix their values, while the recursion formula applies at the single interior point:

$$\begin{align*}
u_{0,0} &= 100 \;, & u_{1,0} &= 100 \;, & u_{2,0} &= 100 \;, \\
u_{0,1} &= 100 \;, & u_{1,0} + u_{0,1} + u_{2,1} + u_{1,2} - 4 u_{1,1} &= 0 \;, & u_{2,1} &= 100 \;, \\
u_{0,2} &= 100 \;, & u_{1,2} &= 0 \;, & u_{2,2} &= 100 \;,
\end{align*}$$
where \(u_{i,j}\) are the unknowns. Note that here we used the side boundary-condition values for the corner points \(u_{0,2}\) and \(u_{2,2}\), rather than the top value. (In reality this would represent a discontinuity in temperature, so these aren’t very realistic boundary conditions.)
This is a system of linear equations that we can represent as a matrix-vector product:

$$A \mathbf{u} = \mathbf{b} \;,$$
where \(A\) is a \(9\times 9\) coefficient matrix, \(\mathbf{u}\) is a nine-element vector of unknown variables, and \(\mathbf{b}\) is a nine-element right-hand side vector. For \(\mathbf{u}\), we had to take variables that physically represent points in a two-dimensional space and combine them in some order to form a one-dimensional column vector. Here, we used a row-major mapping, where we started with the point in the first row and first column, then added the remaining points in that row, before moving to the next row and repeating. We’ll discuss this a bit more later.
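This row-major convention matches NumPy’s default (C-style) ordering, which is why reshaping will later let us move between the flat solution vector and the 2D grid without reordering anything. For example:

# NumPy flattens (and reshapes) in row-major order by default,
# matching the ordering chosen for the unknown vector
grid = np.array([[1, 2, 3],
                 [4, 5, 6]])
print(grid.ravel())  # [1 2 3 4 5 6]: first row, then second row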
If we set this up in Python, we can solve using np.linalg.solve():
A = np.array([
[1, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 1, 0, 1,-4, 1, 0, 1, 0],
[0, 0, 0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 1]
])
b = np.array([100, 100, 100, 100, 0, 100, 100, 0, 100])
u = np.linalg.solve(A, b)
print(u)
[100. 100. 100. 100. 75. 100. 100. 0. 100.]
# alternatively, for creating the A matrix, we could create
# a diagonal matrix of ones and then only modify the one row
# that needs it
A = np.eye(9)
A[4, 1] = 1
A[4, 3] = 1
A[4, 4] = -4
A[4, 5] = 1
A[4, 7] = 1
u2 = np.linalg.solve(A, b)
# allclose avoids comparing floating-point values for exact equality
assert np.allclose(u, u2)
This gives us the values for temperature at each of the nine points. In this example, we really only have one unknown temperature: \(u_{1,1}\), located in the middle. Does the value given make sense? We can check by rearranging the recursion formula for Laplace’s equation:

$$u_{i,j} = \frac{u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1}}{4} \;,$$
which shows that in such problems the value of the middle point should be the average of the four surrounding points. This matches the value of 75 found above.
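As a quick numerical check of that observation:

# average of the four points surrounding the center
# (left, right, bottom, and top boundary values)
print(np.mean([100, 100, 100, 0]))  # 75.0, matching u[4]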
We can use a contour plot to visualize the results, though we’ll need to convert the one-dimensional solution array into a two-dimensional matrix to plot. The NumPy reshape() function can help us here: it reshapes a one-dimensional array into a two-dimensional array, given the desired numbers of rows and columns:
# Example of using the reshape function, with a simple array
# of the integers from 0 to 9.
# We want to convert it into a 2D array with 2 rows and 5 columns.
# The expected output is:
# [[0, 1, 2, 3, 4],
# [5, 6, 7, 8, 9]]
b = np.array(range(10))
A = np.reshape(b, [2, 5])
print(A)
[[0 1 2 3 4]
[5 6 7 8 9]]
# We can use the reshape function to convert the calculated
# temperatures into a 3x3 array:
num_x = 3
num_y = 3
u_square = np.reshape(u, [num_y, num_x])
plt.contourf(u_square, levels=10)
plt.colorbar()
plt.show()
Overall that looks correct: the boundary conditions are right, and we see that the center is the average of the boundaries.
But, clearly only using nine points (with eight of those being boundary conditions) doesn’t give us a very good solution. To make this more accurate, we’ll need to use more points, which also means we need to automate the construction of the system of equations.
5.1.2. Row-major mapping#
For a two-dimensional elliptic PDE like Laplace’s equation, we can generate a general recursion formula, but we need a way to take a grid of points where location is defined by row and column index and map these into a one-dimensional column vector, which has its own index.
The following figure shows a general 2D grid of points, with \(n\) number of columns in the \(x\) direction (using index \(i\)) and \(m\) number of rows in the \(y\) direction (using index \(j\)):
We want to convert the rows and columns of \(u_{i,j}\) points defined by column and row index into a single column array using a different index, \(k\) (this choice is arbitrary):

$$\mathbf{u} = \begin{bmatrix} u_0 & u_1 & \dots & u_k & \dots & u_{mn-1} \end{bmatrix}^T \;,$$
where \(k\) refers to the index used in that array.
To do this mapping, we can use the formula

$$k_{i,j} = j \cdot n + i \;,$$
where \(k_{i,j}\) refers to the 1D index \(k\) mapped from the 2D indices \(i\) and \(j\).
For example, in this \(3\times 3\) grid, where \(n=3\) and \(m=3\), consider the point where \(i=1\) and \(j=1\) (the point right in the center). Using our formula,

$$k_{1,1} = 1 \cdot 3 + 1 = 4 \;,$$
which matches what we can visually confirm: the fifth point, which would have index 4.
Using that mapping, we can also identify the 1D indices associated with the points surrounding location \((i,j)\):

$$\begin{align*}
k_{i-1,j} &= j \cdot n + i - 1 \\
k_{i+1,j} &= j \cdot n + i + 1 \\
k_{i,j-1} &= (j-1) \cdot n + i \\
k_{i,j+1} &= (j+1) \cdot n + i \;,
\end{align*}$$
which we can use to determine the appropriate locations to place values in the coefficient matrix and right-hand-side vector.
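As a brief illustration (the helper function here is our own, not part of the original notes), we can encode this mapping and check it against the \(3\times 3\) example grid:

# hypothetical helper: maps 2D grid indices (i, j) to the 1D index k,
# for a row-major grid with n columns
def grid_index(i, j, n):
    '''Returns 1D index for the point at column i, row j.'''
    return j*n + i

n = 3  # number of columns in the 3x3 example
print(grid_index(1, 1, n))  # center point -> 4
print(grid_index(0, 0, n))  # bottom-left -> 0
print(grid_index(2, 2, n))  # top-right -> 8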
5.1.3. Example: heat transfer in a square plate (redux)#
Let’s return to the example of steady-state heat transfer in a square plate—but this time we’ll set the solution up more generally so we can vary the step size \(h = \Delta x = \Delta y\).
h = 0.1
length_x = 1.0
length_y = 1.0
x_vals = np.arange(0, length_x + 0.001, h)
y_vals = np.arange(0, length_y + 0.001, h)
# The coefficient matrix A is now m*n by m*n,
# since that is the total number of points.
# The right-hand side vector b is m*n by 1.
A = np.zeros((len(x_vals)*len(y_vals), len(x_vals)*len(y_vals)))
b = np.zeros(len(x_vals)*len(y_vals))
u_left = 100
u_right = 100
u_bottom = 100
u_top = 0
for j, y in enumerate(y_vals):
for i, x in enumerate(x_vals):
# for convenience, calculate all indices now
kij = j*len(x_vals) + i
kim1j = j*len(x_vals) + i - 1
kip1j = j*len(x_vals) + i + 1
kijm1 = (j-1)*len(x_vals) + i
kijp1 = (j+1)*len(x_vals) + i
if i == 0:
# this is the left boundary
A[kij, kij] = 1
b[kij] = u_left
elif i == len(x_vals) - 1:
# right boundary
A[kij, kij] = 1
b[kij] = u_right
elif j == 0:
# bottom boundary
A[kij, kij] = 1
b[kij] = u_bottom
elif j == len(y_vals) - 1:
# top boundary
A[kij, kij] = 1
b[kij] = u_top
else:
# coefficients for interior points, based
# on the recursion formula
A[kij, kim1j] = 1
A[kij, kip1j] = 1
A[kij, kijm1] = 1
A[kij, kijp1] = 1
A[kij, kij] = -4
u = np.linalg.solve(A, b)
u_square = np.reshape(u, (len(y_vals), len(x_vals)))
plt.contourf(x_vals, y_vals, u_square, levels=10)
plt.colorbar(label='Temperature')
plt.show()
5.1.4. Neumann (derivative) boundary conditions#
So far, we have only discussed cases with Dirichlet boundary conditions; in other words, where the values at the boundary are all fixed. Frequently we also encounter Neumann boundary conditions, where the derivative of the solution is specified at the boundary.
We can handle this in the same way we do for one-dimensional boundary value problems: either with a forward or backward difference (both of which are first-order accurate), or with a central difference using an imaginary point/ghost node (which is second-order accurate). Let’s focus on using the central difference, since it is more accurate.
For example, let’s say that at the upper boundary, the derivative of temperature is zero:

$$\left. \frac{\partial u}{\partial y} \right|_{y=L} = 0 \;.$$
Let’s consider this boundary condition applied at the point shown, \(u_{2,3}\). We can approximate this derivative using a central difference:

$$\frac{\partial u}{\partial y} \approx \frac{u_{2,4} - u_{2,2}}{2 \Delta y} = 0 \quad \rightarrow \quad u_{2,4} = u_{2,2} \;.$$
This tells us the value of the point above the boundary, \(u_{2,4}\); however, this point is a “ghost” or imaginary point located outside the boundary, so we don’t really care about its value. Instead, we can use this relationship to give us a usable equation for the boundary point, by incorporating it into the normal recursion formula for Laplace’s equation:

$$u_{1,3} + u_{3,3} + u_{2,2} + u_{2,4} - 4 u_{2,3} = 0 \quad \rightarrow \quad u_{1,3} + u_{3,3} + 2 u_{2,2} - 4 u_{2,3} = 0 \;.$$
The recursion formula for points along the upper boundary would then become

$$u_{i-1,j} + u_{i+1,j} + 2 u_{i,j-1} - 4 u_{i,j} = 0 \;.$$
Now let’s try solving the above example, but with \(\frac{\partial u}{\partial y} = 0\) at the top boundary and \(u = 0\) at the bottom boundary:
h = 0.1
length_x = 1.0
length_y = 1.0
x_vals = np.arange(0, length_x + 0.001, h)
y_vals = np.arange(0, length_y + 0.001, h)
# The coefficient matrix A is now m*n by m*n,
# since that is the total number of points.
# The right-hand side vector b is m*n by 1.
A = np.zeros((len(x_vals)*len(y_vals), len(x_vals)*len(y_vals)))
b = np.zeros(len(x_vals)*len(y_vals))
u_left = 100
u_right = 100
u_bottom = 0
# u_top is not needed, insulated BC
for j, y in enumerate(y_vals):
for i, x in enumerate(x_vals):
# for convenience, calculate all indices now
kij = j*len(x_vals) + i
kim1j = j*len(x_vals) + i - 1
kip1j = j*len(x_vals) + i + 1
kijm1 = (j-1)*len(x_vals) + i
kijp1 = (j+1)*len(x_vals) + i
if i == 0:
# this is the left boundary
A[kij, kij] = 1
b[kij] = u_left
elif i == len(x_vals) - 1:
# right boundary
A[kij, kij] = 1
b[kij] = u_right
elif j == 0:
# bottom boundary
A[kij, kij] = 1
b[kij] = u_bottom
elif j == len(y_vals) - 1:
# top boundary, using the ghost node + recursion formula
A[kij, kim1j] = 1
A[kij, kip1j] = 1
A[kij, kijm1] = 2
A[kij, kij] = -4
else:
# coefficients for interior points, based
# on the recursion formula
A[kij, kim1j] = 1
A[kij, kip1j] = 1
A[kij, kijm1] = 1
A[kij, kijp1] = 1
A[kij, kij] = -4
u = np.linalg.solve(A, b)
u_square = np.reshape(u, (len(y_vals), len(x_vals)))
plt.contourf(x_vals, y_vals, u_square, levels=20)
plt.colorbar(label='Temperature')
plt.show()
5.1.5. Iterative solutions for (very) large problems#
So far, we’ve been able to solve our systems of linear equations in Python by using np.linalg.solve(), which directly finds the solution to the equation \(A \mathbf{u} = \mathbf{b}\).
However, this approach becomes very slow as the grid resolution (\(h = \Delta x = \Delta y\)) gets smaller, and eventually infeasible due to the associated computational requirements. First, let’s create a function that takes the segment size \(h\) as input, solves the problem, and returns the number of points along with the time required.
import time
def heat_equation(h):
'''Solves 2D steady heat equation problem given step size'''
t0 = time.time()
length_x = 1.0
length_y = 1.0
x_vals = np.arange(0, length_x + 0.001, h)
y_vals = np.arange(0, length_y + 0.001, h)
num_pts = len(x_vals)*len(y_vals)
# The coefficient matrix A is now m*n by m*n,
# since that is the total number of points.
# The right-hand side vector b is m*n by 1.
A = np.zeros((num_pts, num_pts))
b = np.zeros(num_pts)
u_left = 100
u_right = 100
u_bottom = 100
u_top = 0
for j, y in enumerate(y_vals):
for i, x in enumerate(x_vals):
# for convenience, calculate all indices now
kij = j*len(x_vals) + i
kim1j = j*len(x_vals) + i - 1
kip1j = j*len(x_vals) + i + 1
kijm1 = (j-1)*len(x_vals) + i
kijp1 = (j+1)*len(x_vals) + i
if i == 0:
# this is the left boundary
A[kij, kij] = 1
b[kij] = u_left
elif i == len(x_vals) - 1:
# right boundary
A[kij, kij] = 1
b[kij] = u_right
elif j == 0:
# bottom boundary
A[kij, kij] = 1
b[kij] = u_bottom
elif j == len(y_vals) - 1:
# top boundary
A[kij, kij] = 1
b[kij] = u_top
else:
# coefficients for interior points, based
# on the recursion formula
A[kij, kim1j] = 1
A[kij, kip1j] = 1
A[kij, kijm1] = 1
A[kij, kijp1] = 1
A[kij, kij] = -4
u = np.linalg.solve(A, b)
u_square = np.reshape(u, (len(y_vals), len(x_vals)))
return num_pts, (time.time() - t0)
Now, we can see how long it takes to solve as we increase the resolution, and get an idea about the relationship between time-to-solution and number of unknowns.
step_sizes = [0.2, 0.1, 0.05, 0.025, 0.02, 0.0125, 0.01]
num_pts = np.zeros(len(step_sizes))
times = np.zeros_like(num_pts)
for idx, step_size in enumerate(step_sizes):
num_pts[idx], times[idx] = heat_equation(step_size)
plt.loglog(num_pts, times, 'o', label='Actual cost')
plt.xlabel('Number of unknowns')
plt.ylabel('Time for direct solution (sec)')
x = num_pts[:]
n2 = x**2 * (times[1] / x[1]**2)
n3 = x**3 * (times[1] / x[1]**3)
plt.loglog(x, n2, '--', label='quadratic')
plt.loglog(x, n3, '--', label='cubic')
plt.legend()
plt.grid(True)
plt.show()
After about 100 points/unknowns, the cost grows rapidly, scaling somewhere between quadratically (\(\mathcal{O}(n^2)\)) and cubically (\(\mathcal{O}(n^3)\)) with the number of points.
If we try to reduce the step size further, for example to 0.005, we’ll see that we cannot get a solution in a reasonable amount of time. But, clearly we want to get solutions for large numbers of unknowns, so what can we do?
We can solve larger systems of linear equations using iterative methods. There are a number of these, and we’ll focus on two:
Jacobi method
Gauss-Seidel method
5.1.5.1. Jacobi method#
The Jacobi method essentially works by starting with an initial guess for the solution, using the recursion formula to compute new values at each point from the old values, and repeating until the values converge (i.e., stop changing).
An algorithm we can use to solve Laplace’s equation:
Set some initial guess for all unknowns: \(u_{i,j}^{\text{old}}\)
Set the boundary values
For each point in the interior, use the recursion formula to solve for new values based on old values at the surrounding points: \(u_{i,j} = \left( u_{i+1,j}^{\text{old}} + u_{i-1,j}^{\text{old}} + u_{i,j+1}^{\text{old}} + u_{i,j-1}^{\text{old}} \right)/4\).
Check for convergence: compute \(\epsilon = \max \left| u_{i,j} - u_{i,j}^{\text{old}} \right|\); if \(\epsilon\) is not yet below some tolerance (such as \(10^{-6}\)), return to step 2 and repeat.
More formally, if we have a system \(A \mathbf{x} = \mathbf{b}\), where \(a_{ij}\) are the elements of the matrix \(A\) and \(b_i\) the elements of \(\mathbf{b}\), then we can solve iteratively for \(\mathbf{x}\) using

$$x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j \neq i} a_{ij} x_j^{(k)} \right) \;,$$

where \(x_i^{(k)}\) is a value of the solution at iteration \(k\) and \(x_i^{(k+1)}\) is at the next iteration.
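As a small illustration of this update formula (our own example, not from the original notes), here is Jacobi iteration applied to a simple diagonally dominant \(3\times 3\) system, for which convergence is guaranteed:

# a small, hypothetical diagonally dominant system for demonstration
A_demo = np.array([
    [ 4.0, -1.0,  0.0],
    [-1.0,  4.0, -1.0],
    [ 0.0, -1.0,  4.0]
    ])
b_demo = np.array([100.0, 0.0, 100.0])

x = np.zeros(3)  # initial guess
for iteration in range(100):
    x_old = x.copy()
    for i in range(3):
        # off-diagonal sum, using only *old* values
        sigma = A_demo[i, :] @ x_old - A_demo[i, i]*x_old[i]
        x[i] = (b_demo[i] - sigma) / A_demo[i, i]
    if np.max(np.abs(x - x_old)) < 1e-6:
        break

print(x)
print(np.linalg.solve(A_demo, b_demo))  # direct solution, for comparison

Applying the same idea to the heat problem, using the grid recursion formula directly: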
def heat_equation_jacobi(h):
'''Solves heat equation using Jacobi iteration'''
t0 = time.time()
length_x = 1.0
length_y = 1.0
x_vals = np.arange(0, length_x + 0.001, h)
y_vals = np.arange(0, length_y + 0.001, h)
num_pts = len(x_vals)*len(y_vals)
# initial guess
u = 100 * np.ones(num_pts)
u_left = 100
u_right = 100
u_bottom = 100
u_top = 0
# dummy value for residual variable
epsilon = 1.0
num_iter = 0
while epsilon > 1e-2:
u_old = np.copy(u)
epsilon = 0.0
for j, y in enumerate(y_vals):
for i, x in enumerate(x_vals):
# for convenience, calculate all indices now
kij = j*len(x_vals) + i
kim1j = j*len(x_vals) + i - 1
kip1j = j*len(x_vals) + i + 1
kijm1 = (j-1)*len(x_vals) + i
kijp1 = (j+1)*len(x_vals) + i
if i == 0:
# this is the left boundary
u[kij] = u_left
elif i == len(x_vals) - 1:
# right boundary
u[kij] = u_right
elif j == 0:
# bottom boundary
u[kij] = u_bottom
elif j == len(y_vals) - 1:
# top boundary
u[kij] = u_top
else:
# interior points
u[kij] = (
u_old[kip1j] + u_old[kim1j] +
u_old[kijm1] + u_old[kijp1]
) / 4.0
epsilon = np.max(np.abs(u - u_old))
num_iter += 1
u_square = np.reshape(u, (len(y_vals), len(x_vals)))
# use a dict to package a large number of return objects
solution = {'num_points': num_pts,
'time': time.time() - t0,
'num_iter': num_iter,
'u': u_square,
'x': x_vals,
'y': y_vals
}
return solution
step_sizes = [0.2, 0.1, 0.05, 0.025, 0.02, 0.0125, 0.01, 0.005]
num_pts_jac = np.zeros(len(step_sizes))
times_jac = np.zeros_like(num_pts_jac)
num_iter_jac = np.zeros_like(num_pts_jac)
for idx, step_size in enumerate(step_sizes):
sol = heat_equation_jacobi(step_size)
num_pts_jac[idx] = sol['num_points']
times_jac[idx] = sol['time']
num_iter_jac[idx] = sol['num_iter']
x_vals = sol['x']
y_vals = sol['y']
u = sol['u']
print(f"Number of iterations: {sol['num_iter']}")
Number of iterations: 33
Number of iterations: 107
Number of iterations: 317
Number of iterations: 822
Number of iterations: 1062
Number of iterations: 1637
Number of iterations: 1910
5.1.5.2. Gauss-Seidel method#
The Gauss-Seidel method is very similar to the Jacobi method, with one important difference: rather than using only old values to calculate the new values, it incorporates updated values as soon as they are available. Because the method incorporates newer information more quickly, it tends to converge faster (meaning, with fewer iterations) than the Jacobi method.
Formally, if we have a system \(A \mathbf{x} = \mathbf{b}\), where \(a_{ij}\) are the elements of the matrix \(A\) and \(b_i\) the elements of \(\mathbf{b}\), then we can solve iteratively for \(\mathbf{x}\) using

$$x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j < i} a_{ij} x_j^{(k+1)} - \sum_{j > i} a_{ij} x_j^{(k)} \right) \;,$$

where \(x_i^{(k)}\) is a value of the solution at iteration \(k\) and \(x_i^{(k+1)}\) is at the next iteration.
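Mirroring the Jacobi sketch above (again, our own illustrative example), the only change is that the off-diagonal sum uses the current contents of x, which already holds new values for indices below \(i\):

# same hypothetical system as in the Jacobi sketch above
A_demo = np.array([
    [ 4.0, -1.0,  0.0],
    [-1.0,  4.0, -1.0],
    [ 0.0, -1.0,  4.0]
    ])
b_demo = np.array([100.0, 0.0, 100.0])

x = np.zeros(3)  # initial guess
for iteration in range(100):
    x_old = x.copy()
    for i in range(3):
        # mixes new values (j < i) with old values (j > i)
        sigma = A_demo[i, :] @ x - A_demo[i, i]*x[i]
        x[i] = (b_demo[i] - sigma) / A_demo[i, i]
    if np.max(np.abs(x - x_old)) < 1e-6:
        break

print(x)  # matches the direct solution within tolerance

And applying it to the heat problem: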
def heat_equation_gauss_seidel(h):
'''Solves heat equation using the Gauss-Seidel method'''
t0 = time.time()
length_x = 1.0
length_y = 1.0
x_vals = np.arange(0, length_x + 0.001, h)
y_vals = np.arange(0, length_y + 0.001, h)
num_pts = len(x_vals)*len(y_vals)
# initial guess
u = 100 * np.ones(num_pts)
u_left = 100
u_right = 100
u_bottom = 100
u_top = 0
# dummy value for residual variable
epsilon = 1.0
num_iter = 0
while epsilon > 1e-2:
u_old = np.copy(u)
epsilon = 0.0
for j, y in enumerate(y_vals):
for i, x in enumerate(x_vals):
# for convenience, calculate all indices now
kij = j*len(x_vals) + i
kim1j = j*len(x_vals) + i - 1
kip1j = j*len(x_vals) + i + 1
kijm1 = (j-1)*len(x_vals) + i
kijp1 = (j+1)*len(x_vals) + i
if i == 0:
# this is the left boundary
u[kij] = u_left
elif i == len(x_vals) - 1:
# right boundary
u[kij] = u_right
elif j == 0:
# bottom boundary
u[kij] = u_bottom
elif j == len(y_vals) - 1:
# top boundary
u[kij] = u_top
else:
# interior points
u[kij] = (
u[kip1j] + u[kim1j] +
u[kijm1] + u[kijp1]
) / 4.0
epsilon = np.max(np.abs(u - u_old))
num_iter += 1
u_square = np.reshape(u, (len(y_vals), len(x_vals)))
# use a dict to package a large number of return objects
solution = {'num_points': num_pts,
'time': time.time() - t0,
'num_iter': num_iter,
'u': u_square,
'x': x_vals,
'y': y_vals
}
return solution
step_sizes = [0.2, 0.1, 0.05, 0.025, 0.02, 0.0125, 0.01, 0.005]
num_pts_gs = np.zeros(len(step_sizes))
times_gs = np.zeros_like(num_pts_gs)
num_iter_gs = np.zeros_like(num_pts_gs)
for idx, step_size in enumerate(step_sizes):
sol = heat_equation_gauss_seidel(step_size)
num_pts_gs[idx] = sol['num_points']
times_gs[idx] = sol['time']
num_iter_gs[idx] = sol['num_iter']
x_vals = sol['x']
y_vals = sol['y']
u = sol['u']
print(f"Number of iterations: {sol['num_iter']}")
Number of iterations: 21
Number of iterations: 64
Number of iterations: 193
Number of iterations: 533
Number of iterations: 716
Number of iterations: 1224
Number of iterations: 1502
Number of iterations: 2275
plt.loglog(num_pts, times, '-o', label='Direct solution')
plt.loglog(num_pts_jac, times_jac, '-^', label='Jacobi solution')
plt.loglog(num_pts_gs, times_gs, '-x', label='Gauss-Seidel solution')
plt.xlabel('Number of unknowns')
plt.ylabel('Time for solution (sec)')
plt.legend()
plt.grid()
plt.show()
We can see that for smaller numbers of points/unknowns, the direct solution method is faster, but at some point the cost of the direct approach (a dense matrix factorization) grows beyond that of the iterative methods.
Due to time and computer memory constraints, we can’t actually show that the direct method takes longer for the smallest step size (0.005)—many computers will run out of memory!
In addition, if you look at the slopes of the costs, the iterative methods maintain a consistent slope, while the direct method’s slope steepens at a certain point, meaning its cost grows faster from there on.
(Running these tests on different computers might generate slightly different results, but the overall conclusions should stay the same.)
plt.loglog(num_pts_jac, num_iter_jac, '-^', label='Jacobi method')
plt.loglog(num_pts_gs, num_iter_gs, '-x', label='Gauss-Seidel method')
plt.xlabel('Number of unknowns')
plt.ylabel('Number of iterations required')
plt.legend()
plt.grid()
plt.show()
These results show us a few things:
For small problem sizes, the direct solution method is faster.
For the heat equation, once we get to around 10000 unknowns, the methods perform similarly. Beyond this, the direct solution method becomes unreasonably slow, and fails to solve in a reasonable time for a step size of 0.005.
The Gauss-Seidel method consistently converges in fewer iterations than the Jacobi method (here, roughly 60-80% as many).
For larger, more-realistic problems, iterative solution methods like Jacobi and Gauss-Seidel are essential.