Welcome to the Solution of Equations!

In your earlier maths studies, you spent a lot of time solving equations like \(x^2 - 5x + 6 = 0\) using algebra. But in the real world, most equations are "messy." They don't have neat, exact answers. This is where Numerical Methods come to the rescue! Think of these methods as a way to "zoom in" on a solution until it is accurate enough for whatever job you are doing, whether it's building a bridge or calculating interest rates.

1. The Starting Point: Bracketing the Root

Before we can zoom in on a solution (a "root"), we need to know roughly where it is. We usually look for a sign change. If a continuous function \(f(x)\) is negative at one point and positive at another, it must have crossed zero somewhere in between.

Quick Review: If \(f(a)\) and \(f(b)\) have opposite signs, then \(f(a) \times f(b) < 0\), meaning there is at least one root between \(a\) and \(b\).

2. The Bisection Method

The Bisection Method is the simplest way to find a root. It’s like playing the "higher or lower" guessing game.

How it works:

1. Find an interval \([a, b]\) where a sign change occurs.
2. Find the midpoint: \(x = \frac{a+b}{2}\).
3. Check the sign of \(f(x)\).
4. Replace either \(a\) or \(b\) with the midpoint to keep the root "bracketed" in a smaller interval.
5. Repeat until the interval is small enough for your required accuracy.

Analogy: Imagine you are looking for a specific page in a book. You open it exactly in the middle. If the page you want is higher, you ignore the first half and open the second half in the middle. You keep halving the search area until you find your page!

Takeaway: It’s very reliable (for a continuous function, a bracketed sign change guarantees convergence), but it’s quite slow: each step only halves the interval, which gains roughly one extra decimal place every three or four iterations.
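The five steps above can be sketched in Python (my choice of language; the tolerance and the example equation, the quadratic \(x^2 - 5x + 6 = 0\) from the introduction, are illustrative):

```python
def bisect(f, a, b, tol=1e-6):
    """Bisection: repeatedly halve [a, b], keeping the half with the sign change."""
    if f(a) * f(b) >= 0:
        raise ValueError("need f(a) and f(b) with opposite signs")
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:  # sign change in [a, m]: the root is there
            b = m
        else:                 # otherwise the root is in [m, b]
            a = m
    return (a + b) / 2

# f(1) = 2 > 0 and f(2.5) = -0.25 < 0, so [1, 2.5] brackets the root x = 2.
root = bisect(lambda x: x**2 - 5*x + 6, 1.0, 2.5)
```

Each pass through the loop shrinks the bracket by half, exactly like halving the page range in the book analogy.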

3. False Position and the Secant Method

While Bisection always goes to the middle, False Position (or Linear Interpolation) tries to be smarter. It draws a straight line between \((a, f(a))\) and \((b, f(b))\) and sees where that line hits the x-axis: the crossing point is \(c = \frac{a f(b) - b f(a)}{f(b) - f(a)}\), and \(c\) then replaces whichever endpoint keeps the root bracketed.

The Secant Method is similar but doesn't require the root to stay "bracketed." It uses the last two estimates to draw a line and predict the next one. Don't worry if this seems tricky; just remember that these methods use straight lines to guess the curve's behavior.
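A minimal Python sketch of the Secant Method, using the same illustrative quadratic as before (function name and tolerances are my own):

```python
def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant method: the line through the last two estimates predicts the next."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:
            break  # the line is horizontal, so it never meets the x-axis
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2  # keep only the two most recent estimates
    return x1

# Same quadratic as before; note the two starting values need not bracket the root.
root = secant(lambda x: x**2 - 5*x + 6, 1.0, 2.5)
```

Unlike bisection, nothing guarantees the estimates stay on opposite sides of the root, which is exactly the difference between the Secant Method and False Position.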

4. Fixed Point Iteration: \(x = g(x)\)

This method involves rearranging your equation \(f(x) = 0\) into the form \(x = g(x)\). You then pick a starting value \(x_0\) and keep plugging it back into the formula: \(x_{n+1} = g(x_n)\).

Staircases and Cobwebs

When you graph this, you see one of two cool patterns:
- Staircase Diagram: The values approach the root steadily from one side (this happens when \(g'(x)\) is positive near the root).
- Cobweb Diagram: The values "spiral" in toward the root, alternating sides (when \(g'(x)\) is negative), or spiral out if it fails!

Did you know? This method only works (converges) if the gradient of your function \(g(x)\) is shallow enough near the root. Specifically, we need \(|g'(x)| < 1\). If the line is too steep, your "cobweb" will spiral away from the answer!
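Here is a minimal Python sketch of the iteration. The rearrangement used, \(x = \frac{x^2 + 6}{5}\), is one of several possible rearrangements of the introduction's quadratic \(x^2 - 5x + 6 = 0\), chosen because it illustrates the \(|g'(x)| < 1\) condition:

```python
def fixed_point(g, x0, tol=1e-10, max_iter=1000):
    """Iterate x_{n+1} = g(x_n) until successive values agree."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Rearranging x^2 - 5x + 6 = 0 as x = (x^2 + 6)/5 gives g'(x) = 2x/5.
# Near the root x = 2, |g'| = 0.8 < 1, so the iteration converges;
# near the other root x = 3, |g'| = 1.2 > 1, and the same formula diverges.
root = fixed_point(lambda x: (x**2 + 6) / 5, 1.0)
```

Since \(g'\) is positive near \(x = 2\), plotting successive values here would trace a staircase rather than a cobweb.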

5. The Newton-Raphson Method

This is the "Formula 1" car of root-finding. It is usually much faster than the others. It uses the tangent (the gradient) of the curve to slide down toward the root.

The formula is:
\[x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}\]

Memory Aid: "Current minus (Value over Gradient)".

Why it might fail:

Even though it's fast, Newton-Raphson isn't perfect. It can fail if:
- The starting point is near a stationary point (where \(f'(x) = 0\)). This makes the formula divide by zero, and the "car" flies off the track!
- The curve is very flat, sending your next guess miles away from the root.

Takeaway: Newton-Raphson has second-order convergence, meaning the number of correct decimal places roughly doubles with every step!
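The formula translates directly into Python (a sketch with my own tolerances, again using the introduction's quadratic, whose derivative is \(f'(x) = 2x - 5\)):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: follow the tangent at x_n down to the x-axis."""
    x = x0
    for _ in range(max_iter):
        grad = df(x)
        if grad == 0:
            raise ZeroDivisionError("f'(x) = 0: the tangent never meets the x-axis")
        x_new = x - f(x) / grad  # x_{n+1} = x_n - f(x_n)/f'(x_n)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = newton(lambda x: x**2 - 5*x + 6, lambda x: 2*x - 5, 1.0)
```

Note that starting at \(x_0 = 2.5\), the stationary point where \(f'(x) = 0\), would trigger exactly the divide-by-zero failure described above.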

6. Convergence and Accuracy

In your exam, you'll be asked to justify how accurate your answer is. If a question asks for a root to 3 decimal places, you should show that there is a sign change between the boundaries of that value (e.g., for \(1.234\), check \(1.2335\) and \(1.2345\)).
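The same check in Python, for a hypothetical answer of \(2.000\) to 3 decimal places (the function is again the introduction's quadratic):

```python
# Suppose an iteration suggests the root of x^2 - 5x + 6 = 0 is 2.000 to 3 d.p.
# Justify it by showing f changes sign between the rounding boundaries:
f = lambda x: x**2 - 5*x + 6
lower, upper = 1.9995, 2.0005
sign_change = f(lower) * f(upper) < 0  # True: the root lies strictly between them
```

Because the root lies strictly between \(1.9995\) and \(2.0005\), it must round to \(2.000\) to 3 decimal places.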

Key Term: Order of Convergence
- First Order: The error reduces by a constant factor each time (like Fixed Point Iteration).
- Second Order: The error reduces much faster (like Newton-Raphson).

7. The Method of Relaxation

Sometimes, a Fixed Point Iteration \(x = g(x)\) is too slow or it diverges (fails). We can "fix" it using Relaxation. We use a "relaxation parameter" \(\lambda\) (lambda) to blend the old value with the new guess.

The formula for the relaxed iteration is:
\[x_{n+1} = (1 - \lambda)x_n + \lambda g(x_n)\]

By choosing the right \(\lambda\), we can force a diverging sequence to converge or make a slow one much faster. Think of it like a dimmer switch—you're adjusting the "strength" of the iteration to keep it under control.
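A short Python sketch of Relaxation, with my own choice of \(\lambda\). It reuses the earlier rearrangement \(g(x) = \frac{x^2 + 6}{5}\), which diverges from the root at \(x = 3\) because \(g'(3) = 1.2 > 1\); note that a suitable \(\lambda\) can even be negative:

```python
def relaxed(g, x0, lam, tol=1e-10, max_iter=1000):
    """Relaxed iteration: x_{n+1} = (1 - lam)*x_n + lam*g(x_n)."""
    x = x0
    for _ in range(max_iter):
        x_new = (1 - lam) * x + lam * g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# g(x) = (x^2 + 6)/5 has g'(3) = 1.2 > 1, so plain iteration diverges from x = 3.
# The relaxed map's gradient there is (1 - lam) + lam*g'(3) = 1 + 0.2*lam,
# which has magnitude below 1 whenever -10 < lam < 0.
# Taking lam = -2 gives an effective gradient of 0.6, so it converges.
root = relaxed(lambda x: (x**2 + 6) / 5, 2.5, lam=-2)
```

Setting \(\lambda = 1\) recovers the plain iteration \(x_{n+1} = g(x_n)\), which is the "dimmer switch" fully on.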

Quick Summary Checklist

1. Bisection: Slow but steady. Always works if the root is bracketed.
2. Fixed Point: Uses \(x = g(x)\). Only works if \(|g'(x)| < 1\). Look for cobwebs and staircases!
3. Newton-Raphson: Very fast (2nd order). Fails if the gradient is zero.
4. Relaxation: A trick to make Fixed Point Iteration behave itself using \(\lambda\).
5. Accuracy: Always justify your final answer by checking the boundaries for a sign change.