Welcome to the World of Errors!

In Further Mathematics, we often use Numerical Methods to solve problems that are impossible to solve exactly with a pen and paper. Because we are using approximations, we need to know exactly how "wrong" we might be. That’s what this chapter is all about! Don't worry if this seems a bit "fussy" at first; once you see the patterns, it becomes a very logical part of your toolkit.

In this chapter, we will learn how to measure errors, how they grow when we do calculations, and how to use them to actually make our answers better.


1. Absolute and Relative Error

When we estimate a value, there is a difference between our guess and the truth. We use two main ways to measure this:

The Basics

Let \(x\) be the exact value and \(X\) be the approximate value.

  • Absolute Error: This is simply the difference between the approximation and the exact value.
    Formula: \( \text{Absolute Error} = X - x \)
    Note: In MEI, this can be a positive or negative number. If it's positive, our approximation is too high!
  • Relative Error: This compares the error to the size of the actual number.
    Formula: \( \text{Relative Error} = \frac{X - x}{x} \)
    (Sometimes we use \(\frac{X - x}{X}\) if the exact value isn't known).
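The two definitions above can be checked in a few lines of Python (an illustrative sketch; the values \(x = \pi\) and \(X = 3.14\) are chosen purely for demonstration):

```python
import math

x = math.pi   # exact value
X = 3.14      # approximate value

absolute_error = X - x        # signed: negative here, so X is too low
relative_error = (X - x) / x  # compares the error to the size of x

print(f"Absolute error: {absolute_error:.6f}")
print(f"Relative error: {relative_error:.6f}")
```

Note the sign: because \(X < x\) here, the absolute error comes out negative, matching the convention above.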

Real-World Analogy: Imagine you are measuring a piece of string.
If you are 1cm off when measuring a 10cm string, that’s a big deal (10% error).
If you are 1cm off when measuring a 1km cable, nobody cares (0.001% error).
The Absolute Error is the same (1cm), but the Relative Error tells the real story!

Key Takeaway: Absolute error is the "gap," relative error is the "percentage-style" comparison.


2. Representing Numbers: Rounding vs. Chopping

Computers and calculators have limited precision. They can't store numbers with infinitely many decimal digits, like \(\pi\), so they have to cut them off somewhere.

Two Ways to Cut Numbers

  1. Rounding: You go to the nearest value. (e.g., 7.86 rounded to 1 decimal place is 7.9).
  2. Chopping (Truncation): You simply ignore everything after a certain point. (e.g., 7.86 chopped to 1 decimal place is 7.8).

Maximum Error Quick Guide

If you round a number to \(n\) decimal places, the maximum absolute error is \(0.5 \times 10^{-n}\).
If you chop a number to \(n\) decimal places, the maximum absolute error is \(1 \times 10^{-n}\).

Quick Review Box:
Rounding to 2 d.p. \(\rightarrow\) Max Error = \(0.005\)
Chopping to 2 d.p. \(\rightarrow\) Max Error = \(0.01\)
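A short Python sketch of the quick guide, using the 7.86 example from above with \(n = 1\) (the bound checks are illustrative, not a proof):

```python
import math

x = 7.86
rounded = round(x, 1)              # nearest value: 7.9
chopped = math.floor(x * 10) / 10  # everything after 1 d.p. dropped: 7.8

print(rounded, chopped)  # 7.9 7.8

# The maximum-error guide for n = 1:
assert abs(rounded - x) <= 0.5 * 10**-1  # rounding: at most 0.05
assert abs(chopped - x) <= 1.0 * 10**-1  # chopping: at most 0.1
```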

Key Takeaway: Chopping can leave up to twice the error of rounding!


3. Error Propagation

This is a fancy way of saying: "If my starting numbers have errors, what happens to the error in my final answer?"

Arithmetic Operations

  • Sums and Differences: Errors generally add up. If you add 200 numbers together, and each has a max error of 0.05, your total error could be as high as \(200 \times 0.05 = 10\).
  • Subtraction Danger! A serious loss of accuracy occurs when subtracting nearly equal quantities. If you subtract 1.23456 from 1.23457, you are left with 0.00001. You’ve lost almost all your "significant figures," and any tiny error in the original numbers is now huge compared to your tiny result.
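The subtraction danger is easy to see numerically. In this sketch (the values are invented for illustration), two numbers each correct to 5 decimal places are subtracted, and the relative error of the result balloons:

```python
a_true, b_true = 1.2345678, 1.2345601
a_approx, b_approx = 1.23457, 1.23456  # each rounded to 5 d.p.

diff_true = a_true - b_true        # 0.0000077
diff_approx = a_approx - b_approx  # about 0.00001

rel_error = abs(diff_approx - diff_true) / abs(diff_true)
print(f"Relative error in the difference: {rel_error:.0%}")  # about 30%
```

Each input had a relative error well under 0.001%, yet the difference is wrong by roughly 30%.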

Functions of Errors

If we have an error in \(x\), how does it affect \(f(x)\)?
We can estimate this using the gradient:
\( \text{Error in } f(x) \approx f'(x) \times (\text{Error in } x) \)
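A quick check of the gradient estimate, using \(f(x) = x^2\) near \(x = 3\) as an illustrative example:

```python
def f(x):
    return x ** 2

x, err = 3.0, 0.01
actual_change = f(x + err) - f(x)  # (3.01)^2 - 9 = 0.0601
estimated_change = 2 * x * err     # f'(x) = 2x, so 2 * 3 * 0.01 = 0.06

print(actual_change, estimated_change)
```

The estimate 0.06 is very close to the true change 0.0601, and the gap shrinks as the input error shrinks.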

Did you know? This is why the order of operations matters in programming. Doing a subtraction at the end of a long calculation can sometimes "explode" the error!

Key Takeaway: Watch out for subtracting similar numbers—it's the most common place for errors to ruin a calculation.


4. Convergence and Order of a Method

When we use an iterative method (like Newton-Raphson) to find a root, we want to know how fast we are getting to the answer. This is called the Order of Convergence.

The Error Relationship

Let \(\epsilon_n\) be the error at step \(n\). A method has order \(k\) if:
\( \epsilon_{n+1} \approx C(\epsilon_n)^k \)

  • First Order (\(k=1\)): The error at each step is a constant fraction of the previous error. (e.g., Bisection method). It’s steady but slow.
  • Second Order (\(k=2\)): The error is roughly the square of the previous error. (e.g., Newton-Raphson). This is fast! If your error is 0.1, the next might be 0.01, then 0.0001. The number of correct decimal places roughly doubles each time.
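You can watch second-order convergence happen. This sketch runs Newton-Raphson on \(x^2 - 2 = 0\) (root \(\sqrt{2}\)) and records the error after each step:

```python
import math

x = 1.5  # starting guess
root = math.sqrt(2)

errors = []
for _ in range(4):
    x = x - (x**2 - 2) / (2 * x)  # Newton-Raphson step
    errors.append(abs(x - root))

print(errors)  # each error is roughly the square of the one before
```

The printed errors fall from about 0.002 to 0.000002 to around \(10^{-12}\): the number of correct decimal places roughly doubles each time, exactly as described above.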

Key Takeaway: Higher order = faster convergence (usually).


5. Improving Solutions (Extrapolation)

If we know our method has a specific "order," we can actually use that information to predict the "perfect" answer. This is called extrapolation.

Ratio of Differences

By looking at a sequence of approximations (let's call them \(A_1, A_2, A_3\)), we calculate the differences between them.
If the ratio of these differences stays consistent, we can estimate the error remaining and subtract it to get an improved solution.

Step-by-Step for Exams:
1. Calculate the differences between successive approximations.
2. Find the ratio of these differences.
3. Use that ratio to "jump" to the value at infinity (the true limit).
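Here is the three-step recipe in Python, applied to a made-up first-order sequence whose differences shrink by a constant ratio (the numbers are invented so the true limit, 3, is easy to check):

```python
A1, A2, A3 = 2.5, 2.75, 2.875  # successive approximations

# Step 1: differences between successive approximations
d1 = A2 - A1  # 0.25
d2 = A3 - A2  # 0.125

# Step 2: ratio of the differences
r = d2 / d1   # 0.5

# Step 3: if the differences keep shrinking geometrically, the ones still
# to come sum to d2 * r / (1 - r), so we can "jump" to the limit:
limit = A3 + d2 * r / (1 - r)
print(limit)  # 3.0
```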

Key Takeaway: Error analysis isn't just about pointing out mistakes; it’s a tool to find the truth more accurately!


Quick Summary for Revision

  • Absolute Error = \(X - x\) (the gap).
  • Relative Error = \(\frac{X - x}{x}\) (the comparison).
  • Rounding Max Error = half the last place value.
  • Chopping Max Error = the full last place value.
  • Subtraction of similar numbers is a major source of precision loss.
  • Newton-Raphson is generally second-order (very fast).

Don't worry if the formulas for sums and products feel complex—just remember to test the "worst-case scenario" (maximum possible value minus minimum possible value) and you'll usually find the max error easily!
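For instance, the worst-case approach for a product might look like this sketch (the interval endpoints are invented for illustration: \(x = 2.0\) and \(y = 3.0\), each rounded to 1 d.p.):

```python
# x is somewhere in [1.95, 2.05], y somewhere in [2.95, 3.05]
x_lo, x_hi = 1.95, 2.05
y_lo, y_hi = 2.95, 3.05
nominal = 2.0 * 3.0

# Try every combination of extremes and keep the worst deviation
candidates = [a * b for a in (x_lo, x_hi) for b in (y_lo, y_hi)]
max_error = max(abs(c - nominal) for c in candidates)
print(max_error)  # about 0.2525, coming from 2.05 * 3.05
```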