Welcome to the World of Errors!

In your previous math classes, you probably spent most of your time looking for the "perfect" answer. However, in the real world—and in Further Mathematics—many problems are impossible to solve exactly. Instead, we use Numerical Methods to find "good enough" answers.

This chapter is all about understanding Errors. We will learn how to measure how far off our answers are, why those errors happen, and how we can actually use that knowledge to make our answers even better. Don't worry if this seems a bit abstract at first; once you see the patterns, it’s like being a detective for numbers!


1. Absolute and Relative Error

Before we can fix errors, we need to know how to measure them. In Numerical Methods, we distinguish between the Exact Value (what the answer actually is) and the Approximate Value (what our calculation says).

Key Definitions:

Let \(x\) be the exact value and \(X\) be the approximate value.

  • Absolute Error: This is simply the difference between the approximation and the exact value.
    Formula: \( \text{Absolute Error} = X - x \)
  • Relative Error: This compares the error to the size of the original number. It’s often more useful because it shows how "significant" the mistake is.
    Formula: \( \text{Relative Error} = \frac{X - x}{x} \)

Analogy: Imagine you are measuring things.
1. If you measure a 10cm pencil and you are off by 1cm, that's a big deal! (High relative error).
2. If you measure the distance to the Moon and you are off by 1cm, nobody cares! (Tiny relative error).
Even though the Absolute Error (1cm) is the same in both cases, the Relative Error tells the real story.

Quick Review Box:
- Absolute Error = \(X - x\)
- Relative Error = \(\frac{\text{Absolute Error}}{\text{Exact Value}}\)
Note: Absolute error can be a negative or positive number. Some textbooks use the magnitude (positive value), but the MEI syllabus allows for signed quantities.

Key Takeaway: Absolute error is the "raw" mistake; relative error tells you how much that mistake matters compared to the total size.
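The two formulas can be checked directly on the pencil-and-Moon analogy. This is a minimal Python sketch; the function names and the measurements (a 10 cm pencil, the Moon at roughly 38,440,000,000 cm) are illustrative choices, not part of the syllabus:

```python
def absolute_error(approx, exact):
    """Signed absolute error: approximation minus exact value (X - x)."""
    return approx - exact

def relative_error(approx, exact):
    """Error compared with the size of the exact value."""
    return (approx - exact) / exact

# A 1 cm mistake on a 10 cm pencil vs. on the Moon's distance (~38,440,000,000 cm)
print(absolute_error(11, 10))   # same raw mistake: 1 cm
print(relative_error(11, 10))   # 0.1, i.e. 10% -- a big deal
print(relative_error(38_440_000_001, 38_440_000_000))  # tiny -- nobody cares
```

Notice that both measurements have the same absolute error, but the relative errors differ by about ten orders of magnitude.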


2. Representing Numbers: Rounding vs. Chopping

Computers and calculators have limited precision. They can't store a number with infinitely many digits (like \(\pi\)), so they have to cut it off somewhere. They usually do this in one of two ways:

Rounding

This is what you are used to. If the next digit is 5 or more, you round up.
Example: 7.86 rounded to 1 decimal place is 7.9.

Chopping

This is much more brutal. You simply delete all digits after a certain point, no matter what they are.
Example: 7.86 chopped to 1 decimal place is 7.8.

Did you know? Computers often use chopping because it is computationally faster than rounding, even though it creates slightly larger errors on average.
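As a sketch, here are both rules in Python. The helper names are made up; note that Python's built-in `round` uses round-half-to-even, so school-style "5 or more rounds up" is written out with `math.floor` instead (this version assumes positive inputs):

```python
import math

def round_to(x, dp):
    """Round half up to dp decimal places (school-style rounding)."""
    scale = 10 ** dp
    return math.floor(x * scale + 0.5) / scale

def chop_to(x, dp):
    """Chop (truncate toward zero) to dp decimal places."""
    scale = 10 ** dp
    return math.trunc(x * scale) / scale

print(round_to(7.86, 1))  # 7.9 -- the 6 pushes the digit up
print(chop_to(7.86, 1))   # 7.8 -- the 6 is simply deleted
```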

Maximum Error Bounds

When we round or chop, we create a maximum possible error:

  • If we round to 1 decimal place, the maximum absolute error is 0.05.
  • If we chop to 1 decimal place, the maximum absolute error is 0.1.

Common Mistake to Avoid: Students often forget that chopping produces a larger maximum error than rounding. Always check which one the question is asking for!

Key Takeaway: For positive numbers, chopping acts like a "floor" function: it never increases the value. Rounding, by contrast, moves the number to the nearest value, so its error can go either way.


3. Error Propagation

Errors are like rumors—they spread and grow! When you perform arithmetic with approximate numbers, the error "propagates" into the result.

Errors in Sums and Differences

When you add or subtract numbers, the Absolute Errors add up.
If you have 200 numbers, each given to one decimal place and then chopped to an integer, the maximum error in any one number is 0.9.
The Maximum Error in the sum is \(200 \times 0.9 = 180\).
Chopping errors here range evenly from 0 to 0.9, so the Average Error in one number is 0.45 and the Expected Error for the sum is \(200 \times 0.45 = 90\).
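A quick simulation makes this concrete. The 200 one-decimal-place numbers below are randomly generated for illustration (the seed is arbitrary); because chopping always discards the fractional part, the individual errors all have the same sign and pile up:

```python
import math
import random

random.seed(1)  # reproducible made-up data
# 200 positive numbers, each given to one decimal place
values = [random.randrange(10, 1000) / 10 for _ in range(200)]
chopped = [math.trunc(v) for v in values]  # chop each to an integer

actual_error = sum(values) - sum(chopped)
max_error = 200 * 0.9        # worst case: every number loses 0.9
expected_error = 200 * 0.45  # on average each number loses 0.45

print(actual_error, max_error, expected_error)
```

The simulated total error lands near the expected 90, well inside the maximum bound of 180.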

The "Danger Zone": Subtracting Nearly Equal Quantities

This is a favorite exam topic! If you subtract two numbers that are very close to each other (e.g., \(1.234567 - 1.234566\)), you are left with a tiny result. However, the error in those original numbers stays the same. Because the result is now so small, the Relative Error becomes huge. This can make the entire calculation useless.

Key Takeaway: Avoid subtracting nearly equal numbers if possible. It makes your errors "explode" in relative terms.
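Here is a sketch of the danger in action. The two "exact" values are made up; each is stored correct to 6 decimal places, so each has a tiny relative error on its own:

```python
# Two (made-up) exact values that agree to six decimal places
a_exact = 1.2345674
b_exact = 1.2345666

# Store each rounded to 6 decimal places -- relative error in each is tiny
a_stored = round(a_exact, 6)  # 1.234567
b_stored = round(b_exact, 6)  # 1.234567

exact_diff = a_exact - b_exact     # about 8e-7
approx_diff = a_stored - b_stored  # 0.0 -- all the information has cancelled!

rel_error = (approx_diff - exact_diff) / exact_diff
print(rel_error)  # -1.0, i.e. 100% relative error
```

Six correct decimal places in each input were not enough to get even one correct digit in the difference.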


4. Convergence and Order of Method

When we use iterative processes (like finding a root), we want to know two things: 1. Does it get closer to the answer (Convergence)? 2. How fast does it get there (Order)?

Order of Convergence for Sequences

If we have a sequence of errors \( \epsilon_n \), the order of convergence \(k\) is the power that relates one error to the next:
\( \epsilon_{n+1} \approx C \times (\epsilon_n)^k \)
- If \(k=1\), it is First Order (e.g., Fixed Point Iteration).
- If \(k=2\), it is Second Order (e.g., Newton-Raphson). This is much faster!
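Second-order behaviour is easy to see numerically. This sketch runs Newton-Raphson on \(f(x) = x^2 - 2\) (so the root is \(\sqrt{2}\)); the starting value 1.5 is an arbitrary choice:

```python
import math

# Newton-Raphson for f(x) = x^2 - 2, whose root is sqrt(2)
x = 1.5
root = math.sqrt(2)
errors = []
for _ in range(4):
    errors.append(abs(x - root))
    x = x - (x * x - 2) / (2 * x)  # Newton step: x - f(x)/f'(x)

# Second order: each error is roughly C times the SQUARE of the previous one
for e_now, e_next in zip(errors, errors[1:]):
    print(e_next / e_now ** 2)  # settles near a constant (about 0.35 here)
```

The ratio \(\epsilon_{n+1} / \epsilon_n^2\) settles to a constant, confirming \(k = 2\), while the errors themselves shrink from about \(10^{-1}\) to about \(10^{-12}\) in just three steps.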

Order of a Method (Step Length \(h\))

For methods using a step length \(h\) (like Numerical Differentiation), we look at how the error changes as we change \(h\):
\( \text{Error} \propto h^k \)
- First Order: Halving \(h\) halves the error.
- Second Order: Halving \(h\) reduces the error by a factor of 4 (\(2^2\)).
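The halving behaviour can be demonstrated with numerical differentiation. In this sketch the forward difference is a first-order method and the central difference is second order; the test function \(e^x\) at \(x = 0\) (where the exact derivative is 1) is an arbitrary choice:

```python
import math

def forward_diff(f, x, h):
    """First-order approximation to f'(x)."""
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    """Second-order approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# f(x) = e^x, so the exact derivative at 0 is 1
errs_fwd = [abs(forward_diff(math.exp, 0, h) - 1) for h in (0.1, 0.05)]
errs_cen = [abs(central_diff(math.exp, 0, h) - 1) for h in (0.1, 0.05)]

print(errs_fwd[0] / errs_fwd[1])  # close to 2: halving h halves the error
print(errs_cen[0] / errs_cen[1])  # close to 4: halving h quarters the error
```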

Memory Aid: "Order" is just the power (\(k\)). Higher power = faster reduction of error!

Key Takeaway: Newton-Raphson is generally second order, meaning the number of correct decimal places roughly doubles with every step!


5. Improving Solutions (Extrapolation)

If we know the ratio of differences between successive approximations, we can "predict" the error and remove it. This is called extrapolation.

Step-by-Step Process:

  1. Calculate approximations using different step sizes (e.g., \(h\), \(h/2\), \(h/4\)).
  2. Find the differences between these approximations.
  3. Calculate the ratio of these differences.
  4. Use this ratio to estimate the "limit" as the error goes to zero.

Example: If you are approximating an integral and halving the step size \(h\) consistently reduces the difference between successive results by a factor of 4, you can use that geometric pattern to "leap" to a much more accurate answer by extrapolating to the limit as \(h \to 0\).
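The four-step process above can be sketched with the trapezium rule, whose error is proportional to \(h^2\) (second order, so the ratio of differences is 4). The function names are made up, and \(\int_0^1 x^2\,dx = \frac{1}{3}\) is chosen so the exact answer is known:

```python
def trapezium(f, a, b, n):
    """Composite trapezium rule with n strips."""
    h = (b - a) / n
    total = (f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n))
    return h * total

f = lambda x: x * x         # exact integral over [0, 1] is 1/3
T1 = trapezium(f, 0, 1, 2)  # step size h
T2 = trapezium(f, 0, 1, 4)  # step size h/2

# Differences shrink by a factor of r = 4 each time h is halved, so the
# geometric-series limit is T2 + (T2 - T1) / (r - 1):
extrapolated = T2 + (T2 - T1) / 3
print(T1, T2, extrapolated)  # 0.375, 0.34375, then essentially 1/3
```

One extrapolation step "leaps" from two mediocre estimates (0.375 and 0.34375) to the exact answer \(\frac{1}{3}\), because the error pattern was perfectly geometric here.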

Quick Review Box:
- Ratio of differences: Helps identify the order of the method.
- Extrapolation: Uses the error pattern to find a better estimate.
- Precision: Always justify your level of precision in the final answer based on the consistency of the differences.

Final Encouragement: Errors might seem annoying, but they follow mathematical rules just like everything else. Once you master these formulas, you'll be able to tell exactly how much you can trust your numerical answers!