Welcome to the World of Measurement!
Ever tried to measure your height and gotten a slightly different result every time? Or perhaps you've noticed that even the most expensive kitchen scales "flicker" between numbers? In Physics, we acknowledge that no measurement is ever perfect. This chapter is all about understanding why those differences happen and how we can mathematically account for them. By the end of these notes, you'll be able to tell the difference between a "mistake" and an "error" and calculate exactly how "uncertain" your results are.
1. Random and Systematic Errors
In Physics, an "error" isn't a "blunder" (like tripping over a wire). Instead, errors are the inherent limitations of our equipment and our senses. We group them into two main categories:
Random Errors
Think of random errors like background noise. They cause measurements to fluctuate unpredictably around the true value. Sometimes your reading is a bit too high, and sometimes it's a bit too low.
- Causes: Human reaction time when using a stopwatch, small temperature changes in the room, or parallax error (viewing a scale from different angles each time).
- How to fix them: You can't eliminate them, but you can reduce their effect by taking multiple readings and calculating an average.
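As a quick sketch of why averaging helps, here is a minimal Python example with hypothetical stopwatch readings (the values are made up for illustration):

```python
# Hypothetical repeated stopwatch readings (in seconds) for the same event.
# Random error makes each reading scatter unpredictably around the true value.
readings = [2.31, 2.28, 2.35, 2.29, 2.33]

# Taking the average smooths out the high and low fluctuations.
mean = sum(readings) / len(readings)
print(round(mean, 2))  # 2.31
```

The individual readings spread over 0.07 s, but the fluctuations partly cancel in the average.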
Systematic Errors
A systematic error is like a "bias." It causes all your measurements to be shifted in the same direction (always too high or always too low).
- Causes: A ruler with a chipped end, a voltmeter that isn't calibrated, or zero error (when a device shows a reading even when it should be at zero, like a scale showing 0.5g when nothing is on it).
- How to fix them: Averaging doesn't help here because every reading is biased the same way! You must recalibrate your equipment or subtract the zero error from every reading.
Quick Review Box:
- Random: Unpredictable. Fix by averaging.
- Systematic: Constant bias. Fix by recalibrating or subtracting zero error.
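To see why averaging fails for a systematic error but subtraction works, here is a small sketch with a hypothetical zero error on a scale (all numbers invented for illustration):

```python
# Hypothetical scale that reads 0.5 g even when empty (a zero error).
zero_error = 0.5  # g
raw_readings = [10.5, 12.5, 11.5]  # g -- every reading biased high by 0.5 g

# Averaging does NOT remove the bias: the mean is still 0.5 g too high.
biased_mean = sum(raw_readings) / len(raw_readings)  # 11.5 g

# Subtracting the zero error from every reading does remove it.
corrected = [r - zero_error for r in raw_readings]
print(corrected)  # [10.0, 12.0, 11.0]
```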
2. Accuracy vs. Precision
Students often use these words interchangeably, but in H2 Physics, they mean very different things!
Accuracy
Accuracy is how close your average value is to the true value. It is limited by systematic errors. If your equipment is perfectly calibrated, your results should be accurate.
Precision
Precision is how close your repeated measurements are to each other. It describes the "spread" of your data. It is limited by random errors. If you use a high-resolution instrument (like a micrometer instead of a ruler), you increase precision.
The Target Analogy:
Imagine you are throwing darts at a bullseye:
- High Precision, Low Accuracy: All darts land in a tight cluster, but far away from the bullseye (indicates a systematic error).
- Low Precision, High Accuracy: Darts are scattered all over the board, but their "average" position is the bullseye (indicates random errors).
- High Precision, High Accuracy: All darts are in a tight cluster right in the bullseye!
Key Takeaway: Accuracy = Truth. Precision = Consistency.
3. Expressing Uncertainties
When we write down a measurement, we usually write it as: \( \text{value} \pm \text{uncertainty} \). There are three ways to talk about this:
- Absolute Uncertainty (\( \Delta x \)): The uncertainty expressed in the same units as the measurement itself (e.g., \( 5.0 \pm 0.1 \text{ cm} \)). Here, \( 0.1 \text{ cm} \) is the absolute uncertainty.
- Fractional Uncertainty: The ratio of the uncertainty to the measured value. \( \frac{\Delta x}{x} \)
- Percentage Uncertainty: The fractional uncertainty multiplied by 100. \( \frac{\Delta x}{x} \times 100\% \)
Example: A length is measured as \( 20.0 \pm 0.2 \text{ cm} \).
- Absolute Uncertainty = \( 0.2 \text{ cm} \)
- Fractional Uncertainty = \( \frac{0.2}{20.0} = 0.01 \)
- Percentage Uncertainty = \( 0.01 \times 100\% = 1\% \)
4. Combining Uncertainties (Propagation)
Don't worry if this seems tricky at first! Just follow these three simple "Golden Rules" for when you use your measurements in calculations.
Rule 1: Addition and Subtraction
When you add or subtract values, you add their absolute uncertainties.
If \( y = a + b \) OR \( y = a - b \), then:
\( \Delta y = \Delta a + \Delta b \)
Common Mistake: Students often think they should subtract uncertainties when subtracting values. Never subtract uncertainties! Every measurement you combine brings its own uncertainty with it, so combining measurements always makes the result more uncertain, not less — even when the values themselves are subtracted.
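Here is Rule 1 as a short Python sketch, using made-up values for \( a \) and \( b \):

```python
# Rule 1 sketch (made-up values): y = a - b
a, da = 25.0, 0.2   # a = 25.0 +/- 0.2 cm
b, db = 10.0, 0.3   # b = 10.0 +/- 0.3 cm

y = a - b           # 15.0 cm
dy = da + db        # absolute uncertainties ADD, even though the values subtract
print(f"{y} +/- {dy} cm")  # 15.0 +/- 0.5 cm
```

Notice that the uncertainty is \( 0.2 + 0.3 = 0.5 \text{ cm} \), never \( 0.3 - 0.2 = 0.1 \text{ cm} \).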
Rule 2: Multiplication and Division
When you multiply or divide values, you add their fractional (or percentage) uncertainties.
If \( y = ab \) OR \( y = \frac{a}{b} \), then:
\( \frac{\Delta y}{y} = \frac{\Delta a}{a} + \frac{\Delta b}{b} \)
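Rule 2 works on fractional uncertainties, so there is one extra step: convert back to an absolute uncertainty at the end. A sketch with made-up values:

```python
# Rule 2 sketch (made-up values): y = a * b
a, da = 4.0, 0.2    # fractional uncertainty: 0.2 / 4.0 = 0.05 (5%)
b, db = 2.0, 0.1    # fractional uncertainty: 0.1 / 2.0 = 0.05 (5%)

y = a * b                   # 8.0
frac_y = da / a + db / b    # fractional uncertainties ADD: 0.05 + 0.05 = 0.10
dy = frac_y * y             # convert back to an absolute uncertainty: 0.8
print(f"{y} +/- {dy}")      # 8.0 +/- 0.8
```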
Rule 3: Powers
If a value is raised to a power \( n \), you multiply the fractional uncertainty by that power.
If \( y = a^n \), then:
\( \frac{\Delta y}{y} = n \times \frac{\Delta a}{a} \)
Did you know? This power rule is why measuring the diameter of a wire carefully is so critical when calculating its cross-sectional area (\( A = \pi r^2 \)). Because of the power \( n = 2 \), any percentage uncertainty in the radius is doubled in the final area!
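The doubling in the wire example can be checked directly. A sketch with a made-up radius measurement:

```python
import math

# Rule 3 sketch (made-up values): A = pi * r^2
r, dr = 2.00, 0.02          # r = 2.00 +/- 0.02 mm -> 1% uncertainty in r

A = math.pi * r ** 2
frac_A = 2 * (dr / r)       # the power n = 2 doubles the fractional uncertainty
print(f"{frac_A * 100:.0f}%")   # 2% -- twice the 1% uncertainty in r
```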
5. Numerical Substitution Method
Sometimes the formulas get very messy. The syllabus allows you to find the uncertainty by "numerical substitution."
To do this, you calculate the maximum possible value your answer could have and the minimum possible value it could have using the ranges provided by your uncertainties.
Step-by-Step:
1. Calculate the standard value (using the measured numbers).
2. Calculate the "Max Value" by using the ends of the uncertainty ranges that make the result as large as possible.
3. Calculate the "Min Value" by using the ends that make it as small as possible.
4. The uncertainty is roughly half the spread, \( \Delta y \approx \frac{y_{max} - y_{min}}{2} \). (In practice, \( y_{max} - y \) gives nearly the same answer.)
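The steps above can be sketched in Python. As an illustration, take the pendulum formula \( T = 2\pi\sqrt{L/g} \) with hypothetical values (note that making \( T \) as large as possible means using the *largest* \( L \) but the *smallest* \( g \), because \( g \) is in the denominator):

```python
import math

# Numerical-substitution sketch (hypothetical values): T = 2*pi*sqrt(L/g)
L, dL = 1.00, 0.01    # length: 1.00 +/- 0.01 m
g, dg = 9.81, 0.02    # g: 9.81 +/- 0.02 m/s^2

def period(L, g):
    return 2 * math.pi * math.sqrt(L / g)

T = period(L, g)                  # standard value
T_max = period(L + dL, g - dg)    # ends of the ranges that make T biggest
T_min = period(L - dL, g + dg)    # ends of the ranges that make T smallest
dT = (T_max - T_min) / 2          # roughly the uncertainty in T
print(f"T = {T:.3f} +/- {dT:.3f} s")
```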
Summary Checklist
- Can I distinguish between random and systematic errors?
- Do I understand that precision is about consistency and accuracy is about being close to the "true" value?
- When adding/subtracting, do I add absolute uncertainties?
- When multiplying/dividing, do I add percentage uncertainties?
- Do I remember to never subtract an uncertainty?
Great job! You've just mastered one of the most fundamental skills in Physics. Every experiment you do for the rest of your A-Levels will use these concepts!