Welcome to the World of Uncertainties!

In Physics, we love to measure things—the speed of light, the mass of an electron, or the length of a wire. But here is a secret: no measurement is ever perfect. Every time we use a tool, there is a tiny bit of doubt about the result. In this chapter, we are going to learn how to identify these "imperfections," how to talk about them like a pro, and how to do the math to show just how sure (or unsure) we are. Don't worry if this seems a bit "fuzzy" at first; by the end of these notes, you’ll be an expert at handling the limitations of the physical world!

1. Errors: Why Measurements Go Wrong

An error is simply the difference between the value you measured and the "true" value that exists in nature. We generally group these into two categories:

Random Errors

These are unpredictable. One measurement might be slightly too high, and the next slightly too low. They are like a scatter of dots on a target.

  • Causes: Human reaction time, slight fluctuations in room temperature, or a breeze affecting a sensitive balance.
  • How to fix: You can’t "remove" them entirely, but you can repeat the measurement and calculate a mean (average). This causes the random "highs" and "lows" to cancel each other out.
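As a quick illustration, here is a minimal Python sketch (using made-up pendulum timings) of how averaging repeated readings smooths out random scatter:

```python
# Hypothetical repeated timings of one pendulum swing, in seconds.
# Random error pushes some readings slightly high and some slightly low.
readings = [2.03, 1.98, 2.01, 1.97, 2.02]

# The mean lets the random "highs" and "lows" cancel out.
mean = sum(readings) / len(readings)
print(f"Mean = {mean:.2f} s")
```

Notice that the mean sits in the middle of the scatter, which is a better estimate of the true value than any single reading.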

Systematic Errors

These are consistent. If your measurement is always "off" by the same amount in the same direction, it’s systematic. Imagine a clock that is always 5 minutes fast.

  • Causes: Poorly calibrated equipment or a bad experimental setup. A common type is the Zero Error (e.g., a weighing scale that shows 0.1g before you even put anything on it).
  • How to fix: You must re-calibrate your equipment or subtract the known error from every reading. Repeating and averaging does not help here because the error is the same every time!
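For example, a known zero error can be removed by subtracting it from every reading. A minimal Python sketch, using the hypothetical 0.1 g zero error mentioned above:

```python
# The balance reads 0.1 g with nothing on the pan (a systematic zero error).
zero_error = 0.1  # g
raw_readings = [12.6, 8.4, 20.1]  # g, as displayed (made-up values)

# Subtract the same known offset from every reading.
corrected = [round(r - zero_error, 1) for r in raw_readings]
print(corrected)  # [12.5, 8.3, 20.0]
```

Because the error is identical each time, one subtraction fixes every reading; averaging would not have helped at all.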

Key Takeaway: Random errors are "scattered" (fixed by averaging); Systematic errors are "shifted" (fixed by better equipment/calibration).

2. The "A-Team" of Vocabulary

To be a physicist, you need to use specific words to describe your data. These are often confused, so let’s break them down:

  • Accuracy: How close your measurement is to the true value. If the true gravity is \(9.81 \, m/s^2\) and you get \(9.80 \, m/s^2\), you are very accurate!
  • Precision: How close your repeated measurements are to each other. If you measure gravity three times and get \(9.11, 9.12,\) and \(9.11 \, m/s^2\), your results are precise (consistent), but not accurate (they are far from the true \(9.81 \, m/s^2\)).
  • Resolution: The smallest change in a quantity that an instrument can detect. Example: A standard ruler has a resolution of 1 mm. A digital caliper might have a resolution of 0.01 mm.
  • Repeatability: Can you do the same experiment again with the same equipment and get the same results?
  • Reproducibility: Can someone else (or you using different equipment) follow your method and get the same results?

Memory Tip: Think of a dartboard. Accuracy is hitting the bullseye. Precision is hitting the same spot on the wall three times in a row, even if that spot isn't the bullseye!

3. Representing Uncertainty

Uncertainty is our way of putting a number on our doubt. There are three ways to write it:

A. Absolute Uncertainty

The actual range of the "doubt," usually written as \( \pm \).
Example: A length of \(20 \pm 1 \, mm\). The absolute uncertainty is \(1 \, mm\).

B. Fractional Uncertainty

The ratio of the uncertainty to the measured value.
\( \text{Fractional Uncertainty} = \frac{\text{Absolute Uncertainty}}{\text{Measured Value}} \)

C. Percentage Uncertainty

The most common way to compare errors.
\( \text{Percentage Uncertainty} = \frac{\text{Absolute Uncertainty}}{\text{Measured Value}} \times 100\% \)

Quick Review Box:
If you measure \(5.0 \, V\) with an uncertainty of \( \pm 0.1 \, V \):
Absolute = \(0.1 \, V\)
Fractional = \(0.1 / 5.0 = 0.02\)
Percentage = \(0.02 \times 100 = 2\%\)
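The review box above can be reproduced in a few lines of Python (an illustrative sketch, not part of the syllabus):

```python
value = 5.0     # V, the measured voltage
absolute = 0.1  # V, the absolute uncertainty

fractional = absolute / value   # ratio of uncertainty to measured value
percentage = fractional * 100   # the same thing, expressed as a percentage
print(f"Fractional = {fractional:.2f}, Percentage = {percentage:.0f}%")
```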

4. Combining Uncertainties (The Math Rules)

What happens if you use two measurements to calculate a third thing (like using mass and volume to find density)? The uncertainties "add up." Follow these simple rules:

Rule 1: Adding or Subtracting

When you add or subtract values, you ADD the ABSOLUTE uncertainties.
Example: You have two rods. Rod A is \(10 \pm 1 \, cm\). Rod B is \(5 \pm 1 \, cm\). The total length is \(15 \pm 2 \, cm\).
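In code, Rule 1 for the two rods looks like this (illustrative sketch):

```python
rod_a, u_a = 10.0, 1.0  # cm, value and absolute uncertainty
rod_b, u_b = 5.0, 1.0   # cm

total = rod_a + rod_b   # 15.0 cm
u_total = u_a + u_b     # add the ABSOLUTE uncertainties -> 2.0 cm
print(f"Total length = {total} ± {u_total} cm")
```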

Rule 2: Multiplying or Dividing

When you multiply or divide values, you ADD the PERCENTAGE uncertainties.
Don't worry if this seems tricky! Just convert everything to percentages first, add them, and then (if needed) convert back to a number at the end.
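Here is a sketch of that recipe using hypothetical mass and volume readings to find a density: divide the values, add the percentage uncertainties, then convert back to an absolute uncertainty at the end.

```python
mass, u_mass = 50.0, 1.0  # g, value ± absolute uncertainty (hypothetical)
vol, u_vol = 20.0, 0.5    # cm^3, value ± absolute uncertainty

density = mass / vol                             # 2.5 g/cm^3
pct_total = (u_mass / mass + u_vol / vol) * 100  # 2.0% + 2.5% = 4.5%
u_density = density * pct_total / 100            # back to absolute units
print(f"Density = {density} ± {round(u_density, 2)} g/cm^3")
```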

Rule 3: Raising to a Power

If a value is squared, cubed, or square-rooted, MULTIPLY the percentage uncertainty by the power.
Example: If the uncertainty in the radius \(r\) is \(3\%\), the uncertainty in the area of a circle (\(\pi r^2\)) is \(3\% \times 2 = 6\%\).
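The circle-area example translates directly into a short sketch:

```python
import math

r, pct_r = 10.0, 3.0   # cm, with a 3% uncertainty in r (hypothetical)
area = math.pi * r**2  # area of the circle, pi r^2
pct_area = pct_r * 2   # the power is 2, so double the % uncertainty -> 6%
print(f"Area = {area:.0f} cm^2 ± {pct_area}%")
```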

Key Takeaway: Absolute for Add/Subtract; Percentage for Multiply/Divide/Powers!

5. Uncertainties on Graphs

In your practical exams, you will often draw graphs. We don't just draw dots; we draw Error Bars.

  • Error Bars: These are little "whiskers" drawn above, below, or to the sides of a data point to show the absolute uncertainty.
  • Line of Best Fit: This is the "average" line that passes through as many error bars as possible.
  • Lines of Worst Fit: These are the steepest or shallowest possible lines you can draw that still pass through all the error bars.

Finding Uncertainty in Gradients

To find the uncertainty in your gradient (slope), use this "recipe":
1. Draw your Best Fit Line and calculate its gradient (\(m_{best}\)).
2. Draw the Worst Fit Line (usually the one that goes from the bottom of the first error bar to the top of the last error bar) and calculate its gradient (\(m_{worst}\)).
3. \( \text{Uncertainty} = |m_{best} - m_{worst}| \)

Note: The same method works for the y-intercept! Just find the difference between the intercept of your best line and your worst line.
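A minimal sketch of step 3, with made-up gradients read off a best-fit and a worst-fit line:

```python
m_best = 2.45   # gradient of the best-fit line (hypothetical)
m_worst = 2.61  # gradient of the steepest worst-fit line (hypothetical)

# The uncertainty is the difference between the two gradients.
uncertainty = abs(m_best - m_worst)
print(f"Gradient = {m_best} ± {uncertainty:.2f}")
```

Swap the gradients for the y-intercepts and the same line of code gives you the uncertainty in the intercept.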

6. Significant Figures and Uncertainty

There is a special link between how you round your numbers and how much you trust them.

The Rule: The number of significant figures in your final answer should match the measurement with the fewest significant figures (i.e., the least precise measurement). Also, your uncertainty should generally be rounded to one significant figure, and your value should be rounded to match that same decimal place.

Example: You shouldn't write \(5.23421 \pm 0.1 \, m\). If you are only sure to the nearest \(0.1\), the correct way to write it is \(5.2 \pm 0.1 \, m\).
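In Python, rounding the value to the same decimal place as a 1 s.f. uncertainty looks like this (sketch):

```python
value = 5.23421    # m, raw calculated value
uncertainty = 0.1  # m, already rounded to 1 significant figure

# An uncertainty of 0.1 m means the doubt sits in the first decimal
# place, so round the value to 1 decimal place to match.
print(f"{round(value, 1)} ± {uncertainty} m")  # 5.2 ± 0.1 m
```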

Did you know? Using more significant figures than your uncertainty allows is like saying "I'm 100% sure it's exactly 5.23421," when your ruler can only measure 5.2! It's actually scientifically "lying."

Final Summary Checklist

  • Can you tell the difference between random and systematic errors? (Section 1)
  • Do you know the difference between precision and accuracy? (Section 2)
  • Can you calculate percentage uncertainty? (Section 3)
  • Do you know when to add absolute vs. percentage uncertainties? (Section 4)
  • Can you explain how to find the uncertainty in a gradient using a "worst fit" line? (Section 5)