Welcome to the World of Probability!

Ever wondered what the chances are of it raining during your weekend football match, or how likely you are to win a game of cards? That is exactly what Probability is all about! In this chapter, we will learn how to measure "chance" using numbers, how to predict outcomes, and how to use clever diagrams to make sense of even the trickiest situations. Don't worry if this seems a bit mathematical at first—we'll take it one step at a time!

1. The Language and Scale of Probability

In Statistics, we measure the likelihood of an event happening on a scale from 0 to 1 (or 0% to 100%).

The Probability Scale:

  • 0 (or 0%): Impossible. It absolutely cannot happen (like a pig flying).
  • 0.5 (or 50%): Evens. It is just as likely to happen as not to happen (like a fair coin landing on heads).
  • 1 (or 100%): Certain. It is guaranteed to happen (like the sun rising tomorrow).

We use words like unlikely (between 0 and 0.5) and likely (between 0.5 and 1) to describe events in between.

Quick Review: Probabilities can be written as fractions, decimals, or percentages. For example, a 1 in 4 chance is the same as \( 1/4 \), \( 0.25 \), or \( 25\% \).

2. Theoretical vs. Experimental Probability

There are two main ways to find a probability:

Theoretical Probability

This is what should happen in an ideal world. If you have a fair, six-sided dice, the probability of rolling a 4 is \( 1/6 \). We calculate it as:

\( P(\text{event}) = \frac{\text{Number of ways the event can happen}}{\text{Total number of possible outcomes}} \)

Experimental Probability (Relative Frequency)

Sometimes we don't know the theoretical probability, so we conduct an experiment. If you flip a bottle 50 times and it lands upright 10 times, the Relative Frequency (experimental probability) is \( 10/50 = 0.2 \).

Important Tip: The more trials you do (e.g., flipping the bottle 500 times instead of 50), the more reliable your result will be. As the number of trials increases, the experimental probability gets closer and closer to the theoretical probability.
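You can watch this "settling down" happen with a short computer simulation. The Python sketch below (not part of the course, purely an illustration) rolls a virtual fair dice many times and prints the relative frequency of rolling a 4, which should drift towards the theoretical value of \( 1/6 \approx 0.167 \) as the number of trials grows:

```python
import random

def relative_frequency(trials, target=4):
    """Roll a fair six-sided dice `trials` times and return the
    relative frequency (experimental probability) of rolling `target`."""
    hits = sum(1 for _ in range(trials) if random.randint(1, 6) == target)
    return hits / trials

# Theoretical probability of rolling a 4 is 1/6 ≈ 0.1667.
# With more trials, the experimental result settles near this value.
for trials in (50, 500, 50_000):
    print(trials, round(relative_frequency(trials), 4))
```

Run it a few times: with 50 rolls the answer jumps around quite a lot, but with 50,000 rolls it is usually very close to 0.167.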

Identifying Bias

We can use experiments to see if a tool is "fair" or "biased." If you roll a dice 60 times and it lands on '6' forty times, that's much higher than the expected 10 times. This suggests the dice might be biased (unfair).

Key Takeaway: Theoretical probability is what we expect; Experimental probability is what we actually see in real life.

3. Expected Frequency

If you know the probability of an event, you can predict how many times it will happen over a certain number of trials. This is called the Expected Frequency.

The Formula:
\( \text{Expected Frequency} = \text{Probability} \times \text{Total number of trials} \)

Example: If the probability of a seed germinating is 0.8, and you plant 200 seeds, how many do you expect to grow?
\( 0.8 \times 200 = 160 \text{ seeds} \)
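As a quick check, the same calculation can be written as a tiny Python function (purely illustrative):

```python
def expected_frequency(probability, trials):
    """Expected Frequency = Probability × Total number of trials."""
    return probability * trials

# Worked example from the text: P(seed germinates) = 0.8, 200 seeds planted.
print(expected_frequency(0.8, 200))  # 160.0
```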

4. Understanding Risk

In Statistics, "risk" is just another word for the probability of an event happening, usually an unfavourable event such as falling ill.

  • Absolute Risk: This is simply the probability of an event occurring for a specific group. Example: If 1 in 100 people get a cold, the absolute risk is 0.01.
  • Relative Risk: This compares the risk of two different groups. It's written as a ratio.

Relative Risk Formula:
\( \text{Relative Risk} = \frac{\text{Risk for Group A}}{\text{Risk for Group B}} \)

Example: If Group A has a pass rate of 0.6 and Group B has a pass rate of 0.2, the relative risk of passing is \( 0.6 / 0.2 = 3 \). This means people in Group A are 3 times as likely to pass as people in Group B.
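If you want to avoid the rounding errors that creep in when dividing decimals, Python's `Fraction` type does the arithmetic exactly. This illustrative sketch reproduces the example above:

```python
from fractions import Fraction

def relative_risk(risk_a, risk_b):
    """Relative Risk = Risk for Group A / Risk for Group B, kept exact."""
    return Fraction(risk_a) / Fraction(risk_b)

# Pass rates from the example: Group A 0.6, Group B 0.2.
# Passing the decimals as strings keeps them exact (Fraction("0.6") is 3/5).
print(relative_risk("0.6", "0.2"))  # 3
```

Using fractions rather than ordinary decimals is a deliberate choice here: `0.6 / 0.2` with ordinary floating-point numbers gives a value a tiny bit below 3, whereas the fraction arithmetic gives exactly 3.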

5. Representing Outcomes: Diagrams

When dealing with multiple events, diagrams help us see every possibility without getting confused.

Sample Space Diagrams

These are usually grids used when you have two events, like rolling two dice. You list one event on the top and the other down the side to see all combinations.
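For two dice, the full sample space is a \( 6 \times 6 \) grid of 36 equally likely outcomes. The illustrative Python sketch below lists them all and counts, for example, how many give a total of 7:

```python
from itertools import product

# Every outcome of rolling two fair six-sided dice: the 6 × 6 grid.
sample_space = list(product(range(1, 7), repeat=2))
print(len(sample_space))  # 36 outcomes in total

# Count the outcomes that give a total of 7 (the "favourable" outcomes).
total_seven = [pair for pair in sample_space if sum(pair) == 7]
print(len(total_seven), "out of", len(sample_space))  # 6 out of 36, i.e. 1/6
```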

Two-Way Tables

These show the frequency of two different categories. For example, a table could show students' genders (Boy/Girl) vs. whether they walk or take the bus to school.

Venn Diagrams

These use overlapping circles to show relationships between sets of data. Mutually Exclusive events are represented by circles that do not overlap because they cannot happen at the same time.

Tree Diagrams

These are great for "successive" events (things happening one after another). Each "branch" shows a possible outcome and its probability.

Common Mistake: When using a tree diagram, remember to multiply the probabilities along the branches to find the chance of a specific path (Event A AND Event B).
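Here is the same idea as an illustrative Python sketch, using made-up probabilities (a 0.3 chance of rain on each of two days, with the days assumed independent). Multiplying along the branches gives each path's probability, and because the four paths cover every possibility, their probabilities sum to 1:

```python
# Made-up probabilities: a 0.3 chance of rain on each of two days,
# with the two days assumed independent.
p_rain = 0.3
p_dry = 1 - p_rain

# Multiply along the branches to get each path's probability.
paths = {
    ("rain", "rain"): p_rain * p_rain,
    ("rain", "dry"):  p_rain * p_dry,
    ("dry", "rain"):  p_dry * p_rain,
    ("dry", "dry"):   p_dry * p_dry,
}

print(round(paths[("rain", "dry")], 2))  # 0.21

# The four paths are exhaustive and mutually exclusive, so they sum to 1.
print(round(sum(paths.values()), 10))  # 1.0
```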

6. The Rules of Probability

Mutually Exclusive Events

Events that cannot happen at the same time (like a light being both ON and OFF).
The Addition Law: \( P(A \text{ or } B) = P(A) + P(B) \)

Exhaustive Events

A set of events is exhaustive if they cover all possible outcomes. The sum of probabilities for a set of exhaustive, mutually exclusive events is always 1.

Independent Events

Two events are independent if one does not affect the other (like rolling a dice and then flipping a coin).
The Multiplication Law: \( P(A \text{ and } B) = P(A) \times P(B) \)
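For example, the chance of rolling a 4 on a fair dice and flipping heads on a fair coin is \( \frac{1}{6} \times \frac{1}{2} = \frac{1}{12} \). This illustrative sketch checks the arithmetic with exact fractions:

```python
from fractions import Fraction

p_four = Fraction(1, 6)   # P(rolling a 4 on a fair dice)
p_heads = Fraction(1, 2)  # P(a fair coin landing heads)

# Multiplication Law: P(A and B) = P(A) × P(B) for independent events.
p_both = p_four * p_heads
print(p_both)  # 1/12
```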

Conditional Probability

This is when the probability of an event changes because something else has already happened. We use the notation \( P(B|A) \), which means "The probability of B, given that A has happened."

The Formula:
\( P(B|A) = \frac{P(A \text{ and } B)}{P(A)} \)

Memory Aid: Think of "Conditional" as "The Situation has Changed." For example, if you eat a chocolate from a box and don't put it back, the probability of picking a certain flavor next time has changed because there are fewer chocolates left!
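The chocolate-box idea can be turned into numbers. Suppose (hypothetically) the box holds 10 chocolates, 4 of them caramel, and you eat the first one you pick. This illustrative Python sketch works through the probabilities exactly:

```python
from fractions import Fraction

# Hypothetical box: 10 chocolates, 4 of them caramel.
p_first_caramel = Fraction(4, 10)

# After eating one caramel, 3 caramels remain out of 9 chocolates,
# so the situation has changed:
p_second_given_first = Fraction(3, 9)

# P(A and B) = P(A) × P(B|A)
p_both_caramel = p_first_caramel * p_second_given_first
print(p_both_caramel)  # 2/15

# Rearranging recovers the formula from the text: P(B|A) = P(A and B) / P(A)
print(p_both_caramel / p_first_caramel)  # 1/3
```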

Summary Checklist

  • Do you know the probability scale runs from 0 to 1?
  • Can you calculate expected frequency (\( P \times \text{Trials} \))?
  • Do you know that for independent events, you multiply the probabilities?
  • Can you identify a biased experiment by comparing it to theoretical results?

Don't worry if conditional probability feels tricky—just remember to ask yourself: "Did the total number of outcomes change?"