Welcome to the World of Bernoulli and Binomial Distributions!

In this chapter, we are going to explore some of the most common ways to measure probability in real life. Have you ever wondered what the chances are of getting exactly 3 heads if you flip a coin 10 times? Or how many items in a factory line might be faulty? These are the types of questions the Bernoulli and Binomial distributions help us answer.

Don’t worry if statistics feels a bit heavy at first. We’ll break everything down into simple "Yes or No" steps!


1. The Bernoulli Distribution: The Power of One

The Bernoulli distribution is the simplest building block in statistics. It describes a single experiment (called a trial) that has exactly two possible outcomes: Success or Failure.

Think of it like this: You take a single shot at a basketball hoop. You either make the basket (Success) or you don't (Failure). There is no "in-between."

Key Components

We use the following symbols to describe a Bernoulli trial:

1. \(p\): The probability of Success (the outcome we code as the value 1).
2. \(1 - p\): The probability of Failure (the outcome we code as the value 0). We sometimes call this \(q\).
3. The total probability must always add up to 1: \(p + (1 - p) = 1\).

Mean and Variance

Even for one single trial, we can calculate the average (Mean) and the spread (Variance):

Mean: \(E(X) = p\)
Variance: \(Var(X) = p(1 - p)\)

Quick Tip: If the chance of winning a game is 0.7, then on average, for every 1 game you play, you "win" 0.7 of it. It sounds weird, but that's what the Mean tells us!

Key Takeaway: A Bernoulli distribution is for one single trial with only two outcomes.
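If you like to experiment, here is a minimal Python sketch (not part of any exam syllabus; the function name `bernoulli_trial` is our own) that simulates many Bernoulli trials and compares the sample average with the formula \(E(X) = p\):

```python
import random

def bernoulli_trial(p):
    """Simulate one Bernoulli trial: return 1 (Success) with probability p, else 0 (Failure)."""
    return 1 if random.random() < p else 0

p = 0.7
mean = p                # E(X) = p
variance = p * (1 - p)  # Var(X) = p(1 - p)

# Estimate the mean from many simulated trials; it should land close to p.
random.seed(42)
trials = [bernoulli_trial(p) for _ in range(100_000)]
sample_mean = sum(trials) / len(trials)
```

With \(p = 0.7\), the sample mean settles very close to 0.7, which is exactly the "you win 0.7 of a game on average" idea from the Quick Tip.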


2. The Binomial Distribution: Repeating Success

What happens if we take that single Bernoulli trial and repeat it many times? That’s where the Binomial distribution comes in. It is simply the sum of \(n\) independent Bernoulli trials, each with the same probability of success \(p\).

Example: Flipping a coin 10 times and counting how many "Heads" you get. Each flip is a Bernoulli trial; all 10 flips together make a Binomial distribution.

The "BINS" Criteria

To use the Binomial distribution, the situation must meet these four conditions (remember the word BINS):

B - Binary: Only two outcomes (Success/Failure).
I - Independent: One trial does not affect the next (e.g., the coin doesn't "remember" the last flip).
N - Number: There is a fixed number of trials (\(n\)).
S - Success: The probability of success (\(p\)) stays the same for every trial.

Did you know? If you are picking marbles from a bag and not putting them back, it is not Binomial because the probability changes each time!
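The "sum of Bernoulli trials" idea can be sketched in a few lines of Python (a toy simulation only; the function name `binomial_sample` is our own, chosen for illustration):

```python
import random

def binomial_sample(n, p):
    """One Binomial observation: the sum of n independent Bernoulli trials."""
    return sum(1 if random.random() < p else 0 for _ in range(n))

# Flip a fair coin 10 times and count the heads.
random.seed(0)
heads = binomial_sample(10, 0.5)
```

Note that each call to `random.random()` is independent of the last, and `p` never changes between flips, so the BINS conditions hold.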


3. Calculating Binomial Probabilities

To find the probability of getting exactly \(x\) successes in \(n\) trials, we use this formula:

\(P(X = x) = \binom{n}{x} p^x (1 - p)^{n-x}\)

Breaking Down the Formula

1. \(\binom{n}{x}\): This is the combination part. It tells us how many different ways we can arrange the successes. You can find this on your calculator using the \(nCr\) button.
2. \(p^x\): The probability of success raised to the number of successes we want.
3. \((1 - p)^{n-x}\): The probability of failure raised to the number of failures left over.

Example: If you throw a fair die 5 times, what is the probability of getting exactly two 6s?
\(n = 5\) (total throws)
\(x = 2\) (desired 6s)
\(p = 1/6\) (chance of a 6)
\(1 - p = 5/6\) (chance of not a 6)

Calculation: \(\binom{5}{2} \times (1/6)^2 \times (5/6)^3 = 10 \times \frac{1}{36} \times \frac{125}{216} = \frac{1250}{7776} \approx 0.161\)
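If you have Python handy, you can verify this with the standard library's `math.comb` function, which plays the role of the \(nCr\) button:

```python
from math import comb

n, x, p = 5, 2, 1/6  # 5 throws, want exactly two 6s, chance of a 6 is 1/6

# P(X = x) = C(n, x) * p^x * (1 - p)^(n - x)
prob = comb(n, x) * p**x * (1 - p)**(n - x)
# prob equals 1250/7776, roughly 0.161
```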

Common Mistake to Avoid

Make sure the powers (\(x\) and \(n-x\)) always add up to \(n\). In the example above, \(2 + 3 = 5\). If they don't add up, you've missed a trial!

Key Takeaway: Use \(\binom{n}{x}\) to find the number of ways, then multiply by the probabilities of success and failure.


4. Mean, Variance, and Standard Deviation

For a Binomial distribution \(X \sim B(n, p)\), calculating the "average" outcome and the "spread" is very straightforward:

Mean (Expected Value): \(E(X) = np\)
Analogy: If you flip a coin 100 times (\(n=100\)) and the chance of heads is 0.5 (\(p=0.5\)), you expect to get \(100 \times 0.5 = 50\) heads.

Variance: \(Var(X) = np(1 - p)\)
This measures how much the results vary from the mean.

Standard Deviation: \(\sigma = \sqrt{np(1 - p)}\)
This is simply the square root of the variance.
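These three formulas translate directly into code. Here is a short Python check using the coin-flip numbers from the analogy above:

```python
from math import sqrt

n, p = 100, 0.5          # 100 flips of a fair coin

mean = n * p             # E(X) = np
variance = n * p * (1 - p)  # Var(X) = np(1 - p)
std_dev = sqrt(variance)    # sigma = sqrt(np(1 - p))
# mean = 50.0, variance = 25.0, std_dev = 5.0
```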


5. Using Statistical Tables

Sometimes, calculating probabilities one by one is too slow, especially for "at most" or "less than" questions (e.g., \(P(X \leq 4)\)).

In your exam, you may be given cumulative binomial tables. These tables tell you the probability of \(X\) being less than or equal to a certain value.

Tricks for Tables:
1. If the question asks for \(P(X \leq 3)\), look up 3 directly in the table.
2. If the question asks for \(P(X < 3)\), this is the same as \(P(X \leq 2)\).
3. If the question asks for \(P(X \geq 3)\), use the "complement" rule: \(1 - P(X \leq 2)\).
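If no table is to hand, you can build the cumulative probabilities yourself by adding up the formula from Section 3. Here is a small Python sketch (the helper names `binomial_pmf` and `binomial_cdf` are our own, chosen for illustration), applied to 10 fair coin flips:

```python
from math import comb

def binomial_pmf(n, p, x):
    """P(X = x) for a Binomial(n, p) random variable."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

def binomial_cdf(n, p, x):
    """P(X <= x): add up the individual probabilities from 0 to x."""
    return sum(binomial_pmf(n, p, k) for k in range(x + 1))

n, p = 10, 0.5
p_at_most_3 = binomial_cdf(n, p, 3)       # P(X <= 3): look up 3 directly
p_less_than_3 = binomial_cdf(n, p, 2)     # P(X < 3)  = P(X <= 2)
p_at_least_3 = 1 - binomial_cdf(n, p, 2)  # P(X >= 3) = 1 - P(X <= 2)
```

Notice how the last two lines are exactly the table tricks above, just written in code.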

Quick Review Box
Bernoulli: 1 trial. Mean = \(p\). Var = \(p(1-p)\).
Binomial: \(n\) trials. Mean = \(np\). Var = \(np(1-p)\).
Check "BINS": Binary, Independent, Number fixed, Success constant.
Total Probability: Always equals 1.


Don't worry if this seems tricky at first! The more you practice identifying \(n\), \(p\), and \(x\), the easier it becomes. You've got this!