Welcome to the World of Binary!
Ever wondered how a computer, which is essentially a collection of tiny electronic switches, can understand complex things like your favorite video game or a math homework assignment? It all comes down to the binary number system. In this chapter, we are going to learn the "language of 1s and 0s" and see how computers handle everything from simple counting to complex decimals.
3.5.3.1 Unsigned Binary
The simplest way a computer counts is using unsigned binary. "Unsigned" simply means the number is always positive (it has no plus or minus sign).
In a binary system with n bits (places for 1s or 0s):
- The minimum value is always 0 (all bits are 0).
- The maximum value is calculated using the formula: \( 2^n - 1 \).
Example: If you have 8 bits (a byte), the maximum value you can represent is \( 2^8 - 1 \), which is \( 256 - 1 = 255 \). So, an 8-bit unsigned integer can represent any whole number from 0 to 255.
Quick Review Box:
Unsigned = Positive numbers only.
Range = \( 0 \) to \( 2^n - 1 \).
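The range formula above is easy to check for yourself in Python (a short illustrative snippet; `unsigned_range` is just a name chosen here, not a built-in):

```python
# Range of an n-bit unsigned integer: minimum 0, maximum 2**n - 1.
def unsigned_range(n_bits):
    return 0, 2 ** n_bits - 1

print(unsigned_range(8))   # (0, 255) -- one byte
print(unsigned_range(16))  # (0, 65535)
```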
3.5.3.2 Unsigned Binary Arithmetic
Adding and multiplying in binary is very similar to how you did it in primary school with decimal numbers, just with fewer digits to worry about!
Binary Addition Rules:
- \( 0 + 0 = 0 \)
- \( 0 + 1 = 1 \)
- \( 1 + 1 = 10 \) (Write 0, carry 1)
- \( 1 + 1 + 1 = 11 \) (Write 1, carry 1)
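The four rules above are all you need to add binary numbers of any length. Here is one way to sketch that in Python, working right to left column by column (the string-based approach and the name `add_binary` are illustrative choices):

```python
def add_binary(a, b):
    """Add two binary strings column by column, carrying just like in primary school."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)   # pad to equal length with leading zeros
    result = []
    carry = 0
    for bit_a, bit_b in zip(reversed(a), reversed(b)):
        total = int(bit_a) + int(bit_b) + carry
        result.append(str(total % 2))       # the bit we write down
        carry = total // 2                  # the bit we carry
    if carry:
        result.append('1')
    return ''.join(reversed(result))

print(add_binary('1011', '0110'))  # '10001' (11 + 6 = 17)
```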
Binary Multiplication:
Binary multiplication is even easier because you are only ever multiplying by 0 or 1. It’s just a series of shifts and additions. If you multiply by 1, you copy the number. If you multiply by 0, the result is 0.
Did you know?
Multiplying a binary number by 2 is as simple as adding a 0 to the end of it! This is called a logical shift left.
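Both ideas, shift-and-add multiplication and the "multiply by 2 = shift left" trick, can be sketched in Python (`multiply_binary` is an illustrative name; Python's `<<` operator performs the logical shift left):

```python
def multiply_binary(a, b):
    """Shift-and-add: for each 1 bit in the multiplier, add a shifted copy of a."""
    result = 0
    shift = 0
    while b:
        if b & 1:                   # current bit of the multiplier is 1: copy the number
            result += a << shift    # shifted left by the bit's position
        b >>= 1
        shift += 1
    return result

print(multiply_binary(0b101, 0b11))  # 15 (5 * 3)
print(bin(0b0110 << 1))              # '0b1100' -- shifting left doubles the value
```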
3.5.3.3 Signed Binary using Two's Complement
What if we need to represent a negative number? We use a system called Two's Complement. In this system, the most significant bit (the one furthest to the left) represents a negative value.
Example for 8 bits: Instead of the columns being 128, 64, 32... the columns are -128, 64, 32, 16, 8, 4, 2, 1.
How to convert a positive number to negative:
Don’t worry if this seems tricky! Just follow these two simple steps:
- Flip all the bits (change 1s to 0s and 0s to 1s).
- Add 1 to the result.
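The two steps above (flip all the bits, then add 1) can be sketched in Python; here XOR with an all-ones mask does the flipping (the function name and the 8-bit default are illustrative choices):

```python
def twos_complement(value, n_bits=8):
    """Negate a positive number: flip all the bits, then add 1."""
    mask = (1 << n_bits) - 1        # e.g. 0b11111111 for 8 bits
    flipped = value ^ mask          # step 1: flip all the bits
    return (flipped + 1) & mask     # step 2: add 1 (wrapping back into n bits)

print(format(twos_complement(5), '08b'))  # '11111011' -- that's -5 in 8-bit two's complement
```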
Range of Two's Complement:
With n bits, the range is: \( -2^{n-1} \) to \( +2^{n-1} - 1 \).
For 8 bits, this is -128 to +127.
Key Takeaway: Two's complement allows us to perform subtraction by simply adding a negative number. To do \( 5 - 3 \), the computer actually calculates \( 5 + (-3) \).
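As a sketch of this key takeaway, here is \( 5 - 3 \) computed as \( 5 + (-3) \) in 8-bit two's complement (a minimal Python illustration; real hardware does the same thing with an adder circuit):

```python
# 5 - 3 computed as 5 + (-3), all within 8 bits.
n_bits = 8
mask = (1 << n_bits) - 1          # 255
neg_3 = ((3 ^ mask) + 1) & mask   # two's complement of 3 -> 0b11111101
result = (5 + neg_3) & mask       # the carry out of the top bit is discarded
print(result)                     # 2
```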
3.5.3.4 Numbers with a Fractional Part
Computers don't just work with whole numbers; they need decimals too! There are two ways to do this:
1. Fixed Point
The binary point is "fixed" at a specific position. For example, in an 8-bit number, we might decide 4 bits are for the whole number and 4 bits are for the fraction.
Analogy: Imagine a price tag where the decimal is always after the second digit ($99.99). It's simple, but you can't represent very large or very tiny numbers easily.
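A minimal Python sketch of such a format, assuming 4 whole-number bits and 4 fraction bits as in the example above (the helper names are illustrative):

```python
# 8-bit fixed point: 4 integer bits, 4 fraction bits.
# The stored value is the real value scaled by 2**4, so every number
# is a multiple of 1/16 = 0.0625.
FRACTION_BITS = 4

def to_fixed(x):
    return round(x * 2 ** FRACTION_BITS)

def from_fixed(raw):
    return raw / 2 ** FRACTION_BITS

raw = to_fixed(3.25)
print(raw, from_fixed(raw))   # 52 3.25  (3.25 = 0011.0100 in binary)
```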
2. Floating Point
This is like Scientific Notation (e.g., \( 6.02 \times 10^{23} \)). It uses two parts:
- Mantissa: The actual digits of the number.
- Exponent: Tells us where the binary point "floats" to.
In your exam, both parts will be represented using Two's Complement.
Key Concept:
- More bits in the Mantissa = More Precision (more decimal places).
- More bits in the Exponent = Larger Range (bigger or smaller numbers).
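A tiny worked example of the mantissa/exponent idea, sketched in Python (the particular values are just for illustration):

```python
# Value = mantissa * 2**exponent.
# Here the mantissa is 0.101 in binary (= 1/2 + 1/8 = 0.625)
# and the exponent is 3, so the binary point "floats" 3 places right.
mantissa = 0.625
exponent = 3
print(mantissa * 2 ** exponent)   # 5.0  (0.101 becomes 101.0)
```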
3.5.3.9 Normalisation
To make sure we are getting the most precision out of our bits, we normalise floating point numbers. A normalised number always starts with:
- 01 for a positive number.
- 10 for a negative number.
This ensures that we don't waste bits on leading zeros (or ones for negative numbers).
3.5.3.5 & 3.5.3.6 Errors in Binary
Computers are fast, but they aren't always perfect at math! Some decimal numbers (like 0.1) cannot be represented exactly in binary. This leads to rounding errors.
Measuring Errors:
- Absolute Error: The actual difference between the real number and its binary representation: \( |\text{True Value} - \text{Represented Value}| \).
- Relative Error: The absolute error divided by the true value. This is often more useful because it shows how significant the error is compared to the size of the number.
Example: If you lose $1 when you have $10, that's a huge error (10%). If you lose $1 when you have $1,000,000, that's a tiny error (0.0001%). Relative error tells us that the second loss is much more acceptable!
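The two error measures can be computed directly. This Python sketch assumes a format with 4 fraction bits (so every representable value is a multiple of 1/16), which cannot store 0.1 exactly:

```python
true_value = 0.1
# Nearest representable value with 4 fraction bits (multiples of 1/16 = 0.0625):
represented = round(true_value * 16) / 16        # 0.125

absolute_error = abs(true_value - represented)   # about 0.025
relative_error = absolute_error / true_value     # about 0.25, i.e. 25%
print(absolute_error, relative_error)
```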
3.5.3.7 Underflow and Overflow
These happen when a calculation goes outside the limits of what the bits can hold.
- Overflow: The result is too large to fit in the allocated bits (like trying to fit 1000 into 3 digits).
- Underflow: The result is too small (too close to zero) to be represented. This often happens with floating point numbers when the exponent becomes too small.
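Overflow is easy to demonstrate by wrapping a result back into 8 bits (a Python sketch; real hardware simply discards the carry out of the top bit):

```python
# 8-bit unsigned overflow: 250 + 10 does not fit in 8 bits.
n_bits = 8
mask = (1 << n_bits) - 1       # 255, the largest 8-bit unsigned value
result = (250 + 10) & mask     # the true answer (260) wraps back into range
print(result)                  # 4 -- the carry out of the top bit is lost
```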
3.5.3.8 Comparing Fixed and Floating Point
Here is a quick summary to help you remember the differences:
Fixed Point:
- Advantage: Faster for the computer to process (simpler math).
- Disadvantage: Very limited range of numbers.
Floating Point:
- Advantage: Massive range (can represent the size of an atom or the distance to a star).
- Disadvantage: Slower to process; can suffer from more complex precision issues.
Summary Takeaway: Choose Fixed Point for speed and simple ranges (like money); choose Floating Point for complex scientific calculations where you need huge ranges.