Welcome to Data Representation!
Ever wondered how a computer, which is basically a collection of tiny electronic switches, can show you a high-definition movie or let you play complex games? It all comes down to Data Representation. In this chapter, we will learn how computers "count" and how they turn simple numbers into the text and symbols we see every day. Don't worry if it seems like a lot of math at first—once you see the patterns, it’s as easy as 1, 2, 3 (or rather, 0 and 1)!
1. The Three Main Number Systems
In our daily lives, we use the Decimal system. But computers prefer Binary, and programmers often use Hexadecimal to make things easier to read. Let’s break them down:
A. Decimal (Base-10)
This is what you’ve used since kindergarten. It uses 10 digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9.
Analogy: Think of this as counting on your ten fingers.
B. Binary (Base-2)
Computer circuits treat an electrical signal as either "On" or "Off." Therefore, they only use 2 digits: 0 and 1. Each digit is called a bit (short for Binary Digit).
Analogy: A light switch. It’s either up (1) or down (0).
C. Hexadecimal (Base-16)
This system uses 16 symbols. Since we ran out of normal numbers after 9, we use letters:
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A (10), B (11), C (12), D (13), E (14), F (15).
Quick Review: Why use Hexadecimal?
Binary strings like 10110110 are hard for humans to read. Hexadecimal acts as a "shorthand." One Hex digit can represent four Binary bits! It’s much easier for a programmer to remember B6 than 10110110.
Key Takeaway: Binary is for the computer's hardware; Decimal is for humans; Hexadecimal is the "middle ground" for programmers.
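As a quick sketch of all three systems side by side, Python's built-in `bin()` and `hex()` functions can display the same value in each base (using the B6 example from above):

```python
n = 182        # the value a programmer would write in hex as B6

print(n)       # 182        (decimal, base-10)
print(bin(n))  # 0b10110110 (binary, base-2; the "0b" prefix marks it as binary)
print(hex(n))  # 0xb6       (hexadecimal, base-16; "b6" is B6 in lower case)
```

Notice how the eight binary bits collapse into just two hex digits, exactly the "shorthand" described above.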
2. Converting Between Bases
Converting numbers might seem intimidating, but it’s just following a recipe. Let’s look at the most common conversions you’ll need for the 9569 syllabus.
A. Binary to Decimal (The "Weighting" Method)
Each position in a binary number has a "weight" that is a power of 2, starting from the right ( \( 2^0 \) ).
Step-by-Step Example: Convert 1011 to Decimal
1. Write out the place values (powers of 2) for each position, so from left to right you get: \( 8, 4, 2, 1 \)
2. Place your binary digits underneath:
(8 × 1) + (4 × 0) + (2 × 1) + (1 × 1)
3. Add them up: \( 8 + 0 + 2 + 1 = 11 \)
4. So, \( 1011_2 = 11_{10} \)
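The "weighting" method above can be written as a short Python function. This is a minimal sketch for illustration; Python's built-in `int(bits, 2)` does the same conversion in one call:

```python
def binary_to_decimal(bits: str) -> int:
    """Multiply each bit by its place value (a power of 2) and add them up."""
    total = 0
    for position, bit in enumerate(reversed(bits)):  # rightmost bit is position 0
        total += int(bit) * (2 ** position)
    return total

print(binary_to_decimal("1011"))  # 11, matching the worked example
print(int("1011", 2))             # 11 -- the built-in gives the same answer
```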
B. Decimal to Binary (The "Divide by 2" Method)
Keep dividing the number by 2 and keep track of the remainders.
Step-by-Step Example: Convert 13 to Binary
1. 13 ÷ 2 = 6, Remainder 1
2. 6 ÷ 2 = 3, Remainder 0
3. 3 ÷ 2 = 1, Remainder 1
4. 1 ÷ 2 = 0, Remainder 1
5. Read the remainders from bottom to top: 1101.
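The "divide by 2" recipe translates almost line for line into code. A minimal sketch (the function name is just for illustration):

```python
def decimal_to_binary(n: int) -> str:
    """Repeatedly divide by 2, collecting remainders, then read them bottom-to-top."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))  # this step's remainder (0 or 1)
        n //= 2                        # integer-divide by 2 and repeat
    return "".join(reversed(remainders))  # "read from bottom to top"

print(decimal_to_binary(13))  # 1101, matching the worked example
```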
C. Binary to Hexadecimal (The "Magic 4" Rule)
This is the easiest conversion! Group the binary bits into sets of four (starting from the right).
Step-by-Step Example: Convert 11101011 to Hex
1. Split into two groups: 1110 and 1011
2. Convert each group to decimal: \( 1110 = 14 \), \( 1011 = 11 \)
3. Convert those to Hex letters: \( 14 = E \), \( 11 = B \)
4. Result: EB
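The "Magic 4" rule can be sketched in Python too. This version pads the binary string on the left so it splits evenly into groups of four (the function name is just for illustration):

```python
def binary_to_hex(bits: str) -> str:
    """Group bits in fours from the right; each group becomes one hex digit."""
    bits = bits.zfill((len(bits) + 3) // 4 * 4)  # pad to a multiple of 4 bits
    digits = "0123456789ABCDEF"                  # hex digit for each value 0-15
    return "".join(digits[int(bits[i:i + 4], 2)]
                   for i in range(0, len(bits), 4))

print(binary_to_hex("11101011"))  # EB, matching the worked example
```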
Common Mistake to Avoid!
When converting Decimal to Hex, don't forget that ten is written as A, not "10". If you write "10" in a Hex code, it is read as the hex number 10, which is worth sixteen in decimal, not ten!
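To see this mistake in action, compare what Python makes of the two spellings (`int(text, 16)` reads text as a hexadecimal number):

```python
print(hex(26))        # 0x1a -- decimal 26 is hex 1A (one sixteen plus ten)
print(int("1A", 16))  # 26   -- reading "1A" back gives the right value
print(int("10", 16))  # 16   -- but hex "10" means sixteen, not ten!
```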
Key Takeaway: Always remember the "Place Values" for Binary: 128, 64, 32, 16, 8, 4, 2, 1. If the bit is 1, add the number. If it's 0, skip it!
3. Applications in Computing
Why are we learning this? Because these systems are everywhere in technology!
Binary: Used in the Central Processing Unit (CPU) and Memory (RAM). At the lowest level, everything is a 1 or a 0.
Decimal: Used for User Inputs. When you type "25" into a calculator, the computer takes that decimal input and converts it to binary to do the math.
Hexadecimal: Used for HTML Color Codes (e.g., #FF0000 is red), MAC Addresses for networking, and Memory Addresses when debugging code.
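For example, an HTML/CSS colour code is just three numbers (red, green, blue, each 0 to 255) written as two hex digits each. A quick sketch using Python's format specifiers:

```python
red, green, blue = 255, 0, 0                # maximum red, no green, no blue
color = f"#{red:02X}{green:02X}{blue:02X}"  # :02X = two upper-case hex digits
print(color)                                # #FF0000 -- the colour code for red
```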
Did you know?
There are exactly 10 types of people in the world: those who understand binary, and those who don't! (Wait... that "10" is actually binary for "2"!)
4. Character Encoding: ASCII and Unicode
Numbers are great, but how do we get the letter 'A' or an emoji? We use Character Sets, which are basically giant look-up tables that map a number to a character.
A. ASCII (American Standard Code for Information Interchange)
ASCII is an older, simpler system. It uses 7 bits (or 8 bits in Extended ASCII) to represent characters.
- Example: In ASCII, the number 65 represents 'A', and 97 represents 'a'.
- Limitation: Since it only uses a few bits, it can only represent English letters, numbers, and some symbols. It can't handle other languages like Chinese or Arabic.
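Python's built-in `ord()` and `chr()` functions expose this look-up table directly:

```python
print(ord("A"))             # 65 -- the ASCII code for 'A'
print(ord("a"))             # 97 -- lower-case letters have different codes
print(chr(66))              # B  -- chr() goes the other way: code to character
print(ord("a") - ord("A"))  # 32 -- upper and lower case are exactly 32 apart
```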
B. Unicode
Unicode was created to solve ASCII's limitations. It gives every character its own unique number (called a "code point"), and these are stored using encodings such as UTF-8, UTF-16, or UTF-32, allowing for over a million possible characters!
- Uses: It covers almost every written language in the world, plus mathematical symbols and Emojis 🚀.
- Relationship: Unicode is designed to be "backwards compatible" with ASCII. This means the first 128 characters of Unicode are exactly the same as ASCII.
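A sketch of this backwards compatibility in action, encoding characters as UTF-8 bytes in Python:

```python
print("A".encode("utf-8"))        # b'A' -- one byte, identical to ASCII
print("é".encode("utf-8"))        # two bytes for an accented letter
print("🚀".encode("utf-8"))       # four bytes for an emoji
print(ord("A"))                   # 65 -- the code point matches ASCII exactly
```

Plain English text stays as compact as it was in ASCII, while the same encoding still reaches every other character Unicode defines.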
Quick Review: ASCII vs. Unicode
ASCII: Small, efficient, but English-only.
Unicode: Large, universal, used for modern web pages and international software.
Key Takeaway: Character encoding is the "bridge" between the binary numbers a computer understands and the human language we read on the screen.
Final Summary Checklist
Before you move on, make sure you can:
- [ ] Explain the difference between Base-2, Base-10, and Base-16.
- [ ] Convert a positive integer from Decimal to Binary and back.
- [ ] Convert between Binary and Hexadecimal using the 4-bit grouping method.
- [ ] Give an example of where Hexadecimal is used (like CSS colors).
- [ ] Explain why Unicode is preferred over ASCII for global applications.
Keep practicing those conversions! It’s just like a puzzle—the more you do, the faster you’ll get!