Welcome to the World of Algorithm Classification!

In this chapter, we are diving into the heart of the Theory of Computation. You have already learned how to write algorithms, but have you ever wondered how computer scientists decide which algorithm is "better"? If you have two different ways to sort a list of names, how do you know which one will work faster when you have millions of names to handle?

We are going to learn how to measure the "efficiency" of algorithms using Big-O notation and discover that there are some problems so difficult that even the world's most powerful computers can't solve them in a reasonable amount of time. Don't worry if this seems a bit abstract at first—we will break it down step-by-step!

1. Comparing Algorithms

When we compare algorithms, we aren't looking at how many seconds they take to run on your laptop. Why? Because raw timings depend on the hardware: the same algorithm will always seem "better" on a faster processor. Instead, we look at Complexity.

Time and Space Efficiency

There are two main ways to measure an algorithm:
Time Complexity: How much time does it take to run as the size of the input increases?
Space Complexity: How much memory (RAM) does the algorithm need as the input grows?

The Key Issue: Problem Size \( (n) \)
The most important factor is the size of the problem, which we usually call \( n \). For example, if you are searching for a name in a list, \( n \) is the number of names in that list. As \( n \) gets bigger, we want to know how much more work the computer has to do.

Analogy: Imagine you are tidying your room. If \( n \) is the number of items on the floor, a "linear" algorithm means tidying 10 items takes 10 minutes, and 20 items takes 20 minutes. But if tidying 20 items suddenly took 400 minutes, you’d have a very inefficient algorithm (that's quadratic growth: \( 20^2 = 400 \))!

Quick Takeaway: We compare algorithms based on how their performance changes relative to the problem size \( (n) \), not the speed of the hardware.
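One way to make this concrete is to count steps instead of seconds. Here is a minimal Python sketch (the function name and step-counting are illustrative, not part of the chapter): a linear search that reports how many comparisons it made.

```python
def linear_search_steps(items, target):
    """Linear search that also counts its comparisons (the 'steps')."""
    steps = 0
    for item in items:
        steps += 1
        if item == target:
            return True, steps
    return False, steps

# Worst case: the target is missing, so every one of the n items is checked.
print(linear_search_steps(list(range(10)), -1))   # (False, 10)
print(linear_search_steps(list(range(20)), -1))   # (False, 20)
```

Double the input, double the work: the step count depends only on \( n \), no matter which computer runs the code.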

2. The Maths Behind Big-O

To understand how algorithms grow, we use a few mathematical functions. You don't need to be a math genius; you just need to recognize the "shape" of these functions:

Linear Function: \( y = 2x \) (A straight line. If the input doubles, the time doubles.)
Polynomial Function: \( y = 2x^2 \) (A curve; this particular polynomial is quadratic. If the input doubles, the time quadruples!)
Exponential Function: \( y = 2^x \) (This grows extremely fast. It "explodes" even with small inputs.)
Logarithmic Function: \( y = \log_{10} x \) (The dream! As the input grows, the time increases very slowly.)
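To see these shapes side by side, here is a small Python sketch (the values of \( n \) and the step-count formulas are chosen purely for illustration):

```python
import math

def growth(n):
    """Predicted step counts for each 'shape' of function at input size n."""
    return {
        "logarithmic": math.log10(n),
        "linear": 2 * n,
        "polynomial": 2 * n ** 2,
        "exponential": 2 ** n,
    }

print(growth(10))  # {'logarithmic': 1.0, 'linear': 20, 'polynomial': 200, 'exponential': 1024}
print(growth(20))  # doubling n: linear doubles, polynomial quadruples, exponential explodes
```

Notice that going from \( n = 10 \) to \( n = 20 \) barely moves the logarithm, doubles the linear value, quadruples the polynomial, and multiplies the exponential by over a thousand.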

Did you know? A Permutation of a set is just an arrangement of its members. The number of ways to arrange \( n \) distinct objects is \( n \) factorial (\( n! \)). For example, if you have 3 books, there are \( 3 \times 2 \times 1 = 6 \) ways to arrange them. Factorials grow even faster than exponential functions!
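Python's standard library `math.factorial` function can confirm the book example, and shows how quickly factorials overtake even exponentials:

```python
import math

print(math.factorial(3))   # 6 -- matches the 3-book example
print(math.factorial(10))  # 3628800
print(math.factorial(20))  # 2432902008176640000 -- compare 2**20 == 1048576
```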

3. Order of Complexity (Big-O Notation)

Big-O notation is a shorthand for describing how an algorithm's time requirements grow with the input size, usually in the worst-case scenario. It tells us the "order of complexity."

Common Big-O Rankings (From Fastest to Slowest)

1. Constant Time \( O(1) \): The time taken is always the same, regardless of the input size. (e.g., checking if the first number in a list is even).
2. Logarithmic Time \( O(\log n) \): Very efficient. Often found in algorithms that "divide and conquer," like Binary Search.
3. Linear Time \( O(n) \): The time grows in direct proportion to the input. (e.g., Linear Search).
4. Polynomial (Quadratic) Time \( O(n^2) \): The time is proportional to the square of the input. Often happens with nested loops. (e.g., Bubble Sort).
5. Exponential Time \( O(2^n) \): Extremely slow. Doubling the input size makes the time requirements massive.
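As a concrete example of the \( O(\log n) \) entry, here is a standard Binary Search sketch in Python (it assumes the list is already sorted):

```python
def binary_search(sorted_items, target):
    """Divide-and-conquer search: halves the search range each step, O(log n)."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid          # found: return the index
        elif sorted_items[mid] < target:
            low = mid + 1       # discard the lower half
        else:
            high = mid - 1      # discard the upper half
    return -1                   # not present

print(binary_search(list(range(1000)), 42))  # 42
```

A sorted list of 1,000 items needs at most about 10 comparisons (since \( 2^{10} = 1024 \)), while a Linear Search might need all 1,000. That gap is the difference between \( O(\log n) \) and \( O(n) \).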

How to derive complexity:
When looking at a program, we focus on the part that grows the fastest. If a program has a linear part and a squared part, we say it is \( O(n^2) \) because as \( n \) gets very large, the \( n^2 \) part becomes the only thing that really matters.
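The "keep only the fastest-growing part" rule can be checked by counting steps. This sketch (the function is illustrative) runs a linear loop followed by a nested loop:

```python
def count_steps(n):
    """Count the steps of a program with a linear part and a squared part."""
    steps = 0
    for i in range(n):          # linear part: n steps
        steps += 1
    for i in range(n):          # nested loops: n * n steps
        for j in range(n):
            steps += 1
    return steps                # total: n + n^2

print(count_steps(10))    # 110     -- the n^2 term (100) already dominates
print(count_steps(1000))  # 1001000 -- the linear term is now negligible
```

As \( n \) grows, the \( n^2 \) part swamps the \( n \) part, which is why we simply call the whole program \( O(n^2) \).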

Quick Review Box:
• \( O(1) \): Instant.
• \( O(n) \): Steady growth.
• \( O(n^2) \): Rapid growth.
• \( O(2^n) \): Dangerous growth!

4. Limits of Computation

Even though computers are getting faster every year, algorithmic complexity and hardware still impose limits on what we can compute. Some algorithms are so complex that they would take longer than the age of the universe to finish, even on a supercomputer!

Tractable vs. Intractable Problems

Tractable Problems: These are problems that can be solved in polynomial time (\( O(n^k) \)) or better. We consider these "doable" in a reasonable amount of time.
Intractable Problems: These are problems that do not have a polynomial-time solution (they might be exponential). They are technically "solvable," but practically impossible for large values of \( n \).

The Heuristic Approach:
When we face an intractable problem, we often use a Heuristic. This is a "rule of thumb" or a "good enough" method. It might not give the perfect, 100% correct answer, but it gives a satisfactory answer in a reasonable timeframe.
Example: A GPS finding the "best" route through a city with millions of possible paths. It uses heuristics to find a very good route quickly rather than calculating every single possible path.
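The GPS idea can be sketched with the classic nearest-neighbour heuristic (the city coordinates below are made up for illustration). Trying every possible route is \( O(n!) \) and intractable; this greedy rule of thumb finds a good route in roughly \( O(n^2) \) time:

```python
import math

def nearest_neighbour_route(cities):
    """Greedy heuristic: always visit the closest unvisited city next."""
    unvisited = list(cities)
    route = [unvisited.pop(0)]          # start at the first city
    while unvisited:
        last = route[-1]
        nxt = min(unvisited, key=lambda c: math.dist(last, c))
        unvisited.remove(nxt)
        route.append(nxt)
    return route

cities = [(0, 0), (5, 5), (1, 0), (0, 1)]
print(nearest_neighbour_route(cities))  # [(0, 0), (1, 0), (0, 1), (5, 5)]
```

The route is not guaranteed to be the shortest possible, but it is "good enough" and arrives in a tiny fraction of the time an exhaustive search would take.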

5. The Halting Problem

Is there any problem that a computer simply cannot solve? Yes! These are called non-computable problems.

The most famous example is the Halting Problem.
Definition: The Halting Problem asks if it is possible to write a program that can look at any other program and its input and determine if that program will eventually stop (halt) or run forever in an infinite loop.

The Verdict: In 1936, Alan Turing proved that the Halting Problem is unsolvable. No such program can ever exist.
Why does this matter? It proves that there are fundamental limits to what can be computed. There are some questions that logic and computers simply cannot answer.
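The flavour of Turing's argument can be captured in a short piece of Python-style pseudocode. Assume, for contradiction, that a perfect `halts(program, input)` checker exists (it is purely hypothetical, as are `loop_forever` and `halt_immediately`):

```
def paradox(program):
    if halts(program, program):   # hypothetical oracle -- cannot exist
        loop_forever()            # ...then deliberately run forever
    else:
        halt_immediately()        # ...otherwise stop straight away

# Now ask: does paradox(paradox) halt?
# If halts says "yes", paradox loops forever. If halts says "no", it halts.
# Either way, halts gave the wrong answer -- so no such checker can exist.
```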

Common Mistake to Avoid: Don't confuse "Intractable" with "Unsolvable."
Intractable = Solvable, but takes too long.
Unsolvable (Non-computable) = Impossible for a computer to ever solve.

6. Turing Machines

A Turing Machine is a theoretical model of a computer. It helps us understand the limits of what can be computed.
It consists of:
• A finite set of states (shown in a state transition diagram).
• A finite alphabet of symbols.
• An infinite tape divided into squares.
• A read-write head that can move along the tape one square at a time.
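The four components above are enough to build a toy simulator. In this Python sketch (the `flip_bits` machine is an invented example), the rules table maps (state, symbol) to (new state, symbol to write, head move):

```python
def run_turing_machine(rules, tape, state="start", blank="_"):
    """Simulate a Turing Machine.

    rules maps (state, symbol) -> (new_state, symbol_to_write, head_move),
    where head_move is +1 (right), -1 (left) or 0 (stay).
    """
    tape = list(tape)
    head = 0
    while state != "halt":
        if head == len(tape):           # the tape is "infinite": grow on demand
            tape.append(blank)
        state, write, move = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip(blank)

# Invented example machine: flip every bit, halt at the first blank square.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_turing_machine(flip_bits, "1011"))  # 0100
```

Even this tiny machine shows the idea: a finite rule table plus an unbounded tape is enough to define computation.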

A Universal Turing Machine is a machine that can simulate any other Turing Machine. It is the theoretical basis for the modern computer—a single machine that can run any program!

Key Takeaway: Turing machines provide a formal definition of what is "computable." If a Turing Machine can't do it, no computer can.

Great job! You've reached the end of the notes for Classification of Algorithms. Remember: Efficiency is all about growth, and Big-O is your tool to measure it!