Welcome to Linear Programming!

In this chapter of Decision Mathematics 1, we explore one of the most powerful tools in modern mathematics. Linear Programming (LP) is all about "optimisation"—finding the best way to do something within a set of limits. Whether it's a factory trying to maximise profit while using limited materials, or a logistics company trying to minimise fuel costs, LP is the secret sauce behind the scenes.

Don't worry if this seems a bit abstract at first. We are basically taking real-world problems, turning them into algebra, and using systematic methods (algorithms) to find the best possible answer. Let's dive in!

1. Formulating a Problem

Before we can solve a problem, we have to "translate" it from English into Mathematics. This is called Formulation.

The Ingredients of an LP Problem

1. Decision Variables: These are the things you can control. Usually, we call them \(x, y, z\), etc. For example, \(x = \) number of cakes to bake.
2. Objective Function: This is your goal. It will start with "Maximise" (for profit/output) or "Minimise" (for cost/time). It looks like: \(P = 5x + 3y\).
3. Constraints: These are your limits (flour, time, money). They are written as inequalities, such as \(2x + y \le 10\).
4. Non-negativity Constraint: This is a small but vital point! In the real world, you can't produce -5 cakes. So, we always include \(x, y \ge 0\).
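To make the four ingredients concrete, here is a tiny brute-force sketch in Python of a hypothetical cake problem (all numbers invented for illustration): maximise \(P = 5x + 3y\) subject to \(2x + y \le 10\) and \(x, y \ge 0\). This is not how LP problems are solved in general, but it shows each ingredient in code form.

```python
# Hypothetical cake problem (all numbers invented): maximise P = 5x + 3y
# subject to 2x + y <= 10 and x, y >= 0, searching integer plans directly.

def profit(x, y):
    return 5 * x + 3 * y                     # objective function

def feasible(x, y):
    return 2 * x + y <= 10 and x >= 0 and y >= 0  # constraints + non-negativity

# Decision variables x, y: number of each cake type to bake (integers here).
best = max(((x, y) for x in range(11) for y in range(11) if feasible(x, y)),
           key=lambda p: profit(*p))
print(best, profit(*best))                   # (0, 10) 30
```

Here the decision variables are `x` and `y`, the objective is `profit`, and `feasible` bundles the constraints and the non-negativity condition.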

Slack, Surplus, and Artificial Variables

To use the advanced algorithms later, we need to turn our inequalities (like \(\le\)) into equations (like \(=\)). We do this by adding special variables:

Slack Variables (\(s\)): Used for \(\le\) constraints. Think of "Slack" as the "leftover" resource. If you have 10kg of flour and use 8kg, the slack is 2kg.
Example: \(3x + 2y \le 20\) becomes \(3x + 2y + s_1 = 20\).

Surplus Variables (\(s\)): Used for \(\ge\) constraints. This is the "extra" amount you have above the minimum requirement.
Example: \(x + y \ge 5\) becomes \(x + y - s_2 = 5\). But wait! If \(x\) and \(y\) are 0, then \(-s_2 = 5\), meaning \(s_2 = -5\). We can't have negative variables in our algorithm! This leads us to...

Artificial Variables (\(a\) or \(t\)): These are "math placeholders" added to \(\ge\) or \(=\) constraints to give the algorithm a valid starting point. They have no real-world meaning and must be removed by the end of the calculation.
Example: \(x + y \ge 5\) becomes \(x + y - s_2 + t_1 = 5\).

Quick Review:
- Slack: Adds "unused" stuff (\(+\) sign) for \(\le\).
- Surplus: Subtracts "extra" stuff (\(-\) sign) for \(\ge\).
- Artificial: A "fake" variable (\(+\) sign) to help the math start for \(\ge\) or \(=\).
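The three conversions can be summarised by a small, purely illustrative helper that writes out the equation form of a constraint. The naming follows the review above, with `s` for slack/surplus and `t` for artificial, indexed per constraint.

```python
# Purely illustrative helper: rewrite a constraint in equation form,
# using s for slack/surplus and t for artificial (one index per constraint).

def to_equation(lhs, sense, rhs, i):
    if sense == "<=":
        return f"{lhs} + s{i} = {rhs}"            # slack: leftover resource
    if sense == ">=":
        return f"{lhs} - s{i} + t{i} = {rhs}"     # surplus + artificial
    if sense == "=":
        return f"{lhs} + t{i} = {rhs}"            # equality: artificial only
    raise ValueError(f"unknown sense: {sense}")

print(to_equation("3x + 2y", "<=", 20, 1))  # 3x + 2y + s1 = 20
print(to_equation("x + y", ">=", 5, 2))     # x + y - s2 + t2 = 5
```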

2. The Graphical Method

If we only have two variables (\(x\) and \(y\)), we can draw the problem on a graph. This is a great way to visualise what's happening.

Step-by-Step Process

1. Draw the constraints: Treat the inequalities as equations (lines) and draw them.
2. Find the Feasible Region (FR): This is the area on the graph where all constraints are satisfied. Shade the region that is NOT allowed (the rejected region) to leave the FR clear.
3. Locate the Optimal Point: Provided the problem has an optimal solution, the best answer will always be at a corner (vertex) of the Feasible Region.

How to find the "Best" corner?

There are two ways:
- The Vertex Method: Calculate the coordinates of every corner of the FR and plug them into your Objective Function. The one with the highest (or lowest) value is your winner!
- The Objective Line Method (Ruler Method): Draw a line representing your objective function for a convenient value (e.g., \(5x + 3y = 15\)). Place your ruler on this line and slide it across the graph (keeping the same gradient) until it hits the last possible point in the Feasible Region. That point is your optimum.
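The Vertex Method can be sketched in Python. Assuming a made-up example, maximise \(P = 5x + 3y\) subject to \(2x + y \le 10\), \(x + y \le 6\), \(x, y \ge 0\) (non-negativity is written as \(-x \le 0\) and \(-y \le 0\) so every constraint has the same shape):

```python
from itertools import combinations

# Made-up example: maximise P = 5x + 3y subject to
# 2x + y <= 10, x + y <= 6, x >= 0, y >= 0.
# Each constraint is stored as (a, b, c), meaning a*x + b*y <= c.
constraints = [(2, 1, 10), (1, 1, 6), (-1, 0, 0), (0, -1, 0)]

def intersect(l1, l2):
    """Solve the 2x2 system a1*x + b1*y = c1, a2*x + b2*y = c2."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None                       # parallel lines: no vertex
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints)

# Intersect every pair of constraint lines, keep the feasible corners.
vertices = [p for l1, l2 in combinations(constraints, 2)
            if (p := intersect(l1, l2)) and feasible(p)]
best = max(vertices, key=lambda p: 5 * p[0] + 3 * p[1])
print(best)                               # optimal vertex: (4.0, 2.0)
```

Every pair of constraint lines is intersected, infeasible intersections are discarded, and the objective is evaluated at each surviving vertex, exactly as in the manual method.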

Common Mistake: Forgetting that some problems require Integer Solutions. If the optimal point is \(x = 2.7, y = 3.2\), but you can't sell 0.7 of a car, you must check the whole-number coordinates near that point to find the best integer result within the FR.

Key Takeaway: The optimal solution always lies on the boundary of the feasible region, at a corner (and if the objective line is parallel to an edge, every point along that edge is optimal too)!

3. The Simplex Algorithm

Graphs are great for 2D, but what if you have 4 variables? You'd need a 4D graph! Since that's impossible to draw, we use the Simplex Algorithm—a step-by-step table method (tableau).

The Initial Tableau

For a Maximisation problem with only \(\le\) constraints, we set up a table. All variables (\(x, y, s_1, s_2\)) go across the top. The bottom row is the Objective Row.
Trick: When putting the objective function \(P = 14x + 12y\) into the table, rewrite it as \(P - 14x - 12y = 0\). This is why the values in the bottom row usually start as negative!

The Iteration Process

1. Choose the Pivot Column: Look at the objective row. Pick the most negative value. This is the variable that will increase your profit the fastest.
2. The Ratio Test (Find the Pivot Row): For each row, divide the "Value" column by the number in your pivot column, ignoring any row whose pivot-column entry is zero or negative. The row with the smallest positive ratio is your pivot row.
3. Pivot: Use row operations to turn the pivot element (where the row and column meet) into 1, and all other numbers in that column into 0.
4. Repeat: Keep going until there are no negative values in the objective row. If there are no negatives, you've found the maximum!
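The four steps above can be sketched as a bare-bones tableau routine. This is a minimal illustration, not an exam-ready implementation: it assumes a maximisation problem with only \(\le\) constraints and ignores degenerate and unbounded cases. The example reuses the objective \(P = 14x + 12y\) with invented constraints.

```python
# Minimal Simplex sketch (maximisation, <= constraints only; illustrative,
# with no handling of degenerate or unbounded problems).
# Each row: [x, y, s1, s2, value]; the last row is the objective row.

def simplex(tableau):
    while min(tableau[-1][:-1]) < 0:      # Step 4: repeat until no negatives
        obj = tableau[-1]
        # Step 1: pivot column = most negative entry in the objective row.
        col = obj.index(min(obj[:-1]))
        # Step 2: ratio test over rows with a positive pivot-column entry.
        ratios = [(row[-1] / row[col], i)
                  for i, row in enumerate(tableau[:-1]) if row[col] > 0]
        _, piv = min(ratios)
        # Step 3: scale the pivot row so the pivot element becomes 1 ...
        factor = tableau[piv][col]
        tableau[piv] = [v / factor for v in tableau[piv]]
        # ... then clear the rest of the pivot column with row operations.
        for i, row in enumerate(tableau):
            if i != piv and row[col] != 0:
                tableau[i] = [v - row[col] * p
                              for v, p in zip(row, tableau[piv])]
    return tableau

# Maximise P = 14x + 12y subject to x + y <= 6 and 2x + y <= 10.
final = simplex([
    [1, 1, 1, 0, 6],      # x + y + s1 = 6
    [2, 1, 0, 1, 10],     # 2x + y + s2 = 10
    [-14, -12, 0, 0, 0],  # objective row: P - 14x - 12y = 0
])
print(final[-1][-1])      # maximum profit: 80.0
```

Reading off the answer: the objective row's value entry gives the maximum \(P\), and each decision variable whose column is a unit column takes the value in the matching row (here \(x = 4, y = 2\)).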

Did you know? The Simplex algorithm was developed by George Dantzig in 1947, growing out of military planning problems from WWII, and it is still used today by airlines to schedule flights and crews!

4. Advanced Methods: Two-Stage and Big-M

Standard Simplex only works if the origin (\(x=0, y=0\)) is a valid starting point. If we have \(\ge\) constraints, the origin is outside the Feasible Region. We need a "boost" to get into the region. This is where Two-Stage and Big-M come in.

The Big-M Method

We add a huge penalty, \(M\), to the objective function for any artificial variable (\(t\)). Because \(M\) is so large, the algorithm will work incredibly hard to make \(t = 0\) as quickly as possible to get rid of that penalty. Once all artificial variables are zero, you are "inside" the feasible region and the rest of the Simplex proceeds as normal.
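As a sketch of the penalty at work, suppose (with an invented objective, purely for illustration) we maximise \(P = 2x + 3y\) subject to the earlier constraint \(x + y - s_2 + t_1 = 5\). Substituting \(t_1 = 5 - x - y + s_2\), the penalised objective becomes

\[ P - Mt_1 = 2x + 3y - M(5 - x - y + s_2) = (2 + M)x + (3 + M)y - Ms_2 - 5M. \]

Because \(M\) is enormous, any pivot that increases \(x\) or \(y\) improves the objective by roughly \(M\), so the algorithm's first priority is to drive \(t_1\) down to zero.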

The Two-Stage Method

As the name suggests, we do this in two phases:
- Stage 1: We create a new, temporary objective function: Minimise the sum of all artificial variables. Our goal is to make this sum 0.
- Stage 2: Once the sum is 0 (meaning all artificial variables are gone), we throw away the Stage 1 objective and the artificial columns. We then use the resulting table to solve the original objective function.
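As a sketch of how Stage 1 is set up, take the same constraint \(x + y - s_2 + t_1 = 5\). Rearranging gives \(t_1 = 5 - x - y + s_2\), so minimising the artificial total \(I = t_1\) means working with

\[ I = 5 - x - y + s_2. \]

Stage 1 runs Simplex on this temporary objective; the moment \(I = 0\), the artificial variable has been driven out, and Stage 2 can begin with the original objective.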

Memory Aid:
- Big-M: Uses a "Penalty" (\(M\)) to scare the artificial variables away.
- Two-Stage: Uses a "Mini-Game" (Stage 1) to delete the artificial variables before playing the "Main Game" (Stage 2).

Summary Checklist

- Formulation: Did I include non-negativity (\(x, y \ge 0\))?
- Slack/Surplus: Did I use \(+s\) for \(\le\) and \(-s + t\) for \(\ge\)?
- Graphical: Did I shade the unwanted region? Did I check for integer requirements?
- Simplex: Is my pivot column the most negative in the objective row? Is my pivot row the smallest positive ratio?
- Stopping Condition: For maximisation, I stop when there are no more negatives in the objective row.

Don't worry if the row operations in Simplex feel tedious! It's just bookkeeping. Take your time, double-check your subtractions, and you'll find the optimal path!