Welcome to Matrices and Linear Spaces!

Hello there! In this chapter of Further Mathematics, we are going to dive into the world of Matrices and Linear Spaces. Think of matrices not just as boxes of numbers, but as powerful tools that can transform shapes, solve complex systems of equations, and even describe the rules of multi-dimensional spaces. Whether you are aiming for an A* or just trying to wrap your head around the basics, these notes are designed to guide you step-by-step. Let’s get started!

1. Operations on \(3 \times 3\) Matrices

You’ve seen \(2 \times 2\) matrices before. Now, we are stepping up to \(3 \times 3\). The rules are very similar, just with a few more numbers to keep track of!

Addition and Subtraction

Just like adding or subtracting regular numbers, you simply add or subtract the corresponding elements in the same position. Remember: Matrices must be the same size to add or subtract them.
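If you'd like to check your working, the rule can be sketched in a few lines of plain Python (the matrix values below are made-up examples):

```python
# Two 3x3 matrices (example values).
A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
B = [[9, 8, 7],
     [6, 5, 4],
     [3, 2, 1]]

# Add corresponding entries: (A + B)[i][j] = A[i][j] + B[i][j].
A_plus_B = [[A[i][j] + B[i][j] for j in range(3)] for i in range(3)]
print(A_plus_B)  # every entry is 10
```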

Matrix Multiplication

To multiply two matrices \(A\) and \(B\), we use the "Row by Column" rule. You multiply the elements of the row of the first matrix by the elements of the column of the second matrix and add them together.

Quick Trick: Think of the "L" shape. Your left hand moves across the row, and your right hand moves down the column.

Important Point: In matrix world, order matters! Generally, \(AB \neq BA\). Swapping the order usually gives a different result.
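To see both the row-by-column rule and the order-matters warning in action, here is a small sketch in plain Python (the helper `mat_mul` and the example matrices are just for illustration):

```python
def mat_mul(X, Y):
    """Multiply two square matrices using the row-by-column rule."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n))  # row i of X times column j of Y
             for j in range(n)] for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(mat_mul(A, B))  # [[2, 1], [4, 3]]
print(mat_mul(B, A))  # [[3, 4], [1, 2]] -- a different answer: AB != BA
```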

The Determinant of a \(3 \times 3\) Matrix

The determinant, denoted \(\det(A)\) or \(|A|\), is a single number that tells us a lot about a matrix. For a matrix:
\(A = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}\)
The determinant is calculated as:
\(\det(A) = a(ei - fh) - b(di - fg) + c(dh - eg)\)

Memory Aid: Use the signs +, -, + for the top row elements when expanding.
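The formula above translates almost word-for-word into code. A minimal sketch in plain Python (the function name and example matrix are illustrative):

```python
def det3(M):
    """Determinant of a 3x3 matrix by expansion along the top row (+, -, + signs)."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

M = [[2, 0, 1],
     [1, 3, 0],
     [0, 1, 4]]
print(det3(M))  # 2*(12 - 0) - 0*(4 - 0) + 1*(1 - 0) = 25
```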

The Inverse Matrix

The inverse \(A^{-1}\) is the matrix that "undoes" what \(A\) does. When you multiply a matrix by its inverse, you get the Identity Matrix (\(I\)).
\(AA^{-1} = A^{-1}A = I\)
Note: If \(\det(A) = 0\), the matrix is singular and has no inverse!
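For reference, here is one standard hand method for a \(3 \times 3\) inverse, the adjugate (cofactor) method, sketched in plain Python. The function name and example matrix are made up for illustration:

```python
def inverse3(M):
    """Inverse of a 3x3 matrix via the adjugate: A^{-1} = adj(A) / det(A)."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    if det == 0:
        raise ValueError("singular matrix: det(A) = 0, so no inverse exists")
    # Adjugate = transpose of the cofactor matrix; divide every entry by det.
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[x / det for x in row] for row in adj]

A = [[2, 0, 1], [1, 3, 0], [0, 1, 4]]
Ainv = inverse3(A)
# Multiplying A by Ainv should give the identity matrix (up to rounding).
```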

Quick Review:
• Multiply Row by Column.
• \(AB\) is not the same as \(BA\).
• If \(\det(A) = 0\), you can't find an inverse.

2. Solving Systems of Linear Equations

Matrices are perfect for solving sets of equations like \(2x + 3y - z = 10\). We can write a whole system in the form \(Ax = b\), where \(A\) holds the coefficients, \(x\) the unknowns, and \(b\) the constants.

Row Reduction and Echelon Form

To solve these, we use Gaussian Elimination. The goal is to perform "Row Operations" to turn our matrix into Row Echelon Form (REF), which looks like a staircase of zeros at the bottom-left.

The Three Legal Moves:
1. Swap two rows.
2. Multiply a row by a non-zero number.
3. Add or subtract a multiple of one row from another.
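The three legal moves are exactly what the sketch below performs: it swaps rows to get a non-zero pivot (move 1) and subtracts multiples of the pivot row (move 3) until the staircase of zeros appears, then back-substitutes. It assumes the system has a unique solution; the function name and example system are illustrative:

```python
def solve_gauss(A, b):
    """Solve Ax = b by Gaussian elimination (assumes a unique solution exists)."""
    n = len(A)
    # Build the augmented matrix [A | b] with float entries.
    M = [[float(x) for x in row] + [float(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        # Legal move 1: swap rows so the pivot in this column is as large as possible.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Legal move 3: subtract multiples of the pivot row to zero the column below.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [M[r][j] - factor * M[col][j] for j in range(n + 1)]
    # Back-substitution, starting from the bottom of the "staircase".
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Example: 2x + 3y - z = 10,  x + y + z = 5,  x - y + z = -1
x = solve_gauss([[2, 3, -1], [1, 1, 1], [1, -1, 1]], [10, 5, -1])
print([round(v, 6) for v in x])  # [1.0, 3.0, 1.0]
```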

Geometrical Interpretation

When you solve a system of 3 equations with 3 variables, you are looking for where three planes meet in 3D space:
Unique Solution: The three planes meet at a single point.
Infinitely Many Solutions: The planes meet along a line or are all the same plane.
No Solution: The planes never all meet at the same place (e.g., they form a triangular prism shape, or two or more of them are parallel).

3. Linear Transformations

A matrix can act like a "machine" that takes a vector and spits out a new one. This is called a Linear Transformation.

If we have a transformation \(T: \mathbb{R}^n \to \mathbb{R}^m\), it means we take a vector with \(n\) components and transform it into one with \(m\) components. In your syllabus, \(n\) and \(m\) are at most 3.

Example: A \(2 \times 2\) matrix can rotate a point on a 2D graph. A \(3 \times 3\) matrix can rotate a point in 3D space.
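As a concrete sketch, the familiar 2D rotation matrix \(\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}\) can be applied to a point like this (the helper name is made up for illustration):

```python
import math

def rotate2d(point, angle):
    """Apply the 2x2 rotation matrix (cos -sin; sin cos) to a 2D point."""
    c, s = math.cos(angle), math.sin(angle)
    x, y = point
    return (c * x - s * y, s * x + c * y)

# Rotating (1, 0) by 90 degrees should land on (0, 1), up to rounding.
print(rotate2d((1, 0), math.pi / 2))
```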

4. Eigenvalues and Eigenvectors

Don't let the fancy names scare you! "Eigen" is German for "own" or "characteristic."

An Eigenvector is a special vector that, when multiplied by a matrix \(A\), doesn't change direction. It only gets stretched or squashed (or reversed, if \(\lambda\) is negative) by a factor called the Eigenvalue (\(\lambda\)).

The equation is: \(Av = \lambda v\)

How to find them:

1. Solve the characteristic equation \(\det(A - \lambda I) = 0\) to find the eigenvalues (\(\lambda\)).
2. For each \(\lambda\), solve \((A - \lambda I)v = 0\) to find the corresponding eigenvectors (\(v\)).
Note: For this syllabus, we only focus on cases where eigenvalues are real numbers.
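For a \(2 \times 2\) matrix the first step can be carried out explicitly: the characteristic equation expands to the quadratic \(\lambda^2 - (\text{trace})\lambda + \det = 0\). A sketch in plain Python (assuming real eigenvalues, as the note says; the function name is illustrative):

```python
import math

def eigen2(M):
    """Real eigenvalues of a 2x2 matrix, from det(M - lambda*I) = 0,
    which expands to lambda^2 - (trace)*lambda + det = 0."""
    (a, b), (c, d) = M
    trace = a + d
    det = a * d - b * c
    disc = trace * trace - 4 * det  # assumed non-negative (real eigenvalues)
    root = math.sqrt(disc)
    return ((trace + root) / 2, (trace - root) / 2)

M = [[2, 1], [1, 2]]
print(eigen2(M))  # (3.0, 1.0); e.g. M times (1, 1) gives (3, 3) = 3 * (1, 1)
```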

Diagonalisation

If a square \(n \times n\) matrix \(M\) has \(n\) linearly independent eigenvectors, we can write it in a very simple form:
\(M = QDQ^{-1}\)
• \(D\) is a Diagonal Matrix (zeros everywhere except the diagonal), containing the eigenvalues.
• \(Q\) is a matrix whose columns are the corresponding eigenvectors, listed in the same order as the eigenvalues in \(D\).

Why do we do this? It makes finding powers of matrices super easy! \(M^n = QD^nQ^{-1}\). Calculating \(D^n\) is easy because you just raise the diagonal numbers to the power of \(n\).
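Here is that shortcut in action for \(M = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\), which has eigenvalues 3 and 1 with eigenvectors \((1, 1)\) and \((1, -1)\) (you can verify these with the method above). The helper `mat_mul` is just for illustration:

```python
def mat_mul(X, Y):
    """Row-by-column product of two square matrices."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# M = Q D Q^{-1}, with eigenvalues 3, 1 and eigenvectors (1, 1), (1, -1).
Q = [[1, 1], [1, -1]]
Qinv = [[0.5, 0.5], [0.5, -0.5]]
n = 5
Dn = [[3 ** n, 0], [0, 1 ** n]]  # D^n: just raise each diagonal entry to the power n
Mn = mat_mul(mat_mul(Q, Dn), Qinv)
print(Mn)  # [[122.0, 121.0], [121.0, 122.0]], which is M^5 for M = [[2, 1], [1, 2]]
```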

Key Takeaway: Eigenvectors are the "natural axes" of a matrix. Diagonalisation uses these axes to make complex calculations simple.

5. Linear Spaces and Subspaces

A Linear Space (or Vector Space) is just a collection of objects (vectors) that follows specific rules (axioms). In particular, you can add any two vectors, and you can multiply any vector by a number, and the result is still in the space. We say the space is closed under addition and scalar multiplication.

Subspaces

A Subspace is a smaller space inside a bigger linear space that still follows all the rules. In particular, every subspace must contain the zero vector.
Analogy: If "all of 3D space" is the Linear Space, then a "flat plane passing through the origin" is a Subspace.

Linear Independence and Span

Span: The "Span" of a set of vectors is every possible place you can reach by adding and scaling those vectors, i.e. the set of all their linear combinations. It's like the "territory" they cover.
Linear Independence: A set of vectors is linearly independent if none of them can be made by combining the others. They are all "original" and provide new directions.
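A handy test for three vectors in \(\mathbb{R}^3\): put them as the rows of a matrix; they are linearly independent exactly when the determinant is non-zero. A quick sketch in plain Python (example vectors are made up):

```python
def det3(M):
    """Determinant of a 3x3 matrix, expanding along the top row."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

u, v, w = [1, 0, 0], [0, 1, 0], [1, 1, 0]
print(det3([u, v, w]))  # 0 -> dependent: w = u + v adds no new direction
print(det3([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # 1 -> independent
```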

Basis and Dimension

A Basis is the "Goldilocks" set of vectors: they span the whole space, and they are linearly independent. It's the smallest set of "building blocks" needed to create the space.
The Dimension is simply the number of vectors in the basis.

6. Special Matrix Spaces and Rank

We can look at a matrix and find three important spaces:

1. Row Space: The space spanned by the rows.
2. Column Space (or Range Space): The space spanned by the columns. This represents all possible outputs of the transformation \(Ax\).
3. Null Space (or Kernel): The set of all vectors \(x\) that get "squashed" to zero, i.e., \(Ax = 0\).

Rank and Nullity

Rank: The dimension of the Column Space (how many independent columns there are).
Nullity: The dimension of the Null Space.

The Rank-Nullity Theorem:
For a matrix with \(n\) columns (in particular, a square matrix of order \(n\)):
Rank + Nullity = \(n\)
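You can see the theorem at work by row-reducing and counting the non-zero rows (the rank); whatever is left over is the nullity. The helper below is an illustrative sketch for the \(3 \times 3\) case:

```python
def rank3(M, tol=1e-9):
    """Rank of a 3x3 matrix: row-reduce and count the non-zero rows."""
    M = [[float(x) for x in row] for row in M]
    rank, col = 0, 0
    while rank < 3 and col < 3:
        # Find a pivot at or below the current row in this column.
        pivot = next((r for r in range(rank, 3) if abs(M[r][col]) > tol), None)
        if pivot is None:
            col += 1  # no pivot here; move to the next column
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        # Clear the entries below the pivot.
        for r in range(rank + 1, 3):
            factor = M[r][col] / M[rank][col]
            M[r] = [M[r][j] - factor * M[rank][j] for j in range(3)]
        rank += 1
        col += 1
    return rank

# The third row is the sum of the first two, so only 2 rows are independent.
A = [[1, 2, 3], [0, 1, 1], [1, 3, 4]]
r = rank3(A)
print(r, "+", 3 - r, "= 3")  # Rank 2 + Nullity 1 = 3
```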

Did you know? The Rank of a matrix is the same whether you look at the rows or the columns! It’s a fundamental property of the matrix.

Common Mistake to Avoid: The Null Space is never empty, because it always contains the zero vector. The nullity, however, counts only the free variables (the independent directions sent to zero), so a null space containing just the zero vector has nullity 0.

Final Encouragement: Linear algebra is like learning a new language. At first, the grammar (axioms and definitions) feels strange, but once you start "speaking" it, you'll see it everywhere in physics, engineering, and computer science. Keep practicing those row reductions—you've got this!