Welcome to the World of Matrices!
In this chapter, we are going to explore matrices. Think of a matrix as a "mathematical spreadsheet": a way of organising numbers into rows and columns so that complex problems can be solved all at once. Matrices are the backbone of modern technology; they are used in everything from computer graphics in video games to the algorithms that rank search results on Google.
Don't worry if this seems a bit "alien" at first. We will build it up step-by-step, from basic arithmetic to transforming shapes in 3D space!
1. Matrix Arithmetic (The Basics)
A matrix is defined by its order (size), written as rows \(\times\) columns. For example, a \(2 \times 3\) matrix has 2 rows and 3 columns.
Addition, Subtraction, and Scaling
To add or subtract matrices, they must be conformable, which simply means they must be exactly the same size. You then add or subtract the entries in matching positions.
Scalar Multiplication is even easier: you just multiply every single number inside the matrix by a single number (the scalar) on the outside.
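Both rules can be sketched in a few lines of Python. Matrices here are plain lists of rows, and the function names (`mat_add`, `scalar_mul`) are purely illustrative:

```python
def mat_add(A, B):
    """Add two matrices of the same order, entry by entry."""
    return [[a + b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(A, B)]

def scalar_mul(k, A):
    """Multiply every entry of A by the scalar k."""
    return [[k * a for a in row] for row in A]

A = [[1, 2, 3],
     [4, 5, 6]]
B = [[6, 5, 4],
     [3, 2, 1]]

print(mat_add(A, B))     # [[7, 7, 7], [7, 7, 7]]
print(scalar_mul(2, A))  # [[2, 4, 6], [8, 10, 12]]
```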
Matrix Multiplication
Multiplying two matrices is a bit different. You don't just multiply the numbers in the same spots! Instead, you multiply Rows by Columns.
The Rule of Conformability: To multiply matrices \(A\) and \(B\) (in that order), the number of columns in \(A\) must match the number of rows in \(B\).
Memory Aid: If you write the sizes side-by-side, the "inner" numbers must match, and the "outer" numbers give the size of the answer. For example, a \((2 \times \mathbf{3})\) can multiply a \((\mathbf{3} \times 4)\), producing a \(2 \times 4\) result.
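Here is a minimal Python sketch of the row-by-column rule (`mat_mul` is an illustrative name; matrices are lists of rows):

```python
def mat_mul(A, B):
    """Multiply an (m x n) matrix A by an (n x p) matrix B.
    Entry (i, j) of the product is row i of A dotted with column j of B."""
    n = len(B)  # rows of B must equal columns of A (conformability)
    assert all(len(row) == n for row in A), "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2, 3],
     [4, 5, 6]]        # 2 x 3
B = [[1, 0, 0, 1],
     [0, 1, 0, 1],
     [0, 0, 1, 1]]     # 3 x 4

print(mat_mul(A, B))   # 2 x 4 result: [[1, 2, 3, 6], [4, 5, 6, 15]]
```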
Special Matrices
- Zero Matrix (\(\mathbf{0}\)): Every entry is 0. Adding this to a matrix changes nothing.
- Identity Matrix (\(\mathbf{I}\)): The matrix equivalent of the number 1. It is a square matrix with 1s on the leading diagonal (top-left to bottom-right) and 0s everywhere else. Multiplying any matrix by \(\mathbf{I}\) leaves it unchanged: \(AI = A\).
Quick Review: Remember that in matrix multiplication, order matters! Usually, \(AB \neq BA\).
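A quick Python check (same list-of-rows convention as before) shows that the order really does matter:

```python
def mat_mul(A, B):
    """Row-by-column product of two square matrices (lists of rows)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]

print(mat_mul(A, B))   # [[2, 1], [4, 3]] -- the columns of A are swapped
print(mat_mul(B, A))   # [[3, 4], [1, 2]] -- the rows of A are swapped
```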
2. Transformations: Moving Shapes with Math
We can use matrices to move points and shapes on a graph. This is called a Linear Transformation.
2D Transformations
A \(2 \times 2\) matrix can represent reflections, rotations, and enlargements. We find where the "unit vectors" \(\begin{pmatrix} 1 \\ 0 \end{pmatrix}\) and \(\begin{pmatrix} 0 \\ 1 \end{pmatrix}\) end up, and those become the columns of our transformation matrix.
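As a small Python sketch: the columns of the matrix are exactly the images of the unit vectors. For example, reflection in the \(x\)-axis sends \(\begin{pmatrix} 1 \\ 0 \end{pmatrix}\) to itself and \(\begin{pmatrix} 0 \\ 1 \end{pmatrix}\) to its negative (`transform` is an illustrative helper):

```python
def transform(M, v):
    """Apply the matrix M to the column vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v)))
            for i in range(len(M))]

# Reflection in the x-axis: column 1 is the image of (1, 0),
# column 2 is the image of (0, 1).
refl_x = [[1, 0],
          [0, -1]]

print(transform(refl_x, [3, 2]))  # [3, -2]: the point is flipped below the axis
```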
Successive Transformations
If you want to reflect a shape and then rotate it, you multiply the matrices together.
Important: You apply transformations from right to left. If \(R\) is a rotation and \(S\) is a stretch, doing the rotation then the stretch is written as \(SR\).
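This right-to-left convention can be checked numerically. In the Python sketch below, \(R\) is a \(90^\circ\) anticlockwise rotation and \(S\) is a stretch of factor 2 parallel to the \(x\)-axis:

```python
def mat_mul(A, B):
    """Row-by-column product of two 2x2 matrices (lists of rows)."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transform(M, v):
    """Apply the matrix M to the column vector v."""
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

R = [[0, -1],
     [1,  0]]   # rotation 90 degrees anticlockwise
S = [[2, 0],
     [0, 1]]    # stretch, factor 2, parallel to the x-axis

# SR means: rotate FIRST, then stretch.
print(transform(mat_mul(S, R), [1, 0]))  # [0, 1]: (1,0) rotates to (0,1); the x-stretch leaves it alone
# RS is a different transformation altogether.
print(transform(mat_mul(R, S), [1, 0]))  # [0, 2]: stretching first gives (2,0), which rotates to (0,2)
```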
3D Transformations
The syllabus limits 3D transformations to:
- Reflections in the planes \(x=0\), \(y=0\), or \(z=0\).
- Rotations about the \(x\), \(y\), or \(z\) axes.
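For reference, here are two of these standard matrices: reflection in \(z = 0\), and rotation by \(\theta\) about the \(z\)-axis (anticlockwise when viewed from the positive \(z\)-axis):

```latex
% Reflection in the plane z = 0: x and y are unchanged, z changes sign
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix}
\qquad
% Rotation by \theta about the z-axis
\begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}
```

Notice that the \(z\)-axis rotation is just the familiar 2D rotation matrix with an extra row and column fixing the \(z\)-coordinate.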
Invariant Points and Lines
- An Invariant Point is a point that is unchanged by the transformation. The origin is always invariant under a linear transformation.
- An Invariant Line is a line where every point on the line stays on that same line (though the points themselves might slide along it).
Key Takeaway: Transformations are just a way of "re-mapping" space using matrix multiplication.
3. Determinants: The Scale Factor
The determinant of a matrix, written as \(\det A\) or \(|A|\), is a single number that tells us a lot about the matrix.
Calculating Determinants
- For a \(2 \times 2\) matrix \(\begin{pmatrix} a & b \\ c & d \end{pmatrix}\), the determinant is \(ad - bc\).
- For a \(3 \times 3\) matrix, we use a process called "expanding by a row or column": multiply each element of the chosen row (or column) by the determinant of the \(2 \times 2\) matrix left when that element's row and column are deleted (its minor), attaching alternating \(+\,-\,+\) signs.
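Both formulas translate directly into Python (a sketch with illustrative names `det2` and `det3`, expanding along the first row):

```python
def det2(M):
    """ad - bc for a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = M
    return a * d - b * c

def det3(M):
    """Expand along the first row: +a, -b, +c times the 2x2 minors."""
    a, b, c = M[0]
    # minor(col): delete row 0 and the given column
    minor = lambda col: [[M[i][j] for j in range(3) if j != col]
                         for i in (1, 2)]
    return a * det2(minor(0)) - b * det2(minor(1)) + c * det2(minor(2))

print(det2([[3, 1],
            [4, 2]]))   # 3*2 - 1*4 = 2

print(det3([[1, 2, 3],
            [0, 1, 4],
            [5, 6, 0]]))  # 1*(-24) - 2*(-20) + 3*(-5) = 1
```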
What does it mean?
The determinant is the area scale factor (for 2D) or volume scale factor (for 3D) of the transformation.
Did you know? If the determinant is negative, it means the shape has been reflected or "flipped over" in addition to being scaled.
Singular Matrices
If \(\det A = 0\), the matrix is singular. This means the transformation has squashed the shape into a lower dimension (like squashing a 2D square into a 1D line). Singular matrices do not have an inverse.
4. Inverses: Going Backwards
The inverse of a matrix \(A\), written as \(A^{-1}\), is the matrix that "undoes" what \(A\) did.
\(AA^{-1} = A^{-1}A = I\).
Finding the Inverse
- For a \(2 \times 2\) matrix: Swap the elements on the leading diagonal, change the signs of the others, and divide the whole thing by the determinant.
- For a \(3 \times 3\) matrix: This is a longer process involving the matrix of cofactors, transposing (the adjugate), and dividing by the determinant.
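The \(2 \times 2\) recipe in Python (an illustrative sketch, not a library function):

```python
def inverse2(M):
    """Inverse of a 2x2 matrix: swap the leading diagonal, change the
    signs of the other two entries, divide by the determinant."""
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix: no inverse")
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

A = [[4, 7],
     [2, 6]]         # det = 4*6 - 7*2 = 10
print(inverse2(A))   # [[0.6, -0.7], [-0.2, 0.4]]
```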
Quick Review: You can't divide by matrices. Instead, we multiply by the inverse. To solve \(AX = B\), we use \(X = A^{-1}B\).
5. Solving Simultaneous Equations
Matrices allow us to solve three equations with three variables (\(x, y, z\)) efficiently.
We write the system as \(AX = B\), where \(A\) is the matrix of coefficients. If \(\det A \neq 0\), there is one unique solution found by \(X = A^{-1}B\).
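Here is a sketch of \(X = A^{-1}B\) for a \(2 \times 2\) system; the \(3 \times 3\) case works the same way, just with the longer inverse. `Fraction` keeps the arithmetic exact, and `solve2` is an illustrative name:

```python
from fractions import Fraction

def solve2(A, B):
    """Solve AX = B for a 2x2 coefficient matrix A via X = A^{-1} B."""
    (a, b), (c, d) = [[Fraction(x) for x in row] for row in A]
    p, q = Fraction(B[0]), Fraction(B[1])
    det = a * d - b * c
    if det == 0:
        raise ValueError("det A = 0: no unique solution")
    # A^{-1} = (1/det) [[d, -b], [-c, a]], then multiply by B
    x = (d * p - b * q) / det
    y = (-c * p + a * q) / det
    return x, y

# The system: 4x + 7y = 18,  2x + 6y = 14
print(solve2([[4, 7], [2, 6]], [18, 14]))   # x = 1, y = 2
```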
When it fails (Geometric Interpretation)
If \(\det A = 0\), the equations are either inconsistent (no solution) or dependent (infinitely many solutions). Geometrically, this represents how three planes interact in 3D space:
- Unique Solution: The three planes meet at a single point.
- No Solution: The planes form a triangular prism, or at least two of them are parallel and distinct.
- Infinite Solutions: The planes meet along a line (a "sheaf") or are all the same plane.
Common Mistake: Students often forget that if \(\det A = 0\), you must use substitution or Gaussian elimination to find out why it failed (is it a sheaf or a prism?).
6. Eigenvalues and Eigenvectors
This sounds intimidating, but the concept is beautiful. For a matrix \(A\), an eigenvector is a non-zero vector that stays on its own line through the origin when the transformation is applied. It is only stretched or squashed by a scale factor called the eigenvalue (\(\lambda\)).
The Characteristic Equation
To find the eigenvalues, we solve: \(\det(A - \lambda I) = 0\).
This gives a polynomial equation in \(\lambda\). Once we have each \(\lambda\), we substitute it back into \((A - \lambda I)\mathbf{v} = \mathbf{0}\) to find the corresponding eigenvector \(\mathbf{v}\).
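For a \(2 \times 2\) matrix, expanding \(\det(A - \lambda I)\) gives \(\lambda^2 - (a + d)\lambda + (ad - bc) = 0\), so the eigenvalues come straight from the quadratic formula. A Python sketch (assuming the eigenvalues are real; `eigen2` is an illustrative name):

```python
import math

def eigen2(A):
    """Eigenvalues of a 2x2 matrix from the characteristic equation
    lambda^2 - (trace)*lambda + det = 0, assuming real roots."""
    (a, b), (c, d) = A
    trace, det = a + d, a * d - b * c
    disc = math.sqrt(trace * trace - 4 * det)
    return (trace - disc) / 2, (trace + disc) / 2

A = [[2, 1],
     [1, 2]]
print(eigen2(A))   # (1.0, 3.0)

# Check A v = lambda v for v = (1, 1), the eigenvector of lambda = 3
v = [1, 1]
Av = [A[0][0] * v[0] + A[0][1] * v[1],
      A[1][0] * v[0] + A[1][1] * v[1]]
print(Av)          # [3, 3], i.e. 3 * v
```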
Diagonalisation
If a matrix has a full set of linearly independent eigenvectors, we can write it in "diagonal" form:
\(\mathbf{M} = \mathbf{UDU}^{-1}\)
Where \(\mathbf{D}\) is a diagonal matrix of eigenvalues and \(\mathbf{U}\) is a matrix of eigenvectors.
Why do this? It makes calculating powers of matrices incredibly easy: \(\mathbf{M}^n = \mathbf{UD}^n\mathbf{U}^{-1}\). You only have to raise the numbers on the diagonal to the power of \(n\)!
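To see this working, take \(\mathbf{M} = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\), which has eigenvalues 1 and 3 with eigenvectors \(\begin{pmatrix} 1 \\ -1 \end{pmatrix}\) and \(\begin{pmatrix} 1 \\ 1 \end{pmatrix}\). The Python sketch below compares \(\mathbf{UD}^3\mathbf{U}^{-1}\) with the slow product \(M \cdot M \cdot M\):

```python
def mat_mul(A, B):
    """Row-by-column product for matrices stored as lists of rows."""
    n = len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(len(B[0]))]
            for i in range(len(A))]

M = [[2, 1],
     [1, 2]]
U = [[1, 1],
     [-1, 1]]            # eigenvectors as columns
Uinv = [[0.5, -0.5],
        [0.5, 0.5]]      # 2x2 inverse of U (det U = 2)
D3 = [[1**3, 0],
      [0, 3**3]]         # raise ONLY the diagonal entries to the power 3

# M^3 via diagonalisation: U D^3 U^{-1}
print(mat_mul(mat_mul(U, D3), Uinv))   # [[14.0, 13.0], [13.0, 14.0]]
# M^3 the slow way: M * M * M
print(mat_mul(mat_mul(M, M), M))       # [[14, 13], [13, 14]]
```

For \(n = 3\) the saving is small, but for \(n = 100\) the diagonal route needs just two matrix multiplications instead of ninety-nine.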
Summary Checklist
- Arithmetic: Can you multiply rows by columns?
- Transformations: Do you remember that order is Right-to-Left?
- Determinants: Can you calculate a \(3 \times 3\) determinant and factorise it using row/column operations?
- Inverses: Do you know the steps for a \(3 \times 3\) inverse?
- Systems: Can you explain why three planes might not have a single meeting point?
- Eigen: Can you solve the characteristic equation to find \(\lambda\)?
Don't worry if this seems like a lot! Matrices are a skill that improves with practice. Keep drawing the transformations and calculating the determinants, and the patterns will start to make sense.