Introduction to Further Matrix Algebra
Welcome to one of the most powerful chapters in Further Mathematics! So far, you have learned how matrices can transform shapes and solve equations. In this chapter, we are going to look "under the hood" of a matrix to find its Eigenvalues and Eigenvectors. Think of these as the "DNA" of a matrix—they tell us the fundamental ways a matrix scales and stretches space. This topic is vital for everything from Google's search algorithms to how engineers study vibrations in bridges. Don't worry if it feels abstract at first; we will break it down step-by-step!
1. Eigenvalues and Eigenvectors
Imagine you apply a matrix transformation to a bunch of vectors. Most vectors will be knocked off their original path and rotated. However, for every matrix, there are special "stubborn" vectors that stay on their original line—they only get longer or shorter. These are our Eigenvectors, and the amount they stretch by is the Eigenvalue (\(\lambda\)).
Prerequisite Check: The Identity Matrix
Before we start, remember the Identity Matrix (\(I\)). For a \(2 \times 2\) matrix, \(I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\). Multiplying a matrix by \(I\) is like multiplying a number by 1—it doesn't change anything. We use \(\lambda I\) to turn the scalar value \(\lambda\) into a matrix so we can subtract it from matrix \(A\).
The Characteristic Equation
To find the eigenvalues of a matrix \(A\), we solve the Characteristic Equation:
\( \det(A - \lambda I) = 0 \)
Step-by-Step Process:
1. Subtract \(\lambda\) from the main diagonal elements of matrix \(A\).
2. Find the determinant of this new matrix.
3. Set the resulting polynomial to zero and solve for \(\lambda\).
Example: For a \(2 \times 2\) matrix, this will usually result in a quadratic equation. For a \(3 \times 3\) matrix, you will get a cubic equation.
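The three steps above can be sketched in Python with NumPy. The matrix here is a hypothetical example (not one from the text), chosen so the quadratic factorises nicely:

```python
import numpy as np

# Hypothetical example matrix for illustration.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Steps 1-2: for a 2x2 matrix, det(A - lambda*I) expands to
#   (a - lam)(d - lam) - b*c = lam^2 - (a + d)*lam + (a*d - b*c)
trace = A[0, 0] + A[1, 1]                  # a + d = 7
det = A[0, 0]*A[1, 1] - A[0, 1]*A[1, 0]    # a*d - b*c = 10

# Step 3: solve lam^2 - 7*lam + 10 = 0
eigenvalues = np.roots([1, -trace, det])   # eigenvalues 2 and 5
print(sorted(eigenvalues))
```

Notice that for a \(2 \times 2\) matrix the coefficients of the quadratic are just the trace and the determinant, which is a handy check on your algebra.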
Finding Eigenvectors
Once you have your eigenvalues (\(\lambda\)), plug each one back into the equation:
\( (A - \lambda I)\mathbf{v} = 0 \)
Here, \(\mathbf{v}\) is the eigenvector \(\begin{pmatrix} x \\ y \end{pmatrix}\). You will get a set of simultaneous equations. Because eigenvectors represent a direction, there isn't just one answer—any multiple of your vector is also an eigenvector!
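Continuing the hypothetical \(2 \times 2\) example (a matrix with eigenvalues 2 and 5), substituting \(\lambda = 2\) gives simultaneous equations that collapse to one line, and any vector on that line works:

```python
import numpy as np

# Hypothetical matrix; lambda = 2 solves its characteristic equation.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam = 2.0

# (A - 2I)v = 0 gives 2x + y = 0 twice over, so pick x = 1, y = -2.
v = np.array([1.0, -2.0])

# Check: A v should equal lam * v (any scalar multiple of v also works).
print(A @ v)     # [ 2. -4.]
print(lam * v)   # [ 2. -4.]
```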
Quick Review Box:
• Eigenvalue (\(\lambda\)): The scale factor of the stretch.
• Eigenvector (\(\mathbf{v}\)): The direction that stays the same.
• Normalised Vector: An eigenvector with a length of 1. To normalise \(\begin{pmatrix} 3 \\ 4 \end{pmatrix}\), divide by its magnitude \(\sqrt{3^2 + 4^2} = 5\) to get \(\begin{pmatrix} 0.6 \\ 0.8 \end{pmatrix}\).
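The normalisation in the review box can be checked in one line of NumPy, using the same \(\begin{pmatrix} 3 \\ 4 \end{pmatrix}\) vector:

```python
import numpy as np

v = np.array([3.0, 4.0])
v_hat = v / np.linalg.norm(v)   # magnitude is sqrt(3^2 + 4^2) = 5
print(v_hat)                    # [0.6 0.8]
```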
Complex and Repeated Eigenvalues
Sometimes, your quadratic or cubic equation might have Repeated Roots (e.g., \(\lambda = 2, 2, 5\)) or Complex Roots (e.g., \(\lambda = 3 \pm 2i\)).
• Repeated Eigenvalues: These can sometimes lead to fewer eigenvectors than you'd expect, but in this syllabus, you will mostly deal with cases that still allow for diagonalisation.
• Complex Eigenvalues: These usually represent a transformation involving a rotation. If the matrix has real numbers, complex eigenvalues always come in conjugate pairs.
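A pure rotation makes a good sanity check for the complex case: a \(90^\circ\) rotation leaves no real direction fixed, so its eigenvalues must be a complex conjugate pair. A quick sketch:

```python
import numpy as np

# A 90-degree rotation matrix: real entries, but no real invariant direction.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

eigenvalues = np.linalg.eigvals(R)
print(eigenvalues)   # a conjugate pair, approximately +i and -i
```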
Key Takeaway: Eigenvalues are found by solving \(\det(A - \lambda I) = 0\), and eigenvectors are the directions associated with those scales.
2. Diagonalisation
Matrices can be messy. Diagonalisation is the process of finding a way to represent a matrix in its simplest possible form—a Diagonal Matrix (\(D\)), where all numbers are zero except for those on the main diagonal.
The Formula
If a matrix \(A\) has enough eigenvectors to form a basis, we can write:
\( A = PDP^{-1} \) or \( D = P^{-1}AP \)
Where:
• \(D\): The diagonal matrix containing the Eigenvalues on the main diagonal.
• \(P\): The Modal Matrix, where each column is an Eigenvector corresponding to the eigenvalues in \(D\).
Did you know? Diagonalising a matrix makes it incredibly easy to find high powers. For example, \(A^{10} = PD^{10}P^{-1}\). Since \(D\) is diagonal, you just raise the individual numbers on the diagonal to the power of 10! This is much faster than multiplying \(A\) by itself ten times.
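This shortcut is easy to verify numerically. Using a hypothetical diagonalisable matrix, we build \(P\) and \(D\), raise only the diagonal entries to the 10th power, and compare against brute-force repeated multiplication:

```python
import numpy as np

# Hypothetical diagonalisable matrix (eigenvalues 2 and 5).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, P = np.linalg.eig(A)   # columns of P are the eigenvectors
# A^10 = P D^10 P^{-1}; D^10 just raises each diagonal entry to the 10th power.
A10 = P @ np.diag(eigvals**10) @ np.linalg.inv(P)

# Compare with multiplying A by itself ten times.
print(np.allclose(A10, np.linalg.matrix_power(A, 10)))   # True
```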
Symmetric Matrices and Orthogonal Diagonalisation
A Symmetric Matrix is one where \(A = A^T\) (it looks like a mirror image across the main diagonal). These are special because:
1. All their eigenvalues are always real.
2. Their eigenvectors are always perpendicular (orthogonal) to each other.
When we normalise these perpendicular eigenvectors, the matrix \(P\) becomes an Orthogonal Matrix. For an orthogonal matrix, the inverse is just the transpose: \(P^{-1} = P^T\).
This simplifies our formula to: \( D = P^T A P \).
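Both special properties can be confirmed on a small hypothetical symmetric matrix. NumPy's `eigh` routine (for symmetric matrices) returns eigenvectors that are already normalised and orthogonal, so \(P^T A P\) comes out diagonal directly:

```python
import numpy as np

# Hypothetical symmetric matrix: S equals its own transpose.
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, P = np.linalg.eigh(S)   # orthonormal eigenvector columns
D = P.T @ S @ P                  # P^{-1} = P^T for an orthogonal matrix

print(np.allclose(D, np.diag(eigvals)))   # True: D is diagonal
print(np.allclose(P.T @ P, np.eye(2)))    # True: P is orthogonal
```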
Common Mistake to Avoid: When building matrix \(P\), make sure the order of your eigenvectors matches the order of the eigenvalues in matrix \(D\). If \(\lambda_1\) is the first entry in \(D\), its corresponding eigenvector must be the first column in \(P\).
Key Takeaway: Diagonalisation simplifies a matrix using its eigenvalues and eigenvectors, making complex calculations like powers much easier.
3. The Cayley-Hamilton Theorem
The Cayley-Hamilton Theorem is a beautiful and surprising rule. It states that: Every square matrix satisfies its own characteristic equation.
What does this mean?
If the characteristic equation of matrix \(A\) is \(\lambda^2 - 5\lambda + 6 = 0\), then the theorem tells us that:
\( A^2 - 5A + 6I = \mathbf{0} \)
(Notice how the constant 6 becomes \(6I\) because we are working with matrices, and 0 becomes the zero matrix \(\mathbf{0}\)).
Why is this useful?
1. Finding the Inverse (\(A^{-1}\)): Instead of using the long determinant method, you can rearrange the Cayley-Hamilton equation.
Example: From \( A^2 - 5A + 6I = \mathbf{0} \), multiply everything by \(A^{-1}\):
\( A - 5I + 6A^{-1} = \mathbf{0} \)
\( 6A^{-1} = 5I - A \)
\( A^{-1} = \frac{1}{6}(5I - A) \)
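We can check this rearrangement numerically with a hypothetical matrix whose characteristic equation really is \(\lambda^2 - 5\lambda + 6 = 0\) (trace 5, determinant 6), matching the worked example:

```python
import numpy as np

# Hypothetical matrix with trace 5 and determinant 6,
# so its characteristic equation is lam^2 - 5*lam + 6 = 0.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
I = np.eye(2)

# Cayley-Hamilton: A^2 - 5A + 6I should be the zero matrix.
print(np.allclose(A @ A - 5*A + 6*I, np.zeros((2, 2))))   # True

# Rearranged: A^{-1} = (1/6)(5I - A)
A_inv = (5*I - A) / 6
print(np.allclose(A @ A_inv, I))   # True
```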
2. Finding Higher Powers:
You can express \(A^3, A^4\), etc., as linear combinations of \(A\) and \(I\). This is much faster than manual multiplication.
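For instance, with the same hypothetical characteristic equation \(\lambda^2 - 5\lambda + 6 = 0\), Cayley-Hamilton gives \(A^2 = 5A - 6I\), and multiplying through by \(A\) gives \(A^3 = 5A^2 - 6A = 5(5A - 6I) - 6A = 19A - 30I\):

```python
import numpy as np

# Hypothetical matrix satisfying A^2 - 5A + 6I = 0.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
I = np.eye(2)

# A^3 expressed as a linear combination of A and I.
A3 = 19*A - 30*I
print(np.allclose(A3, np.linalg.matrix_power(A, 3)))   # True
```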
Memory Aid: Cayley-Hamilton is like a "Matrix Shortcut." If you see a question asking for \(A^{-1}\) or \(A^n\) after you've already found the characteristic equation, this theorem is usually the intended path!
Key Takeaway: The Cayley-Hamilton theorem allows us to swap \(\lambda\) for \(A\) in the characteristic equation to solve for inverses and powers efficiently.
Chapter Summary
Congratulations! You've covered the core of Further Matrix Algebra. Here is your quick recap:
• Use \( \det(A - \lambda I) = 0 \) to find Eigenvalues.
• Use \( (A - \lambda I)\mathbf{v} = 0 \) to find Eigenvectors.
• Diagonalise using \( D = P^{-1}AP \) to simplify matrix powers.
• For Symmetric matrices, use Orthogonal diagonalisation (\(P^{-1} = P^T\)).
• Use Cayley-Hamilton to turn characteristic equations into matrix identities for finding inverses.
Don't worry if the \(3 \times 3\) determinants take a while to calculate—practice makes perfect, and always double-check your signs!