Welcome to Further Matrix Algebra!
In your earlier studies, you learned how to add, multiply, and find the inverse of matrices. Think of that as learning how to drive a car. In this chapter, we are going to open the hood and look at the engine! We will explore Eigenvalues and Eigenvectors, which are like the "DNA" of a matrix, telling us how it behaves during transformations. Don't worry if it sounds a bit technical at first—we'll break it down step-by-step.
1. Eigenvalues and Eigenvectors
Imagine you have a 2D transformation matrix \(A\). Usually, when you multiply a vector by \(A\), the vector changes both its length and its direction. However, for many matrices there are special "magic" directions where the vector only changes its length (or flips to point the opposite way).
Key Terms:
- Eigenvector (\(v\)): A non-zero vector that stays in the same line (direction) after the transformation.
- Eigenvalue (\(\lambda\)): The scale factor by which that eigenvector is stretched or squashed.
The fundamental equation is: \(Av = \lambda v\)
How to find Eigenvalues (The Characteristic Equation)
To find the eigenvalues (\(\lambda\)) for a \(2 \times 2\) matrix \(A\), we use the characteristic equation:
\(\det(A - \lambda I) = 0\)
Where \(I\) is the identity matrix \( \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \).
Step-by-Step Process:
- Subtract \(\lambda\) from the numbers on the main diagonal (top-left to bottom-right).
- Find the determinant of this new matrix.
- Set that determinant to zero and solve the resulting quadratic equation for \(\lambda\).
Example: If \(A = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}\), the characteristic equation is:
\(\det \begin{pmatrix} 1-\lambda & 1 \\ 4 & 1-\lambda \end{pmatrix} = 0\)
\((1-\lambda)^2 - 4 = 0\)
Solving this gives \(\lambda = 3\) and \(\lambda = -1\).
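If you have Python with NumPy available, you can check this example yourself; `np.linalg.eigvals` finds the roots of the characteristic equation for you (this is optional and beyond the syllabus, but it is a handy way to verify your working):

```python
import numpy as np

# The example matrix from above
A = np.array([[1, 1],
              [4, 1]])

# np.linalg.eigvals returns the roots of det(A - lambda*I) = 0
eigenvalues = np.sort(np.linalg.eigvals(A))
print(eigenvalues.tolist())  # [-1.0, 3.0]
```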
How to find Eigenvectors
Once you have an eigenvalue (\(\lambda\)), plug it back into the equation \((A - \lambda I)v = 0\). You are looking for a vector \(v = \begin{pmatrix} x \\ y \end{pmatrix}\) that works.
Quick Tip: You will usually get two equations that look different but are actually multiples of each other. Just pick a simple value for \(x\) (like 1) and find the corresponding \(y\).
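For the example matrix above, \(\lambda = 3\) gives the equation \(-2x + y = 0\) (so \(v = \begin{pmatrix} 1 \\ 2 \end{pmatrix}\)) and \(\lambda = -1\) gives \(2x + y = 0\) (so \(v = \begin{pmatrix} 1 \\ -2 \end{pmatrix}\)). An optional Python sketch confirms that these satisfy the fundamental equation \(Av = \lambda v\):

```python
import numpy as np

A = np.array([[1, 1],
              [4, 1]])

# Eigenvectors found by hand, picking x = 1 in each case
v1 = np.array([1, 2])   # for lambda = 3
v2 = np.array([1, -2])  # for lambda = -1

# Check the fundamental equation Av = lambda * v
print(A @ v1, 3 * v1)    # both [3 6]
print(A @ v2, -1 * v2)   # both [-1  2]
```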
Normalised Vectors
Sometimes the exam asks for a normalised eigenvector. This just means a vector with a length (magnitude) of 1.
To normalise a vector, divide it by its magnitude: \(\hat{v} = \frac{v}{|v|}\).
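For example, the eigenvector \(\begin{pmatrix} 1 \\ 2 \end{pmatrix}\) has magnitude \(\sqrt{1^2 + 2^2} = \sqrt{5}\), so its normalised form is \(\frac{1}{\sqrt{5}}\begin{pmatrix} 1 \\ 2 \end{pmatrix}\). A quick optional check in Python:

```python
import numpy as np

v = np.array([1.0, 2.0])
magnitude = np.linalg.norm(v)  # sqrt(1^2 + 2^2) = sqrt(5)
v_hat = v / magnitude

print(v_hat)                  # roughly [0.447, 0.894]
print(np.linalg.norm(v_hat))  # 1.0
```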
Quick Review: Important Scenarios
- Repeated Eigenvalues: Sometimes the quadratic gives the same value twice (e.g., \(\lambda = 2, 2\)). In that case you may only be able to find one independent eigenvector direction.
- Complex Eigenvalues: If the quadratic has no real roots, the eigenvalues will be complex numbers (e.g., \(2 \pm 3i\)). This usually represents a rotation (possibly combined with an enlargement).
Key Takeaway: Eigenvalues tell us the "scale factors" of a matrix, and eigenvectors tell us the "directions" that don't rotate.
2. Diagonalization (Reduction to Diagonal Form)
Multiplying a matrix by itself 100 times (\(A^{100}\)) is a nightmare. However, raising a diagonal matrix (where only the main diagonal has non-zero entries) to a power is easy: you just raise each diagonal entry to that power!
The Goal: We want to find a way to write \(A\) in terms of a diagonal matrix \(D\).
The Formula: \(P^{-1}AP = D\). Rearranging gives \(A = PDP^{-1}\), so \(A^n = PD^nP^{-1}\) (all the middle \(P^{-1}P\) pairs cancel).
- \(D\) (The Diagonal Matrix): This contains the eigenvalues on the main diagonal.
- \(P\) (The Modal Matrix): This contains the eigenvectors as its columns. Make sure the order of columns in \(P\) matches the order of eigenvalues in \(D\)!
Analogy: Think of \(P\) as a translator. It translates our "messy" matrix \(A\) into a "simple" language (\(D\)) where we can do calculations easily, and then \(P^{-1}\) translates it back.
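Using the earlier example, \(A = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}\) has eigenvalues \(3\) and \(-1\) with eigenvectors \(\begin{pmatrix} 1 \\ 2 \end{pmatrix}\) and \(\begin{pmatrix} 1 \\ -2 \end{pmatrix}\). An optional Python sketch shows the "translator" idea in action, including a high power computed the easy way:

```python
import numpy as np

A = np.array([[1, 1],
              [4, 1]])

# Eigenvalues 3 and -1, with matching eigenvectors as columns of P
D = np.diag([3, -1])
P = np.array([[1, 1],
              [2, -2]])  # column order must match D!

# Check P^{-1} A P = D
print(np.linalg.inv(P) @ A @ P)  # the diagonal matrix D (up to rounding)

# High powers become easy: A^5 = P D^5 P^{-1}
A5 = P @ np.diag([3**5, (-1)**5]) @ np.linalg.inv(P)
print(np.allclose(A5, np.linalg.matrix_power(A, 5)))  # True
```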
Symmetric Matrices and Orthogonal Diagonalization
If a matrix is symmetric (it looks the same if you reflect it across the main diagonal), something special happens: eigenvectors belonging to different eigenvalues are always perpendicular (orthogonal) to each other.
In this case, if you use normalised eigenvectors to build \(P\), then \(P\) becomes an orthogonal matrix. A huge shortcut here is that \(P^{-1} = P^T\) (the inverse is just the transpose!).
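You can see this shortcut numerically with an optional Python sketch. NumPy's `np.linalg.eigh` is designed for symmetric matrices and returns normalised, mutually perpendicular eigenvectors as the columns of \(P\) (the example matrix here is made up for illustration):

```python
import numpy as np

# A made-up symmetric matrix (mirror image across the diagonal)
S = np.array([[2, 1],
              [1, 2]])

# eigh is NumPy's routine for symmetric matrices: it returns
# normalised, orthogonal eigenvectors as the columns of P
eigenvalues, P = np.linalg.eigh(S)

# The huge shortcut: P^{-1} is just P^T
print(np.allclose(P.T, np.linalg.inv(P)))              # True
print(np.allclose(P.T @ S @ P, np.diag(eigenvalues)))  # True
```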
Common Mistake to Avoid:
When building matrix \(P\), students often mix up the columns. If your first eigenvalue in \(D\) is \(\lambda_1\), the first column of \(P\) must be the eigenvector associated with \(\lambda_1\).
Key Takeaway: Diagonalization simplifies a matrix so we can easily calculate high powers or understand its structure.
3. The Cayley-Hamilton Theorem
This theorem sounds fancy, but it has a very simple and cool meaning: "A matrix satisfies its own characteristic equation."
If your characteristic equation for matrix \(A\) is \(\lambda^2 - 5\lambda + 6 = 0\), then the theorem says:
\(A^2 - 5A + 6I = 0\)
Note: The constant term \(6\) becomes \(6I\) because you can't add a plain number to a matrix, and the \(0\) on the right-hand side is the zero matrix.
What is it used for?
- Finding the Inverse (\(A^{-1}\)): Multiply every term in the equation by \(A^{-1}\).
\(A - 5I + 6A^{-1} = 0\)
Then rearrange to find \(A^{-1} = \frac{1}{6}(5I - A)\). This is often faster than the standard method!
- Finding high powers of \(A\): You can rearrange the equation to find \(A^2 = 5A - 6I\). To find \(A^3\), multiply both sides by \(A\) and substitute the \(A^2\) term back in.
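Both tricks are easy to verify numerically. Here is an optional Python sketch using a made-up matrix chosen so that its characteristic equation is exactly \(\lambda^2 - 5\lambda + 6 = 0\) (trace \(5\), determinant \(6\)):

```python
import numpy as np

# A hypothetical matrix with trace 5 and determinant 6,
# so its characteristic equation is lambda^2 - 5*lambda + 6 = 0
A = np.array([[4, 2],
              [-1, 1]])
I = np.eye(2)

# Cayley-Hamilton: A satisfies its own characteristic equation
print(A @ A - 5 * A + 6 * I)  # the zero matrix

# Inverse from the rearranged equation: A^{-1} = (1/6)(5I - A)
A_inv = (5 * I - A) / 6
print(np.allclose(A_inv, np.linalg.inv(A)))  # True

# Powers: A^2 = 5A - 6I, so A^3 = 5A^2 - 6A = 19A - 30I
print(np.allclose(19 * A - 30 * I, np.linalg.matrix_power(A, 3)))  # True
```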
Did you know? This theorem works for matrices of any size, but for your AS Level, you will focus on \(2 \times 2\) matrices.
Key Takeaway: The Cayley-Hamilton theorem turns matrix algebra into "normal" algebra, making it easy to find inverses and powers.
Summary Checklist
Before you tackle the exam questions, make sure you are comfortable with:
- Finding eigenvalues by solving \(\det(A - \lambda I) = 0\).
- Finding eigenvectors by solving \((A - \lambda I)v = 0\).
- Normalising a vector so its magnitude is 1.
- Setting up \(P\) and \(D\) for diagonalization.
- Using \(P^T\) instead of \(P^{-1}\) when \(P\) is built from the normalised eigenvectors of a symmetric matrix.
- Replacing \(\lambda\) with \(A\) in the characteristic equation to use Cayley-Hamilton.
Don't worry if this seems tricky at first—eigenvalues are a big jump from GCSE or standard A Level Maths, but with a bit of practice on the characteristic equation, it will become second nature!