The previous note studies determinants from the row-operation viewpoint. This note turns the picture sideways.
Transpose lets us move information between rows and columns. Once that symmetry is in place, column expansion and column operations become just as legitimate as their row versions. The chapter then closes with two classical formulas: adjoints and Cramer's rule.
Transpose does not change the determinant
Theorem
Determinant of a transpose
For every square matrix $A$,

$$\det(A^{\mathsf T}) = \det(A).$$
This theorem is conceptually important. A determinant is defined by expanding along rows, but it does not secretly prefer rows over columns. Transpose swaps the two viewpoints without changing the final scalar.
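The symmetry can be sanity-checked numerically. The sketch below uses NumPy with an arbitrarily chosen matrix (any square matrix would do):

```python
import numpy as np

# An arbitrary square matrix; any square matrix works here.
A = np.array([[2.0, 1.0, 0.0],
              [3.0, -1.0, 4.0],
              [1.0, 5.0, 2.0]])

# det(A) and det(A^T) agree up to floating-point rounding.
assert np.isclose(np.linalg.det(A), np.linalg.det(A.T))
```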
Theorem
Cofactor expansion along any column
For any fixed column $j$ of a square matrix $A$,

$$\det(A) = \sum_{i=1}^{n} a_{ij}\, C_{ij},$$

where $C_{ij}$ is the $(i,j)$ cofactor. Equivalently,

$$\det(A) = \sum_{i=1}^{n} (-1)^{i+j}\, a_{ij}\, M_{ij},$$

where $M_{ij}$ is the $(i,j)$ minor.
So you may expand along a row or along a column. The right choice is whichever one produces the cleanest minors.
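To see the theorem computationally, here is a small sketch (the matrix and the recursive helper are illustrative choices, not part of the original example) that expands along each column in turn and compares against NumPy's determinant:

```python
import numpy as np

def minor(A, i, j):
    """Delete row i and column j of A."""
    return np.delete(np.delete(A, i, axis=0), j, axis=1)

def det_by_column(A, j):
    """Cofactor expansion of det(A) along column j (0-indexed)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    return sum((-1) ** (i + j) * A[i, j] * det_by_column(minor(A, i, j), 0)
               for i in range(n))

A = np.array([[2.0, 1.0, 0.0],
              [3.0, -1.0, 4.0],
              [1.0, 5.0, 2.0]])

# Every column gives the same determinant.
for j in range(3):
    assert np.isclose(det_by_column(A, j), np.linalg.det(A))
```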
Worked example
A column expansion with many zeros
Let
The third column contains the most zeros, so expand along it:
That gives
The same answer would come from row expansion, but the chosen column makes the arithmetic shorter.
Column operations obey the same pattern
Because transpose preserves determinant, every row-operation rule has a column version.
Theorem
How column operations change determinant
Let $B$ be obtained from a square matrix $A$ by one elementary column operation.
- Swapping two columns multiplies the determinant by $-1$.
- Multiplying one column by a scalar $c$ multiplies the determinant by $c$.
- Replacing one column with itself plus a multiple of another column leaves the determinant unchanged.
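These three rules can be checked directly with NumPy; the matrix and the scalars below are arbitrary choices for illustration:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [3.0, -1.0, 4.0],
              [1.0, 5.0, 2.0]])
d = np.linalg.det(A)

# Swap columns 0 and 1: determinant flips sign.
B = A[:, [1, 0, 2]]
assert np.isclose(np.linalg.det(B), -d)

# Scale column 1 by c = 3: determinant is multiplied by 3.
C = A.copy()
C[:, 1] *= 3
assert np.isclose(np.linalg.det(C), 3 * d)

# Add 5 * column 0 to column 2: determinant is unchanged.
D = A.copy()
D[:, 2] += 5 * A[:, 0]
assert np.isclose(np.linalg.det(D), d)
```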
Worked example
Use column operations to create zeros
Consider
Apply two elementary column operations:
These are column-addition operations, so the determinant is unchanged. Now expand along the third column:
The point is not that column operations are always superior. The point is that you may use whichever direction creates more zeros with less bookkeeping.
Adjoint matrices package all cofactors at once
The cofactor of one entry helps in one expansion. The adjoint matrix collects all cofactors into one object.
Definition
Adjoint matrix
For an $n \times n$ matrix $A$, first form the cofactor matrix

$$C = \big[\,C_{ij}\,\big], \qquad C_{ij} = (-1)^{i+j} M_{ij},$$

where $M_{ij}$ is the $(i,j)$ minor. The adjoint matrix is the transpose of that cofactor matrix:

$$\operatorname{adj}(A) = C^{\mathsf T}.$$
Theorem
Adjoint identity and inverse formula
For every square matrix $A$,

$$A \operatorname{adj}(A) = \operatorname{adj}(A)\, A = \det(A)\, I.$$

If $\det(A) \neq 0$, then $A$ is invertible and

$$A^{-1} = \frac{1}{\det(A)}\,\operatorname{adj}(A).$$
This formula is conceptually clean, but in numerical work it is usually less efficient than row reduction. Its value is that it exposes the algebraic structure of inverse matrices.
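As a sketch of the identity (the `adjoint` helper and the test matrix are illustrative assumptions, not a standard library routine):

```python
import numpy as np

def adjoint(A):
    """adj(A): transpose of the cofactor matrix."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            M = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(M)
    return C.T

A = np.array([[2.0, 1.0], [5.0, 3.0]])
d = np.linalg.det(A)  # = 1 up to rounding

# Adjoint identity: A @ adj(A) = det(A) * I.
assert np.allclose(A @ adjoint(A), d * np.eye(2))
# Inverse formula: A^{-1} = adj(A) / det(A).
assert np.allclose(adjoint(A) / d, np.linalg.inv(A))
```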
Worked example
Recover the 2×2 inverse formula from the adjoint
Let

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.$$

The cofactor matrix is

$$C = \begin{pmatrix} d & -c \\ -b & a \end{pmatrix},$$

so

$$\operatorname{adj}(A) = C^{\mathsf T} = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$

Therefore, when $\det(A) = ad - bc \neq 0$,

$$A^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$

The familiar inverse formula is really the adjoint identity written in the $2 \times 2$ case.
Cramer's rule solves one coordinate at a time
Theorem
Cramer's rule
Let $A$ be an invertible $n \times n$ matrix and let $b \in \mathbb{R}^n$. For each $j$, let $A_j(b)$ be the matrix obtained by replacing the $j$th column of $A$ with $b$.

If $x$ is the unique solution of $Ax = b$, then

$$x_j = \frac{\det(A_j(b))}{\det(A)}.$$
Cramer's rule is beautiful because each coordinate is isolated by one determinant ratio. It is not the fastest method for large systems, but it is a clean theoretical formula for square invertible systems.
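A minimal implementation sketch, assuming a square system with nonzero determinant (the `cramer` helper and the example system are illustrative):

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's rule (A square, det(A) != 0)."""
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("Cramer's rule needs a nonzero determinant")
    x = np.empty(A.shape[0])
    for j in range(A.shape[0]):
        Aj = A.copy()
        Aj[:, j] = b          # replace column j with b
        x[j] = np.linalg.det(Aj) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
# Agrees with an elimination-based solver on this system.
assert np.allclose(cramer(A, b), np.linalg.solve(A, b))
```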
Worked example
Solve a 2×2 system with Cramer's rule
Solve
Write
First compute
Replace the first column by b:
Replace the second column by b:
So
Common mistake
Cramer's rule is not a universal system solver
Cramer's rule requires a square coefficient matrix and a nonzero determinant. If the system is rectangular or singular, the rule is not available. Even when it applies, it is usually less efficient than Gaussian elimination for large systems.
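A quick illustration with a singular matrix (arbitrary example):

```python
import numpy as np

# A singular coefficient matrix: the second row is twice the first.
S = np.array([[1.0, 2.0], [2.0, 4.0]])
assert np.isclose(np.linalg.det(S), 0.0)
# det(S) = 0, so the Cramer ratios det(S_j(b)) / det(S) are undefined
# and the rule does not apply; use elimination-based analysis instead.
```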
Quick check
Does transpose change the determinant of a square matrix?
State the theorem directly.
Answer

No. For every square matrix $A$, $\det(A^{\mathsf T}) = \det(A)$.
Quick check
What column operation leaves the determinant unchanged?
Think of the column analogue of a type-III row operation.
Answer

Adding a multiple of one column to another column (the column analogue of a type-III row operation) leaves the determinant unchanged.
Quick check
When may Cramer's rule be used?
Identify the structural hypotheses on A.
Answer

The coefficient matrix $A$ must be square with $\det(A) \neq 0$; equivalently, $A$ must be invertible.
Exercises
Quick check
Use a column expansion to compute .
Choose the column with the most zeros.
Guided solution
Quick check
For , write down .
Compute the four cofactors first, then transpose the cofactor matrix.
Guided solution
Quick check
Use Cramer's rule to find x_1 for the system , .
You only need $\det(A)$ and the determinant of the matrix obtained by replacing the first column of $A$ with $b$.
Guided solution
Related notes
Return to 7.2 Row operations, products, and invertibility if the row-operation bookkeeping is still shaky.
Keep 3.2 Transpose and special matrices nearby, because the transpose theorem here extends that earlier structural chapter.
For system language, connect this note back to 1.1 Equations and solution sets and 2.3 Gaussian elimination and RREF.