Invertibility is one of the first places where linear algebra becomes more than a procedure for solving one system. A square matrix is invertible exactly when it can be undone by another square matrix, and that idea turns out to be equivalent to many other statements: row reduction to the identity, consistency of every system $Ax = b$, linear independence of the columns, and the ability to write every vector as a linear combination of those columns.
This note develops those equivalences carefully. The goal is not only to know what the symbol $A^{-1}$ means, but also to recognize when it exists and how to use it without guessing.
Left and right inverses
Before the square case, it is useful to separate two one-sided notions.
Definition
Left inverse and right inverse
Let $A$ be a $p \times q$ matrix.
- A matrix $B$ is a left inverse of $A$ if $BA = I_q$.
- A matrix $C$ is a right inverse of $A$ if $AC = I_p$.
These definitions matter because matrix multiplication is not commutative. For rectangular matrices, a left inverse and a right inverse need not both exist. The square case is special.
Definition
Invertible matrix
Let $A$ be a square matrix. We say that $A$ is invertible if there exists a matrix $B$ such that
$$AB = BA = I.$$
The matrix $B$ is called the inverse of $A$, and we write $B = A^{-1}$.
Theorem
The inverse is unique
If $B$ is a left inverse of $A$ and $C$ is a right inverse of $A$, then $B = C$. So an invertible matrix has exactly one inverse.
Proof
Why the inverse is unique
Since $BA = I$ and $AC = I$, associativity gives
$$B = BI = B(AC) = (BA)C = IC = C.$$
Any two inverses of $A$ are each both a left inverse and a right inverse, so by this computation they must be equal.
What invertibility means
Invertibility is a reversibility statement. Applying $A$ changes a vector, but if $A$ is invertible then $A^{-1}$ undoes that change exactly.
That is why the identity matrix appears in the definition. The identity matrix does nothing:
$$Ix = x$$
for every compatible vector $x$. An inverse is precisely a matrix that brings you back to that unchanged state.
Worked example
A diagonal matrix is easy to invert
Let $D$ be a diagonal matrix with nonzero diagonal entries $d_1, \dots, d_p$. Then $D^{-1}$ is the diagonal matrix with diagonal entries $1/d_1, \dots, 1/d_p$.
This works because each diagonal entry is replaced by its reciprocal, and the off-diagonal zeros stay zero. Multiplying $D$ by $D^{-1}$ gives $I$.
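The reciprocal rule is easy to sketch in code. The helper below is illustrative (the name `diag_inverse` is ours, not from the text), using exact fractions so the check comes out exactly right:

```python
from fractions import Fraction

def diag_inverse(d):
    """Invert a diagonal matrix, given as the list of its diagonal entries.

    Each nonzero entry d_i is replaced by 1/d_i; a zero entry means the
    matrix is singular and has no inverse."""
    if any(x == 0 for x in d):
        raise ValueError("a zero diagonal entry makes the matrix singular")
    return [1 / Fraction(x) for x in d]

d = [2, 4, 5]                 # diagonal entries of D
d_inv = diag_inverse(d)       # diagonal entries of D^{-1}
# D * D^{-1} is diagonal with entries d_i * (1/d_i) = 1, i.e. the identity.
print(all(a * b == 1 for a, b in zip(d, d_inv)))
```

Off-diagonal entries never enter the computation, which is exactly why diagonal matrices are the easiest case.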
Row reduction and the inverse
The most practical way to test invertibility is to row-reduce. The key point has two parts:
- row-operation matrices are invertible, with inverse given by the reverse row operation;
- a square matrix is invertible exactly when it can be row-reduced to $I$.
Theorem
Row-operation matrices are invertible
If $\rho$ is a row operation on matrices with $p$ rows, and $\rho^{-1}$ is
the reverse row operation, then the corresponding row-operation matrices
$E_{\rho}$ and $E_{\rho^{-1}}$ satisfy
$$E_{\rho}\, E_{\rho^{-1}} = E_{\rho^{-1}}\, E_{\rho} = I_p.$$
This gives a clean interpretation of row reduction: every row operation is actually multiplication on the left by an invertible matrix.
Theorem
Invertibility and row reduction
For a square matrix $A$, the following are equivalent:
- $A$ is invertible.
- $A$ is row-equivalent to $I$.
- $A$ is a product of row-operation matrices.
- $A$ is nonsingular.
The practical consequence is very concrete: if row operations transform $A$ to $I$, then those same operations, applied to $I$, transform it to $A^{-1}$.
Read and try
Follow one inverse-by-row-reduction example
The live demo lets you step through [A | I] until the left block becomes I.
Start from [A | I]. If A is invertible, row reduction will turn the left block into I.
| 1 | 2 | 1 | 1 | 0 | 0 |
| 0 | 1 | 1 | 0 | 1 | 0 |
| 2 | 3 | 4 | 0 | 0 | 1 |
The live demo above is the shortest way to see the logic. It is not the definition. It is the computational method that matches the definition.
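As a sketch of what the demo computes, the routine below (an illustrative `invert` helper, not the demo's actual code) row-reduces $[A \mid I]$ with exact fractions; when the left block reaches $I$, the right block is $A^{-1}$:

```python
from fractions import Fraction

def invert(A):
    """Gauss-Jordan: row-reduce [A | I]; the right block becomes A^{-1}."""
    n = len(A)
    # Build the augmented matrix [A | I] with exact arithmetic.
    M = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(1 if i == j else 0) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        # Find a nonzero pivot in this column (row swap if needed).
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Clear every other entry in the pivot column.
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]   # right block = A^{-1}

A = [[1, 2, 1], [0, 1, 1], [2, 3, 4]]   # the matrix from the demo above
Ainv = invert(A)
# Check: the product A * Ainv should be the identity.
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
print(prod == [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
```

Exact fractions avoid the rounding noise that floating-point elimination would introduce, so the identity check is an exact equality.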
Read and try
Trace one full row-reduction path
The live stepper walks through one complete elimination path, showing the row operation, the pivot you are focusing on, and the matrix produced at each step.
| 1 | 2 | 2 | 4 |
| 1 | 3 | 3 | 5 |
| 2 | 6 | 5 | 6 |
Row operation
Choose the first pivot in column 1.
What to notice
Column 1 already has a convenient pivot 1 in the first row, so we do not need a row swap.
Start with the augmented matrix. The first pivot should help us clear the entries underneath it.
The second widget shows the shape of a full elimination path. In an invertible case, the left block eventually becomes $I$, and that is the moment when the right block becomes the inverse.
Equivalent formulations
Invertibility is useful because it has a dictionary of equivalent conditions. This is the main bridge between algebra, row reduction, and systems of linear equations.
Theorem
Equivalent ways to recognize invertibility
Let $A$ be a $p \times p$ matrix. Then the following statements are equivalent:
- $A$ is invertible.
- $A$ is row-equivalent to $I_p$.
- $A$ is a product of row-operation matrices.
- $A$ has a left inverse.
- $A$ has a right inverse.
- $A$ is nonsingular.
- For every column vector $b$ with $p$ entries, the system $Ax = b$ is consistent.
- For every column vector $b$ with $p$ entries, the system $Ax = b$ has the unique solution $x = A^{-1}b$.
Two of these statements are especially important in practice.
- Statement 7 says that the columns of $A$ span $\mathbb{R}^p$.
- Statement 8 says that invertibility gives you a complete solution formula, not just existence.
That is why invertibility is the exact algebraic condition behind solving a linear system by a matrix inverse.
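The solution formula in statement 8 comes from multiplying both sides of $Ax = b$ on the left by $A^{-1}$:

```latex
\begin{aligned}
Ax &= b \\
A^{-1}(Ax) &= A^{-1}b \\
(A^{-1}A)\,x &= A^{-1}b \\
x &= A^{-1}b
\end{aligned}
```

Existence and uniqueness come together here: the formula produces a solution, and any solution must equal $A^{-1}b$ by the same computation.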
Row-equivalence through invertible matrices
We can push the row-operation viewpoint one step further. Instead of thinking about row-equivalence as a long list of elementary moves, package the whole list into one invertible matrix on the left.
Theorem
Row-equivalence is left multiplication by an invertible matrix
Suppose $A$ and $B$ are matrices with $p$ rows. Then the following are
equivalent:
- $A$ and $B$ are row-equivalent.
- There exists an invertible matrix $M$ such that $B = MA$.
Moreover, once $B = MA$, we also have $A = M^{-1}B$.
This theorem is not a new computational trick. It is a cleaner language for the same phenomenon. A sequence of row operations can always be compressed into one invertible matrix $M$, and the reverse row operations are encoded by $M^{-1}$.
Worked example
Reading a row-equivalence as one matrix equality
Let $E$ be the row-operation matrix for a single move, say adding a multiple of one row to another. For any matrix $A$ with a compatible number of rows, $EA$ is exactly the matrix that the move produces.
If we call this new matrix $B$, then $B = EA$. That single equation records the entire row operation. Since $E$ is invertible, $A$ and $B$ are row-equivalent.
The gain is conceptual. Once you know that row-equivalence means multiplication by an invertible matrix on the left, you can explain many invariants in one line instead of by repeating row-operation arguments.
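For instance, two row operations on $3$-row matrices can be packaged into a single matrix $M$, with $M^{-1}$ built from the reverse operations taken in the opposite order. The specific matrices below are illustrative choices, not from the text:

```python
from fractions import Fraction

def matmul(X, Y):
    """Plain matrix product with exact arithmetic."""
    return [[sum(Fraction(X[i][k]) * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# Two row-operation matrices (hypothetical example moves):
E1 = [[1, 0, 0], [0, 1, 0], [-2, 0, 1]]   # R3 -> R3 - 2*R1
E2 = [[1, 0, 0], [-3, 1, 0], [0, 0, 1]]   # R2 -> R2 - 3*R1

# Applying E1 first and E2 second is left-multiplication by M = E2*E1.
M = matmul(E2, E1)

# The reverse operations, composed in the opposite order, give M^{-1}.
F1 = [[1, 0, 0], [0, 1, 0], [2, 0, 1]]    # R3 -> R3 + 2*R1  (undoes E1)
F2 = [[1, 0, 0], [3, 1, 0], [0, 0, 1]]    # R2 -> R2 + 3*R1  (undoes E2)
Minv = matmul(F1, F2)

print(matmul(Minv, M) == I3)
```

Note the order reversal: undoing "first E1, then E2" means undoing E2 first, which is exactly the identity $(E_2 E_1)^{-1} = E_1^{-1} E_2^{-1}$.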
Theorem
Row operations preserve linear relations among corresponding columns
Let $A$ and $B$ be row-equivalent matrices, and write their columns as $a_1, \dots, a_q$ and $b_1, \dots, b_q$.
If
$$c_1 a_1 + c_2 a_2 + \cdots + c_q a_q = 0,$$
then
$$c_1 b_1 + c_2 b_2 + \cdots + c_q b_q = 0.$$
In particular, linear dependence and linear independence among corresponding columns are preserved by row-equivalence.
The proof is short once $B = MA$ is known, because $b_j = M a_j$ for each column. Multiply the relation for the columns of $A$ by $M$ on the left. Because matrix multiplication is linear,
$$c_1 b_1 + \cdots + c_q b_q = c_1 M a_1 + \cdots + c_q M a_q = M(c_1 a_1 + \cdots + c_q a_q) = M0 = 0,$$
which is exactly the corresponding relation among the columns of $B$.
This is the bridge from row reduction to column language. Row operations change the actual columns, but they do not change which columns are redundant or which column relations are forced by the others.
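A quick numerical check of this invariance, with an illustrative $A$ whose third column equals the first plus twice the second (both matrices are hypothetical choices):

```python
from fractions import Fraction

def matmul(X, Y):
    """Plain matrix product with exact arithmetic."""
    return [[sum(Fraction(X[i][k]) * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def column(M, j):
    return [row[j] for row in M]

# Column relation in A: a3 = a1 + 2*a2.
A = [[1, 0, 1],
     [0, 1, 2],
     [1, 1, 3]]

# An invertible M (a product of row operations); any invertible M works.
M = [[1, 0, 0], [2, 1, 0], [0, 1, 1]]
B = matmul(M, A)

# The same relation, with the same coefficients, holds for B's columns.
lhs = column(B, 2)
rhs = [x + 2 * y for x, y in zip(column(B, 0), column(B, 1))]
print(lhs == rhs)
```

The columns of $B$ are genuinely different vectors from the columns of $A$, yet the dependence $b_3 = b_1 + 2 b_2$ is forced, coefficient for coefficient.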
Why the reduced row-echelon form is unique
The reduced row-echelon form in a row-equivalence class is unique. That fact is easy to overlook, but it is what makes later definitions mathematically legitimate.
Theorem
A row-equivalence class has exactly one reduced row-echelon form
Suppose $A$ is a matrix, and suppose $B$ and $C$ are both in reduced row-echelon form. If $A$ is row-equivalent to $B$ and $A$ is row-equivalent to $C$, then
$$B = C.$$
A standard proof uses induction on the rank. The basic strategy is:
- compare the pivot columns from left to right,
- use preserved linear relations to force the same pivot positions, and then
- show that every free column must have the same coefficients in terms of the pivot columns.
So reduced row-echelon form is not merely a convenient final answer. It is the final answer inside a row-equivalence class.
Definition
Rank
The rank of a matrix is the number of pivots in its reduced row-echelon form.
This definition works only because the reduced row-echelon form is unique. If different reduction paths could produce different reduced forms with different numbers of pivots, then rank would depend on the calculation. The uniqueness theorem rules that out.
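A minimal pivot-counting sketch of this definition. Forward elimination to an echelon form already determines the pivot count, and the function name `rank` is ours, not from the text:

```python
from fractions import Fraction

def rank(A):
    """Count pivots after forward elimination to row-echelon form."""
    M = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(M), len(M[0])
    r = 0                             # index of the next pivot row
    for c in range(cols):
        # Look for a nonzero entry at or below row r in column c.
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue                  # free column: no pivot here
        M[r], M[pivot] = M[pivot], M[r]
        # Clear the entries below the pivot.
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

print(rank([[1, 2], [2, 4]]))   # dependent rows, so only one pivot
print(rank([[1, 2], [3, 4]]))   # invertible, so a full set of pivots
```

For a square matrix, comparing the result with the matrix size gives the row-reduction invertibility test in one line: $A$ is invertible exactly when `rank(A) == len(A)`.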
Column independence and linear combinations
The same source also recasts invertibility in terms of columns.
Theorem
Invertibility and the columns of a square matrix
For a $p \times p$ matrix $A$, the following are equivalent:
- $A$ is invertible.
- The columns of $A$ are linearly independent.
- Every column vector in $\mathbb{R}^p$ is a linear combination of the columns of $A$.
These are not separate facts. They are three ways of reading the same structural statement.
If the columns are linearly independent, then no column is redundant. If they span $\mathbb{R}^p$, then every target vector can be built from them. For a square matrix, those two conditions coincide, and both hold exactly when the matrix is invertible.
Why the transpose also matters
Invertibility behaves well under transpose.
Theorem
Transpose and powers
If $A$ is invertible, then:
- $A^T$ is invertible, and $(A^T)^{-1} = (A^{-1})^T$.
- $A^n$ is invertible for every positive integer $n$, and $(A^n)^{-1} = (A^{-1})^n$.
The transpose result is useful when you want to turn a statement about columns into a statement about rows. The power rule is useful when a repeated transformation appears in a calculation.
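The transpose rule can be checked directly on a $2 \times 2$ example using the explicit $ad - bc$ inverse formula (an illustrative check on one hypothetical matrix, not a proof):

```python
from fractions import Fraction

def inv2(M):
    """Inverse of a 2x2 matrix via the ad - bc formula."""
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    if det == 0:
        raise ValueError("singular matrix")
    return [[d / det, -b / det], [-c / det, a / det]]

def transpose(M):
    return [list(col) for col in zip(*M)]

A = [[1, 2], [3, 5]]              # det = 1*5 - 2*3 = -1, so invertible
lhs = inv2(transpose(A))          # (A^T)^{-1}
rhs = transpose(inv2(A))          # (A^{-1})^T
print(lhs == rhs)
```

Both sides come out to the same matrix, which is exactly the identity $(A^T)^{-1} = (A^{-1})^T$ for this example.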
Worked example
Find an inverse by row reduction
Take an invertible matrix $A$ and start from the augmented matrix $[A \mid I]$. Eliminate the entry below the first pivot, then scale the second row and clear the entry above the second pivot, and continue until the left block is the identity:
$$[A \mid I] \longrightarrow [I \mid A^{-1}].$$
The computation is not the point by itself. The point is that the right block of the augmented matrix records the inverse because the left block has been driven to the identity.
Common mistakes
Common mistake
Do not confuse one-sided inverses in the rectangular case
For a non-square matrix, having a left inverse does not automatically mean it has a right inverse. The square case is special: once an inverse exists, it is both a left inverse and a right inverse, and it is unique.
Common mistake
Do not guess invertibility from appearance
A matrix can look simple and still fail to be invertible. The correct test is to row-reduce it, or to use one of the equivalent conditions above.
Quick checks
Quick check
If $A$ is invertible, what is $(A^{-1})^{-1}$?
Use the defining property of an inverse.
Solution
Answer: $(A^{-1})^{-1} = A$. The equations $AA^{-1} = A^{-1}A = I$ say that $A$ is an inverse of $A^{-1}$, and inverses are unique.
Quick check
If $A$ is invertible, can the homogeneous system $Ax = 0$ have a nonzero solution?
Use the unique-solution statement.
Solution
Answer: No. By the unique-solution statement, $Ax = 0$ has exactly one solution, namely $x = A^{-1}0 = 0$.
Quick check
If $A$ is invertible, is $A^T$ invertible?
Use the transpose rule above.
Solution
Answer: Yes, and $(A^T)^{-1} = (A^{-1})^T$.
Quick check
If $B = MA$ with $M$ invertible and the columns of $A$ satisfy a linear relation $c_1 a_1 + \cdots + c_q a_q = 0$, what relation must hold among the columns of $B$?
Keep the same coefficients and use $B = MA$.
Solution
Answer: The same relation with the same coefficients, $c_1 b_1 + \cdots + c_q b_q = 0$, because $b_j = M a_j$ and multiplication by $M$ is linear.
Quick check
Why does uniqueness of RREF matter when defining rank?
Answer in one sentence using the phrase “well defined.”
Solution
Answer: Because every matrix has exactly one reduced row-echelon form, the pivot count does not depend on which reduction path you take, so rank is well defined.
Exercise
Quick check
Suppose $A$ is invertible and $AB = I$. Prove that $B = A^{-1}$.
Use the fact that the inverse of $A$ is unique.
Solution
Guided solution
Since $AB = I$, the matrix $B$ is a right inverse of $A$, while $A^{-1}$ is a left inverse. By the uniqueness theorem, a left inverse and a right inverse of the same matrix must agree, so $B = A^{-1}$.
Read this first
This page depends especially on 2.3 Gaussian elimination and RREF, 3.1 Matrix multiplication and identity matrices, and 3.2 Transpose and special matrices.