The previous notes explain what a basis is and how dimension is used. This note adds the structural theorems that make the theory reliable.
There are three questions behind the results:
- If a subspace is nonzero, must it have a basis?
- If we already have a basis, when can we replace some old basis vectors by new independent vectors?
- If two different bases describe the same subspace, how do their coordinate systems talk to each other?
The answers are not just formal background. They explain why dimension is well-defined, why independent lists can be extended, why spanning lists can be trimmed, and why diagonalization later is really a change of coordinates.
Why existence is not automatic
For familiar spaces such as $\mathbb{R}^n$, it is easy to write down the standard basis. For an arbitrary subspace $V$ of $\mathbb{R}^n$, a basis is less obvious. The subspace might be given by equations, by a span, or by some other membership condition.
The first theorem says that the situation is still controlled.
Theorem
Existence of a basis for subspaces of $\mathbb{R}^n$
Every nonzero subspace $V$ of $\mathbb{R}^n$ has a basis. More precisely, there are vectors
$v_1, \dots, v_q \in V$
such that the list $v_1, \dots, v_q$ is linearly independent and
$V = \operatorname{span}(v_1, \dots, v_q).$
The proof is a controlled selection process.
Start with any nonzero vector $v_1 \in V$. If every vector in $V$ is already a multiple of $v_1$, then $v_1$ is a basis. If not, choose $v_2 \in V$ that is not a multiple of $v_1$. Then $v_1, v_2$ are linearly independent.
Continue in the same way. At step $j$, if the current list
$v_1, \dots, v_j$
does not span $V$, choose a new vector
$v_{j+1} \in V$
outside the current span. The new list stays linearly
independent because the new vector was chosen not to be a linear combination
of the old ones.
This process cannot go on forever: no more than $n$ vectors in $\mathbb{R}^n$ can be
linearly independent. Therefore it must stop, and when it stops the chosen
vectors span $V$. They are independent by construction, so they form a basis.
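The selection process above is easy to mechanize. Here is a minimal sketch with NumPy (the function name `select_basis` and the tolerance are my own choices): scan a list of candidate vectors and keep each one that strictly increases the rank of the vectors chosen so far, i.e. each one that lies outside the current span.

```python
import numpy as np

def select_basis(vectors, tol=1e-10):
    """Greedy selection: keep each vector that is not in the span of
    the vectors already chosen (detected by a rank increase)."""
    chosen = []
    for v in vectors:
        candidate = np.column_stack(chosen + [v])
        if np.linalg.matrix_rank(candidate, tol=tol) == len(chosen) + 1:
            chosen.append(v)
    return chosen

# A spanning list for a plane in R^3, with a redundant vector in the middle.
vectors = [np.array([1.0, 0.0, 1.0]),
           np.array([2.0, 0.0, 2.0]),   # multiple of the first: skipped
           np.array([0.0, 1.0, 1.0])]
basis = select_basis(vectors)
print(len(basis))  # 2: the plane is two-dimensional
```

The rank check is the numerical stand-in for "is the new vector a linear combination of the old ones"; the loop stops adding vectors exactly when the span stops growing.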
Worked example
A basis produced by the selection idea
Let $V$ be a nonzero subspace of $\mathbb{R}^n$.
Choose any nonzero vector $v_1 \in V$.
Suppose this one vector does not span all of $V$, because some $v_2 \in V$
is not a scalar multiple of $v_1$. Then $v_1, v_2$ are linearly independent.
If every vector in $V$ can now be written as
$c_1 v_1 + c_2 v_2$
for scalars $c_1, c_2$, then $v_1, v_2$ span $V$ and form a basis. This also gives $\dim V = 2$.
The replacement idea
The basis-existence proof grows an independent list. The replacement theorem explains the complementary operation: insert new independent vectors into an old basis while deleting the correct old vectors.
Theorem
Replacement theorem
Let $V$ be a subspace of $\mathbb{R}^n$. Suppose
$v_1, \dots, v_q$
is a basis for $V$, and suppose
$u_1, \dots, u_p \in V$
are linearly independent. Then:
- $p \le q$;
- the vectors $u_1, \dots, u_p$, together with some $q - p$ of the old basis vectors $v_i$, form another basis for $V$.
This theorem is the precise version of the slogan:
independent vectors can replace the same number of old basis vectors.
The proof begins with the one-vector case. Suppose $v_1, \dots, v_q$ is a basis and
$u = c_1 v_1 + c_2 v_2 + \cdots + c_q v_q.$
At least one coefficient is nonzero. If $c_1 \neq 0$, then
$v_1 = \tfrac{1}{c_1}\left(u - c_2 v_2 - \cdots - c_q v_q\right).$
So the list
$u, v_2, \dots, v_q$
spans the same space as the old basis. It is also independent. If the first nonzero coefficient is not $c_1$, relabel the old basis vectors and run the same argument.
The full replacement theorem repeats this one-vector replacement step. Each new independent vector replaces one old basis vector, and independence prevents the process from getting stuck.
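The one-vector replacement step can be checked numerically. The sketch below is my own illustration (helper name and example basis are assumptions): expand the new vector $u$ in the old basis, swap $u$ in for an old basis vector whose coefficient is nonzero, and verify that the span is unchanged.

```python
import numpy as np

def replace_one(basis_matrix, u, tol=1e-10):
    """One replacement step: expand u in the old basis (columns of
    basis_matrix) and swap u for a vector with nonzero coefficient."""
    # Coefficients of u in the old basis: solve B c = u. Least squares
    # handles a non-square B whose columns are a basis of a subspace.
    c, *_ = np.linalg.lstsq(basis_matrix, u, rcond=None)
    i = int(np.argmax(np.abs(c)))        # index with a nonzero coefficient
    assert abs(c[i]) > tol, "u must be a nonzero vector of the subspace"
    new_basis = basis_matrix.copy()
    new_basis[:, i] = u                  # swap u in for the old vector
    return new_basis

B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])               # basis of a plane in R^3
u = B @ np.array([2.0, 3.0])             # u = 2*b1 + 3*b2 lies in the plane
Bnew = replace_one(B, u)
# Same plane: stacking old and new columns does not raise the rank above 2.
print(np.linalg.matrix_rank(np.column_stack([B, Bnew])))  # 2
```

The rank check at the end is exactly the "spans the same space" claim from the proof: all four columns live in one two-dimensional subspace.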
Dimension consequences
The replacement theorem gives the rigorous reason why dimension works.
Theorem
All bases of the same finite-dimensional subspace have the same size
If $\mathcal{B}$ and $\mathcal{C}$ are both bases for the same subspace $V$, then $\mathcal{B}$ and $\mathcal{C}$ contain the same number of vectors.
Indeed, apply the replacement theorem twice. If $\mathcal{B}$ has $p$ vectors and
$\mathcal{C}$ has $q$ vectors, then the independence of $\mathcal{B}$ inside a space with basis
$\mathcal{C}$ gives $p \le q$. Reversing the roles gives $q \le p$. Therefore $p = q$.
Several useful statements follow immediately.
Theorem
Counting rules inside a q-dimensional subspace
Let $\dim V = q$.
- Any linearly independent list in $V$ has at most $q$ vectors.
- Any list of more than $q$ vectors in $V$ is linearly dependent.
- A linearly independent list in $V$ can be extended to a basis of $V$.
- A spanning list for $V$ can be reduced to a basis of $V$ by deleting redundant vectors.
These are not separate tricks. They all express the same fact: a basis is the exact size of a nonredundant spanning list.
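The extension rule in particular is constructive. A small sketch under my own naming (`extend_to_basis` is not from the text): extend an independent list to a basis of $\mathbb{R}^n$ by offering the standard basis vectors one at a time and keeping those that increase the rank.

```python
import numpy as np

def extend_to_basis(independent, n, tol=1e-10):
    """Extend a linearly independent list to a basis of R^n by
    appending standard basis vectors that increase the rank."""
    basis = list(independent)
    for e in np.eye(n):                  # rows of I are e_1, ..., e_n
        if len(basis) == n:
            break
        if np.linalg.matrix_rank(np.column_stack(basis + [e]), tol=tol) == len(basis) + 1:
            basis.append(e)
    return basis

start = [np.array([1.0, 1.0, 0.0])]      # one independent vector in R^3
full = extend_to_basis(start, 3)
print(len(full))                         # 3
print(np.linalg.matrix_rank(np.column_stack(full)))  # 3: a basis of R^3
```

Note that $e_2$ is skipped here, because $(1,1,0)$, $e_1$, $e_2$ are dependent; the greedy scan automatically picks $e_3$ instead.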
Ordered bases and coordinate vectors
A basis as a set tells us which vectors are available. An ordered basis also fixes their order. Order matters for coordinates.
If
$\mathcal{B} = (b_1, \dots, b_p)$
is an ordered basis for $V$, then every $x \in V$ has a unique expression
$x = c_1 b_1 + \cdots + c_p b_p.$
The coordinate vector of $x$ relative to $\mathcal{B}$ is
$[x]_{\mathcal{B}} = \begin{pmatrix} c_1 \\ \vdots \\ c_p \end{pmatrix}.$
Changing the order of the basis changes the coordinate vector, even if the underlying basis vectors are the same.
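Computing a coordinate vector is a linear solve. A sketch (the plane and vectors are my own example): with basis matrix $B$ whose columns are the ordered basis vectors, solve $Bc = x$; a least-squares solve returns the exact coefficients whenever $x$ lies in the subspace.

```python
import numpy as np

# Ordered basis of a plane in R^3, stored as the columns of B.
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

x = B @ np.array([4.0, -1.0])            # a vector known to lie in the plane

# Coordinate vector [x]_B: solve B c = x.
coords, *_ = np.linalg.lstsq(B, x, rcond=None)
print(coords)                            # coefficients 4 and -1

# Reordering the basis permutes the coordinates.
B_swapped = B[:, ::-1]
coords_swapped, *_ = np.linalg.lstsq(B_swapped, x, rcond=None)
print(coords_swapped)                    # coefficients -1 and 4
```

The second solve makes the order-dependence concrete: the same vector $x$ gets a different coordinate column once the basis vectors trade places.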
Change-of-basis theorem
Now suppose the same $p$-dimensional subspace $V \subseteq \mathbb{R}^n$ has two ordered bases:
$\mathcal{B} = (b_1, \dots, b_p)$ and $\mathcal{C} = (c_1, \dots, c_p).$
Write the basis matrices
$B = \begin{pmatrix} b_1 & \cdots & b_p \end{pmatrix}$ and $C = \begin{pmatrix} c_1 & \cdots & c_p \end{pmatrix},$ whose columns are the basis vectors.
Theorem
Change-of-basis theorem
There is a unique invertible $p \times p$ matrix $P$ such that
$B = CP.$
The columns of $P$ are the coordinate vectors of the $b_j$'s in the ordered basis $\mathcal{C}$. If
$x \in V,$
then the coordinate vectors satisfy
$[x]_{\mathcal{C}} = P\,[x]_{\mathcal{B}}.$
Read this carefully. The matrix $P$ does not move the vector $x$ in the ambient
space. It converts the coordinate column from the $\mathcal{B}$-basis language into the
$\mathcal{C}$-basis language:
$[x]_{\mathcal{C}} = P\,[x]_{\mathcal{B}}.$
Because $P$ is invertible, the reverse conversion is
$[x]_{\mathcal{B}} = P^{-1}[x]_{\mathcal{C}}.$
If $V = \mathbb{R}^n$, then $B$ and $C$ are square invertible matrices, and the formula becomes especially concrete:
$P = C^{-1}B.$
For a proper subspace, $C$ is usually not square, so you should find each column $p_j$ of $P$ by solving
$C p_j = b_j, \qquad j = 1, \dots, p.$
Worked example: changing coordinates in a plane
Let $\mathcal{B} = (b_1, b_2)$ and $\mathcal{C} = (c_1, c_2)$ be two ordered bases for the same plane $V$, with basis matrices $B$ and $C$.
The vector equalities
$b_1 = p_{11} c_1 + p_{21} c_2, \qquad b_2 = p_{12} c_1 + p_{22} c_2$
combine into the matrix equality
$B = CP.$
Therefore
$P = \begin{pmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{pmatrix}$
is the change-of-basis matrix from the ordered basis $\mathcal{B}$ to the ordered basis $\mathcal{C}$.
For example, if
$[x]_{\mathcal{B}}$
is known, then
$[x]_{\mathcal{C}} = P\,[x]_{\mathcal{B}}.$
The actual vector has not changed. Only its coordinates have changed.
How to compute a change-of-basis matrix
Use this workflow.
- Decide the direction of conversion. If you want $[x]_{\mathcal{C}}$ from $[x]_{\mathcal{B}}$, you need the matrix $P$ with $B = CP$.
- For each $j$, solve $C p_j = b_j$.
- Put the solution columns together:
$P = \begin{pmatrix} p_1 & \cdots & p_p \end{pmatrix}.$
If $C$ is square and invertible, this is just
$P = C^{-1}B.$
If $C$ is not square, solve the systems directly. The solution exists and is unique because the $c_j$'s form a basis for the same subspace.
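The workflow above runs directly in NumPy. A hedged sketch (the two bases here are my own illustration, not the worked example's): solve $C p_j = b_j$ for all columns at once with a least-squares solve, then check the conversion formula on a vector of the plane.

```python
import numpy as np

# Two ordered bases for the same plane in R^3, as columns of B and C.
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
C = np.array([[1.0, 1.0],
              [1.0, -1.0],
              [2.0, 0.0]])

# Change-of-basis matrix P with B = C P: lstsq solves C p_j = b_j
# for every column b_j of B in one call.
P, *_ = np.linalg.lstsq(C, B, rcond=None)

# Conversion check: [x]_C = P [x]_B for a vector x in the plane.
x_B = np.array([2.0, 5.0])               # coordinates of x in the B basis
x = B @ x_B                              # the actual vector in R^3
x_C = P @ x_B                            # its coordinates in the C basis
print(np.allclose(C @ x_C, x))           # True: same vector, new coordinates
```

Because $C$ is $3 \times 2$, it has no inverse; the least-squares solve plays the role of the column-by-column systems, and the final check confirms that $P$ changed only the coordinates, not the vector.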
Common mistakes
Common mistake
Using the inverse direction by accident
If $B = CP$, then $P$ sends $\mathcal{B}$-coordinates to $\mathcal{C}$-coordinates: $[x]_{\mathcal{C}} = P\,[x]_{\mathcal{B}}$. The inverse $P^{-1}$ sends $\mathcal{C}$-coordinates back to $\mathcal{B}$-coordinates.
Common mistake
Forgetting that the bases are ordered
The ordered basis $(b_1, b_2)$ and the ordered basis $(b_2, b_1)$ give different coordinate vectors. A change-of-basis matrix compares ordered bases, not just unordered sets.
Common mistake
Trying to invert a non-square basis matrix for a proper subspace
If $V$ is a plane inside $\mathbb{R}^3$, a basis matrix has size $3 \times 2$, so it is not invertible as a square matrix. Solve column by column instead.
Quick checks
Quick check
If $\dim V = 4$, can five vectors in $V$ be linearly independent?
Use the replacement-theorem counting rule.
Solution
Answer: No. Any list of more than $\dim V$ vectors in $V$ is linearly dependent, so five vectors cannot be independent when $\dim V \le 4$.
Quick check
If $B = CP$, which coordinate vector is $P\,[x]_{\mathcal{B}}$ equal to?
Track the equality $x = B\,[x]_{\mathcal{B}} = C\,(P\,[x]_{\mathcal{B}})$.
Solution
Answer: $P\,[x]_{\mathcal{B}} = [x]_{\mathcal{C}}$, the coordinate vector of $x$ in the ordered basis $\mathcal{C}$.
Exercises
Quick check
Let $v_1, \dots, v_q$ be a basis for $V$, and let $u = c_1 v_1 + \cdots + c_q v_q$ be nonzero. Which old vector can be replaced immediately by $u$?
Look for a nonzero coefficient in the expression of $u$ using the old basis.
Solution
Guided solution: any $v_i$ whose coefficient $c_i$ is nonzero can be replaced. Solving the expansion for $v_i$ shows that $u$ together with the remaining old vectors still spans $V$, and the new list is again linearly independent.
Quick check
Use the change-of-basis matrix $P$ from the worked example to convert $[x]_{\mathcal{B}}$ into $[x]_{\mathcal{C}}$.
Multiply: $[x]_{\mathcal{C}} = P\,[x]_{\mathcal{B}}$.
Solution
Guided solution
Read this first
This note depends on 6.5 Basis and dimension and 6.4 Linear dependence and independence.