This lecture defines the concept of row equivalence and proves some propositions about row equivalent matrices that lie at the heart of many important results in linear algebra.

Definition Let $A$ and $B$ be two $K\times L$ matrices. We say that $B$ is row equivalent to $A$ if and only if there exist elementary matrices $E_1,\ldots,E_k$ such that $B=E_k E_{k-1}\cdots E_1 A$.


Remember that pre-multiplying $A$ by an elementary matrix is the same as performing an elementary row operation on $A$. Therefore, $B$ is row equivalent to $A$ if and only if $A$ can be transformed into $B$ by performing a sequence of elementary row operations on $A$.
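As an illustration of this equivalence, the following sketch (using NumPy; the matrix $A$ and the two row operations are arbitrary examples) builds elementary matrices and applies them by pre-multiplication:

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])

# Elementary matrix that swaps rows 0 and 1
E_swap = np.eye(3)
E_swap[[0, 1]] = E_swap[[1, 0]]

# Elementary matrix that adds 2 * (row 0) to row 2
E_add = np.eye(3)
E_add[2, 0] = 2.0

# Pre-multiplication applies the row operations to A;
# B is therefore row equivalent to A
B = E_add @ E_swap @ A
```

Here $B = E_2 E_1 A$, exactly as in the definition, with $E_1$ the row swap and $E_2$ the row addition.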

Equivalence relation

Row equivalence is an equivalence relation because it is:

symmetric: if $B$ is row equivalent to $A$, then $A$ is row equivalent to $B$;

transitive: if $C$ is row equivalent to $B$ and $B$ is row equivalent to $A$, then $C$ is row equivalent to $A$;

reflexive: $A$ is row equivalent to itself.


Proof

Suppose $B$ is row equivalent to $A$, so that $B=E_k\cdots E_1 A$. Since an elementary matrix is invertible and its inverse is an elementary matrix, we have that $A=E_1^{-1}\cdots E_k^{-1}B$ where $E_1^{-1},\ldots,E_k^{-1}$ are elementary matrices. Therefore, $A$ is row equivalent to $B$. This proves symmetry. If $B$ is row equivalent to $A$ and $C$ is row equivalent to $B$, then $B=E_k\cdots E_1 A$ and $C=F_j\cdots F_1 B$ where $E_1,\ldots,E_k$ and $F_1,\ldots,F_j$ are elementary matrices. Now, pre-multiply both sides of the first equation by $F_j\cdots F_1$: $F_j\cdots F_1 B=F_j\cdots F_1 E_k\cdots E_1 A$. The left-hand side is equal to $C$, so that $C=F_j\cdots F_1 E_k\cdots E_1 A$. Then, $C$ is row equivalent to $A$, that is, row equivalence is transitive. Finally, for any elementary matrix $E$, we can write $A=E^{-1}EA$. Since $E$ and $E^{-1}$ are elementary, this means that we can transform $A$ into itself by means of elementary row operations. As a consequence, row equivalence is reflexive.


Column correspondence property

The next proposition states an important property of row equivalence, known as the column correspondence property.


Proposition Let $A$ and $B$ be two $K\times L$ matrices. Let $B$ be row equivalent to $A$. Denote by $A_{\cdot j}$ and $B_{\cdot j}$ the $j$-th columns of $A$ and $B$ respectively. Then, $A_{\cdot j}=Ax$ for an $L\times 1$ vector $x$ if and only if $B_{\cdot j}=Bx$.


Proof

Since $B$ is row equivalent to $A$, we have that $B=RA$ where $R$ is a product of elementary matrices: $R=E_k\cdots E_1$. Furthermore, by the very definition of matrix product, $B_{\cdot j}=RA_{\cdot j}$. Thus, we can substitute $A=R^{-1}B$ and $A_{\cdot j}=R^{-1}B_{\cdot j}$ in the equation $A_{\cdot j}=Ax$ so as to obtain $R^{-1}B_{\cdot j}=R^{-1}Bx$. By pre-multiplying both sides by $R$, we get $B_{\cdot j}=Bx$. Thus, we have proved that $A_{\cdot j}=Ax$ implies $B_{\cdot j}=Bx$. The opposite implication ($B_{\cdot j}=Bx$ implies $A_{\cdot j}=Ax$) can be proved analogously.


In other words, when $A$ and $B$ are row equivalent, the $j$-th column of $A$ can be written as a linear combination of a given set of columns of $A$ itself, with coefficients taken from the vector $x$, if and only if the $j$-th column of $B$ is a linear combination of the corresponding set of columns of $B$, with coefficients taken from the same vector $x$.
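The column correspondence property can be checked numerically. In the sketch below (NumPy; the matrix, the coefficient vector and the row operations are arbitrary examples), the third column of $A$ is a combination of the first two with coefficients $x$, and the same relation holds among the columns of any row equivalent $B$:

```python
import numpy as np

# Third column of A is 1 * (first column) + 2 * (second column)
A = np.array([[1., 0., 1.],
              [0., 1., 2.],
              [1., 1., 3.]])
x = np.array([1., 2., 0.])

# Build a row equivalent B with two elementary row operations
E1 = np.eye(3); E1[2, 0] = -4.0   # add -4 * (row 0) to row 2
E2 = np.eye(3); E2[1, 1] = 5.0    # multiply row 1 by 5
B = E2 @ E1 @ A
```

Both $A_{\cdot 3}=Ax$ and $B_{\cdot 3}=Bx$ hold with the same vector $x$.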

A useful corollary of the previous proposition follows.


Proposition Let $A$ and $B$ be two row equivalent matrices. Then, a set of columns of $A$ is linearly independent if and only if the corresponding set of columns of $B$ is linearly independent.


Proof

The proof is by contradiction. Suppose that a set of columns of $A$ is linearly independent, but the corresponding columns of $B$ are linearly dependent. It follows that one of those columns, $B_{\cdot j}$, can be written as a linear combination of other columns: $B_{\cdot j}=Bx$ where $x_j=0$. In particular, there are some non-zero entries of $x$ corresponding to the columns in the set we are considering. But by the previous proposition, this implies that $A_{\cdot j}=Ax$. In other words, the set of columns of $A$ is not linearly independent, a contradiction. Therefore, if a set of columns of $A$ is linearly independent, then the corresponding columns of $B$ must be linearly independent. The opposite implication can be proved analogously.


Dominant columns

This section introduces the concept of dominant columns, which will be used below to study the properties of row equivalent matrices.


Definition Let $A$ be a $K\times L$ matrix. Denote its $j$-th column by $A_{\cdot j}$. We say that $A_{\cdot j}$ is a dominant column if and only if it cannot be written as a linear combination of the columns to its left.


A first simple result about dominant columns follows.


Proposition Two row equivalent matrices $A$ and $B$ have the same set of dominant columns, that is, the set of indices of the dominant columns of $A$ coincides with the set of indices of the dominant columns of $B$.


Proof

Suppose $A_{\cdot j}$ is a dominant column of $A$. Then, there is no vector $x=\begin{bmatrix}x_1 & \ldots & x_{j-1} & 0 & \ldots & 0\end{bmatrix}^{\top}$ such that $A_{\cdot j}=Ax$. By the column correspondence property above, this is possible if and only if there is no such vector satisfying $B_{\cdot j}=Bx$. As a consequence, $B_{\cdot j}$ cannot be written as a linear combination of the columns to its left. Hence it is dominant. We have just proved that $A_{\cdot j}$ is dominant only if $B_{\cdot j}$ is dominant. The proof of the opposite implication is analogous. This holds for any $j$. Therefore, the columns that are dominant in $A$ are dominant also in $B$ and vice versa.


For instance, if the dominant columns of $A$ are the second, third and fifth, then the dominant columns of $B$ are the second, third and fifth.
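Dominant columns can be detected numerically: column $j$ is dominant exactly when appending it increases the rank of the submatrix formed by the columns to its left. The sketch below (NumPy; `dominant_columns` is a hypothetical helper written for this illustration, and the matrix is an arbitrary example) confirms that a matrix and a row equivalent one share the same dominant columns:

```python
import numpy as np

def dominant_columns(M):
    """Indices j such that column j is not a linear
    combination of the columns to its left."""
    idx, prev_rank = [], 0
    for j in range(M.shape[1]):
        r = np.linalg.matrix_rank(M[:, :j + 1])
        if r > prev_rank:
            idx.append(j)
        prev_rank = r
    return idx

# Column 1 = 2 * column 0; column 3 = 3 * column 0 - 5 * column 2
A = np.array([[1., 2., 0., 3.],
              [2., 4., 1., 1.],
              [0., 0., 1., -5.]])

E = np.eye(3); E[1] += 2 * E[0]   # add 2 * (row 0) to row 1
B = E @ A                          # B is row equivalent to A
```

Only the first and third columns are dominant, in both matrices.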

Row equivalent matrices in reduced row echelon form

The propositions above allow us to prove some properties of matrices in reduced row echelon form.

Remember that a matrix is in reduced row echelon form (RREF) if and only if:

all its non-zero rows contain an element, called pivot, that is equal to 1 and has only zero entries in the quadrant below it and to its left;

each pivot is the only non-zero element in its column;

all the zero rows (if there are any) are below the non-zero rows.

Furthermore, the Gauss-Jordan elimination algorithm can be used to transform any matrix into an RREF matrix by elementary row operations. Therefore, any matrix is row equivalent to an RREF matrix.
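As a concrete illustration (using SymPy's `Matrix.rref`, which implements Gauss-Jordan elimination; the matrix is an arbitrary example):

```python
from sympy import Matrix

A = Matrix([[1, 2, 1],
            [2, 4, 0],
            [3, 6, 1]])

# rref() returns the reduced row echelon form of A together
# with the indices of its pivot columns
R, pivot_cols = A.rref()
```

The pivots of $R$ sit in the first and third columns, and the second column of $R$ is a multiple of the first.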

Remember that a basic column is a column containing a pivot, while a non-basic column does not contain any pivot.

The basic columns of an RREF matrix are vectors of the canonical basis, that is, they have one entry equal to 1 and all the other entries equal to zero. Furthermore, if an RREF matrix has $k$ basic columns, then those columns are the first $k$ vectors of the canonical basis, as stated by the following proposition.


Proposition Let $A$ be a matrix in reduced row echelon form. Then, the $j$-th basic column of $A$, counting from the left, is equal to the $j$-th vector of the canonical basis, that is, it has a 1 in position $j$ and all its other entries equal to 0.


Proof

By the definition of RREF matrix, the basic columns of $A$ are vectors of the canonical basis (they have one entry equal to 1 and all other entries equal to 0). Furthermore, all non-zero rows contain a pivot. Therefore, the $j$-th basic column contains the $j$-th pivot, which is located on the $j$-th row. In other words, the pivot, which is equal to 1, is the $j$-th entry of the $j$-th basic column.


If a column $A_{\cdot j}$ is non-basic, that is, it has no pivot, then it can be written as $A_{\cdot j}=\begin{bmatrix}a_1 & \ldots & a_k & 0 & \ldots & 0\end{bmatrix}^{\top}=a_1 e_1+\cdots+a_k e_k$ where $k$ is the number of basic columns to its left and $e_1,\ldots,e_k$ are the first $k$ vectors of the canonical basis (the entries below the $k$-th must be zero because the $l$-th pivot, with $l>k$, has only 0s to its left). Therefore, the non-basic column can be written as a linear combination of the columns to its left. For example, if the fifth column is non-basic and the first, third and fourth columns are basic (so that $k=3$), then $A_{\cdot 5}=a_1 A_{\cdot 1}+a_2 A_{\cdot 3}+a_3 A_{\cdot 4}$. Thus, if a column is non-basic, it is not linearly independent from the columns to its left. Hence, it is not a dominant column.


By combining the two simple propositions above, we get the following one.


Proposition If a matrix $A$ is in reduced row echelon form, then one of its columns is basic if and only if it is dominant, and it is non-basic if and only if it is not dominant.


Proof

We have just shown that a non-basic column can be written as a linear combination of the columns to its left, so it is not dominant. Conversely, the $j$-th basic column of $A$ is equal to $e_j$, the $j$-th vector of the canonical basis, while all the columns to its left (basic or non-basic) have a zero entry in position $j$. Hence no linear combination of the columns to its left can equal the $j$-th basic column, which is therefore dominant.


Proposition Let $A$ be a matrix. Then, there exists a unique matrix in reduced row echelon form that is row equivalent to $A$.


Proof

We have already explained that any matrix is row equivalent to a matrix in reduced row echelon form, which can be derived by using the Gauss-Jordan elimination algorithm. We need to prove uniqueness. Suppose that two matrices $R_{1}$ and $R_{2}$ are in reduced row echelon form and that they are both row equivalent to $A$. Since row equivalence is transitive and symmetric, $R_{1}$ and $R_{2}$ are row equivalent. Therefore, the positions of their dominant columns coincide. Equivalently, the positions of their basic columns coincide. But we have proved above that the $j$-th basic column of an RREF matrix, counting from the left, is equal to the $j$-th vector of the canonical basis. Therefore, not only do the basic columns of $R_{1}$ and $R_{2}$ have the same positions, but their corresponding entries coincide. The non-basic columns are linear combinations of the basic ones. By the column correspondence property above, the coefficients of these linear combinations are the same for $R_{1}$ and $R_{2}$. But the vectors being combined also coincide, because the basic columns of $R_{1}$ and $R_{2}$ coincide. As a consequence, each non-basic column of $R_{1}$ is equal to the corresponding non-basic column of $R_{2}$. Thus, $R_{1}=R_{2}$, which proves that the row equivalent RREF of a matrix is unique.


A consequence of this uniqueness result is that if two matrices are row equivalent, then they are row equivalent to the same RREF matrix.


Proposition Let $B$ be row equivalent to $A$. Then, $A$ and $B$ are row equivalent to the same RREF matrix $R$.


Proof

Denote by $R_A$ and $R_B$ the RREF matrices that are row equivalent to $A$ and $B$ respectively: $R_A=PA$ and $R_B=QB$, where $P$ and $Q$ are products of elementary matrices. Furthermore, $B$ is row equivalent to $A$, so that $B=EA$ where $E$ is a product of elementary matrices. We pre-multiply both sides of the last equation by $Q$, so as to get $QB=QEA$, that is, $R_B=QEA$. Since $QE$ is a product of elementary matrices, $R_B$ is an RREF matrix row equivalent to $A$. But the RREF row equivalent matrix is unique. Therefore, $R_A=R_B$.


Proposition Let $A$ be a $K\times K$ invertible matrix. Then, $A$ is row equivalent to the $K\times K$ identity matrix $I$.


Proof

By the results above, we know that $A$ is row equivalent to a unique RREF matrix $R$. Furthermore, $A$ can be transformed into $R$ by elementary row operations, that is, by pre-multiplying $A$ by an invertible matrix $P$ (equal to the product of the elementary matrices used to perform the row operations): $R=PA$. But we know that pre-multiplication by an invertible (i.e., full-rank) matrix does not alter the rank. Therefore, $R$ is full-rank. As a consequence, all the columns of $R$ are basic (there cannot be non-basic columns because the columns of a full-rank square matrix are all linearly independent from each other). But this means that the columns of $R$ are the $K$ vectors of the canonical basis of the space of $K$-dimensional vectors. In other words, they are the columns of the identity matrix. Hence, $R=I$.


Clearly, since the identity matrix $I$ is a matrix in reduced row echelon form, any invertible matrix is row equivalent to the unique RREF matrix $I$.
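To illustrate (SymPy; the invertible matrix is an arbitrary example with determinant 1):

```python
from sympy import Matrix, eye

A = Matrix([[2, 1],
            [5, 3]])   # det(A) = 1, so A is invertible

# The RREF of an invertible matrix is the identity,
# and every column is a pivot column
R, pivots = A.rref()
```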

An immediate consequence of the previous proposition follows.


Proposition Let $A$ be a $K\times K$ invertible matrix. Then, $A$ can be written as a product of elementary matrices: $A=E_k E_{k-1}\cdots E_1$ where $E_1,\ldots,E_k$ are elementary matrices.


Proof

By the previous proposition, $A$ is row equivalent to the identity matrix $I$. So, by the definition of row equivalent matrix, we have that $A=E_k E_{k-1}\cdots E_1 I=E_k E_{k-1}\cdots E_1$ where $E_1,\ldots,E_k$ are elementary matrices.


Proposition Let $R$ be an RREF matrix that is row equivalent to a matrix $A$. Then, $R$ and $A$ have the same rank. The rank is equal to 1) the number of non-zero rows of $R$ or, equivalently, 2) the number of basic columns of $R$.


Proof

First of all, remember that pre-multiplying a matrix $A$ by an invertible matrix does not change the rank of $A$. As a consequence, if $P$ (an invertible product of elementary matrices) transforms $A$ into a row equivalent RREF matrix $R=PA$, we have that $\operatorname{rank}(R)=\operatorname{rank}(PA)=\operatorname{rank}(A)$. The rank of $R$ is equal to the maximum number of linearly independent columns of $R$. The basic columns of $R$ are linearly independent, while the non-basic columns can be written as linear combinations of the basic ones. Therefore, the rank of $R$ is equal to the number of basic columns of $R$. Furthermore, each basic column contains a pivot and each non-zero row contains a pivot. As a consequence, the rank is also equal to the number of non-zero rows of $R$.



How to cite

Please cite as:

Taboga, Marco (2017). "Row equivalence", Lectures on matrix algebra. https://www.urbanbreathnyc.com/matrix-algebra/row-equivalence.