How to find the rank of a matrix. Easy ways?

The rank of a matrix is the maximum number of linearly independent rows (or, equivalently, the maximum number of linearly independent columns).

1 2 0 3
1 7 3 0
0 0 4 8
2 4 0 6

Start with row 1: see if row 2, 3, or 4 is a multiple of it. If so, cross that row out.

Go to row 2: see if row 3 or 4 is a multiple of it...

Finally, compare rows 3 and 4.

This is for 4 rows, but you can extend this to n rows.

The rank is 3 in the example above because we crossed out one row, leaving 3; the last row is twice the first. One caveat: checking for multiples of a single row is not enough in general, since a row can be a linear combination of several other rows without being a multiple of any one of them. The row-reduction method described below always works.
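As a quick numerical sanity check, the rank of the example matrix can be computed with NumPy (assuming it is available). The library computes rank from singular values rather than by crossing out rows, but the answer agrees with the hand method:

```python
import numpy as np

# The example matrix from above; the last row is 2 times the first.
A = np.array([[1, 2, 0, 3],
              [1, 7, 3, 0],
              [0, 0, 4, 8],
              [2, 4, 0, 6]])

print(np.linalg.matrix_rank(A))  # 3
```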

The Rank of a Matrix

The maximum number of linearly independent rows in a matrix A is called the row rank of A, and the maximum number of linearly independent columns in A is called the column rank of A. If A is an m by n matrix, that is, if A has m rows and n columns, then it is obvious that

row rank of A ≤ m and column rank of A ≤ n. (*)

What is not so obvious, however, is that for any matrix A,

• the row rank of A = the column rank of A

Because of this fact, there is no reason to distinguish between row rank and column rank; the common value is simply called the rank of the matrix. Therefore, if A is m x n, it follows from the inequalities in (*) that

rank A ≤ min(m, n), (**)

where min(m, n) denotes the smaller of the two numbers m and n (or their common value if m = n). For example, the rank of a 3 x 5 matrix can be no more than 3, and the rank of a 4 x 2 matrix can be no more than 2. A 3 x 5 matrix can be thought of as composed of three 5-vectors (the rows) or five 3-vectors (the columns). Although three 5-vectors could be linearly independent, it is not possible to have five 3-vectors that are independent: any collection of more than three 3-vectors is automatically dependent. Thus, the column rank—and therefore the rank—of such a matrix can be no greater than 3. So, if A is a 3 x 5 matrix, this argument shows that

rank A ≤ 3,

in accord with (**).

The process by which the rank of a matrix is determined can be illustrated by the following example. Suppose A is the 4 x 4 matrix

The fact that the vectors r3 and r4 can be written as linear combinations of the other two (r1 and r2, which are independent) means that the maximum number of independent rows is 2. Thus, the row rank—and therefore the rank—of this matrix is 2.

The equations in (***) can be rewritten as follows:

The first equation here implies that if −2 times the first row is added to the third, and then the second row is added to the (new) third row, the third row becomes a row of zeros. The second equation says that similar operations performed on the fourth row produce a row of zeros there as well. If, after these operations are completed, −3 times the first row is added to the second row (to clear out all entries below the entry a11 = 1 in the first column), these elementary row operations reduce the original matrix A to the echelon form

The fact that there are exactly 2 nonzero rows in the reduced form of the matrix indicates that the maximum number of linearly independent rows is 2; hence, rank A = 2, in agreement with the conclusion above. In general, then, to compute the rank of a matrix, perform elementary row operations until the matrix is left in echelon form; the number of nonzero rows remaining in the reduced matrix is the rank. [Note: Since column rank = row rank, only two of the four columns in A (c1, c2, c3, and c4) are linearly independent. Show that this is indeed the case by verifying the relations (and checking that c1 and c3 are independent). The reduced form of A makes these relations especially easy to see.]
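The elimination procedure just described can be sketched in code. This is a minimal illustration for small examples, not a numerically robust routine (library functions use the SVD instead); the matrix below is a hypothetical rank-2 example built so that r3 = 2r1 − r2 and r4 = r1 + r2, mirroring the situation in the text.

```python
import numpy as np

def rank_by_elimination(M, tol=1e-10):
    """Reduce M to row echelon form and count the nonzero rows."""
    A = np.array(M, dtype=float)
    rows, cols = A.shape
    pivot_row = 0
    for col in range(cols):
        # Look for the largest usable pivot at or below the current pivot row.
        below = np.abs(A[pivot_row:, col])
        if below.max() <= tol:
            continue  # no pivot in this column
        swap = pivot_row + int(np.argmax(below))
        A[[pivot_row, swap]] = A[[swap, pivot_row]]
        # Eliminate every entry below the pivot.
        for r in range(pivot_row + 1, rows):
            A[r] -= (A[r, col] / A[pivot_row, col]) * A[pivot_row]
        pivot_row += 1
        if pivot_row == rows:
            break
    return pivot_row  # number of pivots = number of nonzero rows = rank

r1 = np.array([1.0, 2.0, 0.0, 3.0])
r2 = np.array([3.0, 1.0, 1.0, 0.0])
M = np.stack([r1, r2, 2 * r1 - r2, r1 + r2])
print(rank_by_elimination(M))  # 2
```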

Example 1: Find the rank of the matrix

First, because the matrix is 4 x 3, its rank can be no greater than 3. Therefore, at least one of the four rows will become a row of zeros. Perform the following row operations:

Since there are 3 nonzero rows remaining in this echelon form of B, rank B = 3.

Example 2: Determine the rank of the 4 by 4 checkerboard matrix

Since r2 = r4 = −r1 and r3 = r1, all rows but the first vanish upon row-reduction:

Since only 1 nonzero row remains, rank C = 1.
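For a checkerboard of alternating +1 and −1 entries (one matrix consistent with the relations r2 = r4 = −r1 and r3 = r1 quoted above), the rank-1 conclusion is easy to confirm numerically:

```python
import numpy as np

# A 4 x 4 +1/-1 checkerboard: every row is +/- the first row.
C = np.array([[ 1, -1,  1, -1],
              [-1,  1, -1,  1],
              [ 1, -1,  1, -1],
              [-1,  1, -1,  1]])

print(np.linalg.matrix_rank(C))  # 1
```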

Eigenvalues and Eigenvectors: An Introduction

The eigenvalue problem is a problem of considerable theoretical interest and wide-ranging application. For example, this problem is crucial in solving systems of differential equations, analyzing population growth models, and calculating powers of matrices (in order to define the exponential matrix). Other areas such as physics, sociology, biology, economics, and statistics have focused considerable attention on eigenvalues and eigenvectors, their applications, and their computation. Before we give the formal definition, let us introduce these concepts with an example.

Example. Consider the matrix

Consider the three column matrices

We have

In other words, we have

Next consider the matrix P whose columns are C1, C2, and C3, i.e.,

We have det(P) = 84, so this matrix is invertible. Easy calculations give

Next we evaluate the matrix P⁻¹AP. We leave the details to the reader to check that we have

In other words, we have

Using matrix multiplication, we obtain

which implies that A is similar to a diagonal matrix D = P⁻¹AP. In particular, we have

A^n = P D^n P⁻¹

for n = 1, 2, 3, …. Note that it would be almost impossible to find A⁷⁵ directly from the original form of A.

This example is so rich in conclusions that many questions naturally impose themselves. For example, given a square matrix A, how do we find column matrices that behave like the ones above? In other words, how do we find the column matrices that produce an invertible matrix P such that P⁻¹AP is a diagonal matrix?
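Since the original 3 x 3 matrix is not reproduced here, the power-via-diagonalization trick can be illustrated with a hypothetical 2 x 2 matrix; np.linalg.eig returns the eigenvalues together with a matrix P whose columns are eigenvectors:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])  # hypothetical diagonalizable example

eigenvalues, P = np.linalg.eig(A)
D = np.diag(eigenvalues)

# P^-1 A P is diagonal...
assert np.allclose(np.linalg.inv(P) @ A @ P, D)

# ...so high powers are cheap: A^75 = P D^75 P^-1, where D^75
# just raises each diagonal entry to the 75th power.
A75 = P @ np.diag(eigenvalues ** 75) @ np.linalg.inv(P)
assert np.allclose(A75, np.linalg.matrix_power(A, 75))
```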

From now on, we will call column matrices vectors. So the above column matrices C1, C2, and C3 are now vectors. We have the following definition.

Definition. Let A be a square matrix. A non-zero vector C is called an eigenvector of A if and only if there exists a number λ (real or complex) such that

AC = λC.

If such a number λ exists, it is called an eigenvalue of A, and C is called an eigenvector associated with the eigenvalue λ.

Remark. The eigenvector C must be non-zero, since we have

A·0 = 0 = λ·0

for any number λ.

Example. Consider the matrix

We have seen that

where

So C1 is an eigenvector of A associated with the eigenvalue 0, C2 is an eigenvector of A associated with the eigenvalue −4, and C3 is an eigenvector of A associated with the eigenvalue 3.

Computation of Eigenvalues

For a square matrix A of order n, the number λ is an eigenvalue if and only if there exists a non-zero vector C such that

AC = λC.

Using the matrix multiplication properties, we obtain

(A − λIn)C = 0.

This is a homogeneous linear system whose coefficient matrix is A − λIn. Such a system has only the zero solution if and only if the coefficient matrix is invertible, i.e. det(A − λIn) ≠ 0. Since C is required to be a non-zero solution, the coefficient matrix cannot be invertible, so we must have

det(A − λIn) = 0.
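The determinant condition is easy to check numerically: every eigenvalue returned by a library routine should make det(A − λIn) vanish up to rounding. A hypothetical 2 x 2 example:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 2.0]])  # hypothetical example with eigenvalues -1 and 4

for lam in np.linalg.eigvals(A):
    # det(A - lam*I) is (numerically) zero at each eigenvalue
    assert abs(np.linalg.det(A - lam * np.eye(2))) < 1e-9
```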

Example. Consider the matrix

The equation det(A − λI2) = 0 translates into

which is equivalent to the quadratic equation

Solving this equation leads to

In other words, the matrix A has only two eigenvalues.

In general, for a square matrix A of order n, the equation

det(A − λIn) = 0

will give the eigenvalues of A. This equation is called the characteristic equation, and the polynomial det(A − λIn) the characteristic polynomial, of A. It is a polynomial in λ of degree n, so this equation has at most n roots. Hence a square matrix A of order n has at most n eigenvalues.

Example. Consider the diagonal matrix D with diagonal entries a, b, c, and d.

Its characteristic polynomial is

det(D − λI4) = (a − λ)(b − λ)(c − λ)(d − λ).

So the eigenvalues of D are a, b, c, and d, i.e. the entries on the diagonal.

This result is valid for any diagonal matrix of any size. So depending on the values you have on the diagonal, you may have one eigenvalue, two eigenvalues, or more. Anything is possible.
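A quick check that the eigenvalues of a diagonal matrix are exactly its diagonal entries (a repeated entry gives a repeated eigenvalue, so here there are only three distinct values):

```python
import numpy as np

D = np.diag([5.0, 5.0, 7.0, 9.0])  # repeated 5: only three distinct eigenvalues
vals = np.sort(np.linalg.eigvals(D))
assert np.allclose(vals, [5.0, 5.0, 7.0, 9.0])
```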

Remark. It is quite amazing to see that any square matrix A has the same eigenvalues as its transpose AT, because

det(AT − λIn) = det((A − λIn)T) = det(A − λIn).

For any square matrix A of order 2, say

A =
a b
c d

the characteristic polynomial is given by the equation

det(A − λI2) = λ² − (a + d)λ + (ad − bc).

The number (a + d) is called the trace of A (denoted tr(A)), and the number (ad − bc) is clearly the determinant of A. So the characteristic polynomial of A can be rewritten as

λ² − tr(A)λ + det(A).
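For a 2 x 2 matrix, the roots of λ² − tr(A)λ + det(A) should be exactly the eigenvalues; a hypothetical example:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 2.0]])  # tr = 3, det = -4, so the char. poly. is x^2 - 3x - 4

# Roots of the characteristic polynomial...
roots = np.roots([1.0, -np.trace(A), np.linalg.det(A)])
# ...match the eigenvalues computed directly.
assert np.allclose(np.sort(roots), np.sort(np.linalg.eigvals(A)))
```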

Let us evaluate the matrix

B = A² − tr(A) A + det(A) I2.

We leave the details to the reader to check that B = 0, the 2 x 2 zero matrix. In other words, we have

A² − tr(A) A + det(A) I2 = 0.

This equation is known as the Cayley-Hamilton theorem. It is true for any square matrix A of any order, i.e.

p(A) = 0,

where p(λ) is the characteristic polynomial of A.
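The 2 x 2 case of the Cayley-Hamilton theorem can be verified directly for the same hypothetical matrix used above:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 2.0]])  # hypothetical example

# Substitute A into its own characteristic polynomial:
B = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
assert np.allclose(B, np.zeros((2, 2)))
```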

Here are some properties of the eigenvalues of a matrix.

Theorem. Let A be a square matrix of order n. If λ is an eigenvalue of A, then:

1. λ^m is an eigenvalue of A^m, for m = 1, 2, 3, ….

2. If A is invertible, then 1/λ is an eigenvalue of A⁻¹.

3. A is not invertible if and only if λ = 0 is an eigenvalue of A.

4. If α is any number, then λ + α is an eigenvalue of A + αIn.

5. If A and B are similar, then they have the same characteristic polynomial (which implies they also have the same eigenvalues).
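These properties are easy to spot-check numerically; the sketch below assumes NumPy and a conveniently invertible hypothetical 2 x 2 matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 2.0]])  # hypothetical invertible example (eigenvalues -1, 4)

def eig(M):
    """Eigenvalues of M, sorted so sets of eigenvalues can be compared."""
    return np.sort(np.linalg.eigvals(M))

# 1. lambda^m is an eigenvalue of A^m.
assert np.allclose(eig(np.linalg.matrix_power(A, 3)), np.sort(eig(A) ** 3))
# 2. A is invertible, so 1/lambda is an eigenvalue of A^-1.
assert np.allclose(eig(np.linalg.inv(A)), np.sort(1.0 / eig(A)))
# 3. A is invertible, so 0 is not an eigenvalue.
assert not np.any(np.isclose(eig(A), 0.0))
# 4. lambda + alpha is an eigenvalue of A + alpha*I.
alpha = 2.5
assert np.allclose(eig(A + alpha * np.eye(2)), eig(A) + alpha)
# 5. Similar matrices have the same eigenvalues.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])  # any invertible matrix
assert np.allclose(eig(np.linalg.inv(P) @ A @ P), eig(A))
```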

The next natural question deals with the eigenvectors. On the next page, we will discuss the problem of finding eigenvectors.
