

$$x_6 = Ax_5 = \begin{bmatrix} 568 \\ 190 \end{bmatrix} = 190\begin{bmatrix} 2.99 \\ 1.00 \end{bmatrix}$$
With x = (2.99, 1) as the approximation of a dominant eigenvector of A, use the Rayleigh quotient to obtain an approximation of the dominant eigenvalue of A. First compute the product Ax.
SECTION 10.3 POWER METHOD FOR APPROXIMATING EIGENVALUES 589
$$Ax = \begin{bmatrix} 2 & -12 \\ 1 & -5 \end{bmatrix}\begin{bmatrix} 2.99 \\ 1.00 \end{bmatrix} = \begin{bmatrix} -6.02 \\ -2.01 \end{bmatrix}$$
Then, because
$$Ax \cdot x = (-6.02)(2.99) + (-2.01)(1) = -20.0$$
and
$$x \cdot x = (2.99)(2.99) + (1)(1) = 9.94,$$
you can compute the Rayleigh quotient to be
$$\lambda = \frac{Ax \cdot x}{x \cdot x} = \frac{-20.0}{9.94} \approx -2.01,$$
which is a good approximation of the dominant eigenvalue λ1 = -2.
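The Rayleigh quotient computation above is easy to reproduce numerically. Below is a minimal sketch in plain Python (no libraries); the function name rayleigh_quotient is my own choice, not from the text.

```python
def rayleigh_quotient(A, x):
    """Approximate an eigenvalue from an approximate eigenvector x
    via lambda ~= (Ax . x) / (x . x)."""
    Ax = [sum(a * xj for a, xj in zip(row, x)) for row in A]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    return dot(Ax, x) / dot(x, x)

# Matrix and power-method iterate from the example above
A = [[2.0, -12.0],
     [1.0,  -5.0]]
x = [2.99, 1.00]
print(round(rayleigh_quotient(A, x), 2))  # -2.01, close to the eigenvalue -2
```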
From Example 2 you can see that the power method tends to produce approximations with large entries. In practice it is best to scale down each approximation before proceeding to the next iteration. One way to accomplish this scaling is to determine the component of $Ax_i$ that has the largest absolute value and multiply the vector $Ax_i$ by the reciprocal of this component. The resulting vector will then have components whose absolute values are less than or equal to 1. (Other scaling techniques are possible. For examples, see Exercises 27 and 28.)
EXAMPLE 4 The Power Method with Scaling
Calculate seven iterations of the power method with scaling to approximate a dominant
eigenvector of the matrix
$$A = \begin{bmatrix} 1 & 2 & 0 \\ -2 & 1 & 2 \\ 1 & 3 & 1 \end{bmatrix}.$$
Use x0 = (1, 1, 1) as the initial approximation.
Solution One iteration of the power method produces
$$Ax_0 = \begin{bmatrix} 1 & 2 & 0 \\ -2 & 1 & 2 \\ 1 & 3 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \\ 1 \\ 5 \end{bmatrix},$$
and by scaling you obtain the approximation
$$x_1 = \frac{1}{5}\begin{bmatrix} 3 \\ 1 \\ 5 \end{bmatrix} = \begin{bmatrix} 0.60 \\ 0.20 \\ 1.00 \end{bmatrix}.$$
590 CHAPTER 10 NUMERICAL METHODS
A second iteration yields
$$Ax_1 = \begin{bmatrix} 1 & 2 & 0 \\ -2 & 1 & 2 \\ 1 & 3 & 1 \end{bmatrix}\begin{bmatrix} 0.60 \\ 0.20 \\ 1.00 \end{bmatrix} = \begin{bmatrix} 1.00 \\ 1.00 \\ 2.20 \end{bmatrix}$$
and
$$x_2 = \frac{1}{2.20}\begin{bmatrix} 1.00 \\ 1.00 \\ 2.20 \end{bmatrix} = \begin{bmatrix} 0.45 \\ 0.45 \\ 1.00 \end{bmatrix}.$$
Continuing this process, you obtain the sequence of approximations shown in Table 10.6.
TABLE 10.6
  x0     x1     x2     x3     x4     x5     x6     x7
1.00   0.60   0.45   0.48   0.51   0.50   0.50   0.50
1.00   0.20   0.45   0.55   0.51   0.49   0.50   0.50
1.00   1.00   1.00   1.00   1.00   1.00   1.00   1.00
From Table 10.6 you can approximate a dominant eigenvector of A to be
$$x = \begin{bmatrix} 0.50 \\ 0.50 \\ 1.00 \end{bmatrix}.$$
Using the Rayleigh quotient, you can approximate the dominant eigenvalue of A to be λ = 3. (For this example you can check that the approximations of x and λ are exact.)
REMARK: Note that the scaling factors used to obtain the vectors in Table 10.6,
  x1     x2     x3     x4     x5     x6     x7
5.00   2.20   2.82   3.13   3.02   2.99   3.00,
are approaching the dominant eigenvalue λ = 3.
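The scaled iteration of Example 4 can be sketched in a few lines of plain Python. This assumes the (2,1) entry of A is -2 (a sign apparently lost in extraction, but the only value consistent with the printed iterates); the function name is my own.

```python
def power_method_scaled(A, x0, iters):
    """Power method with scaling: divide each iterate by its
    largest-magnitude component; that factor estimates the eigenvalue."""
    x = x0[:]
    mu = None
    for _ in range(iters):
        Ax = [sum(a * xj for a, xj in zip(row, x)) for row in A]
        mu = max(Ax, key=abs)          # scaling factor
        x = [v / mu for v in Ax]
    return x, mu

A = [[1.0, 2.0, 0.0],
     [-2.0, 1.0, 2.0],
     [1.0, 3.0, 1.0]]
x, mu = power_method_scaled(A, [1.0, 1.0, 1.0], 7)
print([round(v, 2) for v in x], round(mu, 2))  # [0.5, 0.5, 1.0] 3.0
```

The successive scaling factors mu reproduce the sequence 5.00, 2.20, 2.82, . . . approaching 3.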
In Example 4 the power method with scaling converges to a dominant eigenvector. The following theorem states that a sufficient condition for convergence of the power method is that the matrix A be diagonalizable (and have a dominant eigenvalue).

Theorem 10.3 (Convergence of the Power Method)
If A is an n × n diagonalizable matrix with a dominant eigenvalue, then there exists a nonzero vector x0 such that the sequence of vectors given by
$$Ax_0,\ A^2x_0,\ A^3x_0,\ A^4x_0,\ \ldots,\ A^kx_0,\ \ldots$$
approaches a multiple of the dominant eigenvector of A.
Proof Because A is diagonalizable, you know from Theorem 7.5 that it has n linearly independent eigenvectors x1, x2, . . . , xn with corresponding eigenvalues λ1, λ2, . . . , λn. Assume that these eigenvalues are ordered so that λ1 is the dominant eigenvalue (with a corresponding eigenvector of x1). Because the n eigenvectors x1, x2, . . . , xn are linearly independent, they must form a basis for R^n. For the initial approximation x0, choose a nonzero vector such that the linear combination
$$x_0 = c_1x_1 + c_2x_2 + \cdots + c_nx_n$$
has a nonzero leading coefficient c1. (If c1 = 0, the power method may not converge, and a different x0 must be used as the initial approximation. See Exercises 21 and 22.) Now, multiplying both sides of this equation by A produces
$$Ax_0 = A(c_1x_1 + c_2x_2 + \cdots + c_nx_n) = c_1(Ax_1) + c_2(Ax_2) + \cdots + c_n(Ax_n) = c_1(\lambda_1 x_1) + c_2(\lambda_2 x_2) + \cdots + c_n(\lambda_n x_n).$$
Repeated multiplication of both sides of this equation by A produces
$$A^k x_0 = c_1\lambda_1^k x_1 + c_2\lambda_2^k x_2 + \cdots + c_n\lambda_n^k x_n,$$
which implies that
$$A^k x_0 = \lambda_1^k \left[ c_1 x_1 + c_2\left(\frac{\lambda_2}{\lambda_1}\right)^{\!k} x_2 + \cdots + c_n\left(\frac{\lambda_n}{\lambda_1}\right)^{\!k} x_n \right].$$
Now, from the original assumption that λ1 is larger in absolute value than the other eigenvalues, it follows that each of the fractions
$$\frac{\lambda_2}{\lambda_1},\ \frac{\lambda_3}{\lambda_1},\ \ldots,\ \frac{\lambda_n}{\lambda_1}$$
is less than 1 in absolute value. So each of the factors
$$\left(\frac{\lambda_2}{\lambda_1}\right)^{\!k},\ \left(\frac{\lambda_3}{\lambda_1}\right)^{\!k},\ \ldots,\ \left(\frac{\lambda_n}{\lambda_1}\right)^{\!k}$$
must approach 0 as k approaches infinity. This implies that the approximation
$$A^k x_0 \approx \lambda_1^k c_1 x_1, \qquad c_1 \neq 0,$$
improves as k increases. Because x1 is a dominant eigenvector, any scalar multiple of x1 is also a dominant eigenvector, showing that A^k x0 approaches a multiple of the dominant eigenvector of A.
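The decay of the (λi/λ1)^k terms can be checked numerically. For the 2 × 2 matrix used earlier in this section, the eigenvalues are -2 and -1 with eigenvectors (3, 1) and (4, 1), so x0 = (1, 1) = 3(3, 1) - 2(4, 1) has c1 = 3 ≠ 0, and dividing A^k x0 by λ1^k isolates c1 x1. A pure-Python sketch (helper name is mine):

```python
def mat_vec(A, x):
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

A = [[2.0, -12.0],
     [1.0,  -5.0]]    # eigenvalues -2 (dominant) and -1
x = [1.0, 1.0]        # c1 = 3 along the dominant eigenvector (3, 1)
for _ in range(20):   # compute A^20 x0
    x = mat_vec(A, x)
scaled = [v / (-2.0) ** 20 for v in x]
print([round(v, 4) for v in scaled])  # [9.0, 3.0], i.e. c1 * x1 = 3 * (3, 1)
```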
The proof of Theorem 10.3 provides some insight into the rate of convergence of the power method. That is, if the eigenvalues of A are ordered so that
$$|\lambda_1| > |\lambda_2| \ge |\lambda_3| \ge \cdots \ge |\lambda_n|,$$
then the power method will converge quickly if |λ2|/|λ1| is small, and slowly if |λ2|/|λ1| is close to 1. This principle is illustrated in Example 5.
EXAMPLE 5 The Rate of Convergence of the Power Method
(a) The matrix
$$A = \begin{bmatrix} 4 & 5 \\ 6 & 5 \end{bmatrix}$$
has eigenvalues of λ1 = 10 and λ2 = -1. So the ratio |λ2|/|λ1| is 0.1. For this matrix, only four iterations are required to obtain successive approximations that agree when rounded to three significant digits. (See Table 10.7.)
TABLE 10.7
   x0      x1      x2      x3      x4
1.000   0.818   0.835   0.833   0.833
1.000   1.000   1.000   1.000   1.000
(b) The matrix
$$A = \begin{bmatrix} -4 & 10 \\ 7 & 5 \end{bmatrix}$$
has eigenvalues of λ1 = 10 and λ2 = -9. For this matrix, the ratio |λ2|/|λ1| is 0.9, and the power method does not produce successive approximations that agree to three significant digits until sixty-eight iterations have been performed, as shown in Table 10.8.
TABLE 10.8
   x0      x1      x2    . . .     x66     x67     x68
1.000   0.500   0.941   . . .   0.715   0.714   0.714
1.000   1.000   1.000   . . .   1.000   1.000   1.000
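The contrast between parts (a) and (b) can be reproduced by counting scaled power-method steps until successive iterates agree closely. This is a sketch with a tolerance of my own choosing (roughly three-decimal agreement), so the exact counts may differ slightly from the text's four and sixty-eight:

```python
def iterations_to_converge(A, x0, tol=0.5e-3, max_iter=200):
    """Count scaled power-method steps until successive iterates
    agree componentwise within tol."""
    x = x0[:]
    for k in range(1, max_iter + 1):
        Ax = [sum(a * xj for a, xj in zip(row, x)) for row in A]
        mu = max(Ax, key=abs)
        nxt = [v / mu for v in Ax]
        if all(abs(a - b) < tol for a, b in zip(nxt, x)):
            return k
        x = nxt
    return max_iter

fast = [[4.0, 5.0], [6.0, 5.0]]    # |lambda2/lambda1| = 0.1
slow = [[-4.0, 10.0], [7.0, 5.0]]  # |lambda2/lambda1| = 0.9
print(iterations_to_converge(fast, [1.0, 1.0]))  # a handful of iterations
print(iterations_to_converge(slow, [1.0, 1.0]))  # dozens of iterations
```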
In this section you have seen the use of the power method to approximate the dominant eigenvalue of a matrix. This method can be modified to approximate other eigenvalues through use of a procedure called deflation. Moreover, the power method is only one of several techniques that can be used to approximate the eigenvalues of a matrix. Another popular method is called the QR algorithm. This is the method used in most computer programs and calculators for finding eigenvalues and eigenvectors. The algorithm uses the QR factorization of the matrix, as presented in Chapter 5. Discussions of the deflation method and the QR algorithm can be found in most texts on numerical methods.
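The QR algorithm mentioned above is described in detail in numerical methods texts; production implementations first reduce A to Hessenberg form and use shifts. As a rough illustration only, here is an unshifted QR iteration built on classical Gram–Schmidt, in pure Python with function names of my own choosing; for a symmetric example the diagonal converges to the eigenvalues.

```python
def qr_decompose(A):
    """Classical Gram-Schmidt QR factorization of a square matrix
    with linearly independent columns."""
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]
    Q, R = [], [[0.0] * n for _ in range(n)]
    for j, v in enumerate(cols):
        w = v[:]
        for i, q in enumerate(Q):
            R[i][j] = sum(qk * vk for qk, vk in zip(q, v))
            w = [wk - R[i][j] * qk for wk, qk in zip(w, q)]
        R[j][j] = sum(wk * wk for wk in w) ** 0.5
        Q.append([wk / R[j][j] for wk in w])
    # Q currently stores orthonormal columns; convert to a row-major matrix
    return [[Q[j][i] for j in range(n)] for i in range(n)], R

def qr_iterate(A, steps=50):
    """Unshifted QR algorithm: factor A = QR, replace A by RQ;
    the diagonal approaches the eigenvalues."""
    n = len(A)
    for _ in range(steps):
        Q, R = qr_decompose(A)
        A = [[sum(R[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return [A[i][i] for i in range(n)]

# Symmetric test matrix with known eigenvalues 3 and 1
print([round(v, 3) for v in qr_iterate([[2.0, 1.0], [1.0, 2.0]])])  # [3.0, 1.0]
```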
SECTION 10.3 EXERCISES

In Exercises 1–6, use the techniques presented in Chapter 7 to find the eigenvalues of the matrix A. If A has a dominant eigenvalue, find a corresponding dominant eigenvector.

1. A =
   1  0
   2  4

2. A =
   1  3
   3  0

3. A =
   1  1
   3  5

4. A =
   2  3
   4  5

5. A =
   2  3  1
   0  1  2
   0  0  3

6. A =
   5  0  0
   3  7  0
   4  2  3

In Exercises 19 and 20, the matrix A does not have a dominant eigenvalue. Apply the power method with scaling, starting with x0 = (1, 1, 1), and observe the results of the first four iterations.

19. A =
    1  1  0
    3  1  0
    0  0  2

20. A =
    1  2  2
    2  5  2
    6  6  3

21. Writing (a) Find the eigenvalues and corresponding eigenvectors of
    A =
    3
    2  1
    4