
The system (2.14) can be solved in many ways; some of the direct methods are listed below:

1. ordinary Gaussian elimination, treating the coefficient matrix as a full matrix,
2. a band solver, exploiting the banded structure of the matrix,
3. block factorization, exploiting the block structure of the matrix.
Of the three methods listed above, the first is discarded at once for efficiency reasons, with regard to both storage and computing time. This leaves us with the last two methods in the list. Of these two, block factorization is the simpler to implement, especially if we want to be able to solve a general G-group problem. Furthermore, there is no indication that a band solver should perform better than a block solver.
In [8] it is demonstrated that a simple forward elimination-backward substitution is successful (neglecting rounding errors for the moment) if all the minors of the coefficient matrix are nonsingular. According to Theorem 1.4 this is in fact guaranteed in our case, provided we use the fractional iteration method. We are therefore going to solve (2.14) directly (i.e., without iteration) by means of the simple forward elimination-backward substitution given below (see [8]).

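As an illustration, the forward elimination-backward substitution for a block tridiagonal system can be sketched as follows. This is a minimal sketch of the standard block Thomas algorithm; the variable names and the use of NumPy are our own and are not taken from [8].

```python
import numpy as np

def block_thomas(A, B, C, d):
    """Solve a block tridiagonal system by forward elimination and
    backward substitution (block Thomas algorithm).

    Row i of the system reads  A[i] x[i-1] + B[i] x[i] + C[i] x[i+1] = d[i],
    where B[i] are the diagonal blocks, A[i] the sub-diagonal blocks
    (A[0] unused), C[i] the super-diagonal blocks (C[-1] unused),
    and d[i] the right-hand-side block vectors.
    """
    n = len(B)
    # Work on copies so the caller's data is not overwritten.
    D = [np.array(Bi, dtype=float) for Bi in B]
    y = [np.array(di, dtype=float) for di in d]
    # Forward elimination: remove the sub-diagonal blocks.
    for i in range(1, n):
        L = A[i] @ np.linalg.inv(D[i - 1])   # multiplier block
        D[i] = D[i] - L @ C[i - 1]
        y[i] = y[i] - L @ y[i - 1]
    # Backward substitution: solve for the block unknowns.
    x = [None] * n
    x[n - 1] = np.linalg.solve(D[n - 1], y[n - 1])
    for i in range(n - 2, -1, -1):
        x[i] = np.linalg.solve(D[i], y[i] - C[i] @ x[i + 1])
    return x
```

Note that the only nontrivial operations are the block inversions (or, equivalently, block solves) in the elimination loop, which is exactly the observation made in the text.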
Going through the algorithm we observe that the only nontrivial calculation that occurs is the inversion of matrices. This subject is treated in the next section.
Before we discuss the problem of inverting matrices we pause, because it may seem strange that we do not simply solve the corresponding matrix equations instead. Why not solve the matrix equation (2.17) directly, without forming any inverses?
With regard to code readability, it is clear that the method involving inversion produces the more readable code, since it makes it possible to maintain a one-to-one correspondence between the algorithm and the code.
With regard to efficiency, we observe that in the general case an inversion followed by a matrix-matrix multiplication is roughly two times more costly, in terms of simple arithmetic operations, than an LU decomposition when solving (2.17). However, in our case the matrices are of diagonal type, and this can be exploited by the matrix inversion method. When we sum up the total consumption of simple arithmetic operations (SAO) for the two variants of block solvers, we end up with the following expressions:


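The relative cost of the two approaches can be made concrete with the standard leading-order operation counts for dense n-by-n matrices. This is a minimal sketch of our own: the functions below use textbook leading terms, not the SAO expressions of the text, and the precise factor depends on how operations are tallied and on the number of right-hand sides.

```python
def lu_solve_ops(n, m=1):
    """Leading-order operation count: LU decomposition of an n-by-n
    matrix (about (2/3) n^3) plus triangular solves for m
    right-hand sides (about 2 n^2 each)."""
    return (2 / 3) * n**3 + 2 * m * n**2

def invert_multiply_ops(n, m=1):
    """Leading-order operation count: inversion of an n-by-n matrix
    (about 2 n^3) plus multiplication of the inverse onto m
    right-hand sides (about 2 n^2 each)."""
    return 2 * n**3 + 2 * m * n**2

n = 100
print("LU route:       ", lu_solve_ops(n))
print("inversion route:", invert_multiply_ops(n))
print("cost ratio:      %.2f" % (invert_multiply_ops(n) / lu_solve_ops(n)))
```

With these counts the inversion route is always the more expensive one for a general dense matrix; the point of the text is that special structure in the blocks can shift the balance back in favor of explicit inversion.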
In order to compare the efficiency of the two methods we define the efficiency gain as follows:

(2.13)
It is known [10, p. 171] that we cannot guarantee the stability of the scheme (2.16) with regard to rounding errors, not even if (2.17) is solved using full pivoting. In other words, there is a possibility of obtaining a "solution" completely destroyed by rounding errors. In any case it may be of use to calculate the residual vector belonging to the numerical solution. The residual vector is defined by

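The residual calculation itself is straightforward. A minimal sketch, with names of our own choosing, for a system M x = b:

```python
import numpy as np

def residual(M, x, b):
    """Residual vector r = b - M x of a computed solution x.

    A small residual norm does not by itself prove that x is accurate
    (the matrix may be ill-conditioned), but a large residual norm is
    a sure sign that rounding errors have destroyed the solution.
    """
    return b - M @ x

# Hypothetical illustration: solve a small system and check its residual.
M = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.linalg.solve(M, b)
print("residual norm:", np.linalg.norm(residual(M, x, b)))
```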