Solution of block-tri-diagonal systems

As seen in the two iteration schemes, both methods require that we solve systems of linear equations. Furthermore, we identify that the systems to be solved have the following form

\begin{eqnarray*}
\mbox{Power method:} & & \underline{\underline{B}} \, \underline{x} = \underline{k} \\
\mbox{Fractional iteration method:} & & \left( \underline{\underline{B}} - \tfrac{1}{\kappa} \, \underline{\underline{C}} \right) \underline{x} = \underline{k}
\end{eqnarray*}
With reference to Section 1.7.1 we observe that

- $\underline{\underline{B}}$ is a block-tri-diagonal matrix with $(G \times G)$ blocks.
- $\underline{\underline{C}}$ is a block-diagonal matrix with $(G \times G)$ blocks.
For an implementation we therefore need a procedure for solving block-tri-diagonal systems of equations (together with procedures for matrix computations).

Let us consider a block-tri-diagonal system of linear equations with N blocks given by

\begin{displaymath}
\left[
\begin{array}{ccccc}
\underline{\underline{B}}_1 & \underline{\underline{C}}_1 & & & \\
\underline{\underline{A}}_2 & \underline{\underline{B}}_2 & \underline{\underline{C}}_2 & & \\
 & \ddots & \ddots & \ddots & \\
 & & \underline{\underline{A}}_{N-1} & \underline{\underline{B}}_{N-1} & \underline{\underline{C}}_{N-1} \\
 & & & \underline{\underline{A}}_N & \underline{\underline{B}}_N
\end{array}
\right]
\left[
\begin{array}{c}
\underline{x}_1 \\ \underline{x}_2 \\ \vdots \\ \underline{x}_{N-1} \\ \underline{x}_N
\end{array}
\right]
=
\left[
\begin{array}{c}
\underline{k}_1 \\ \underline{k}_2 \\ \vdots \\ \underline{k}_{N-1} \\ \underline{k}_N
\end{array}
\right]
\end{displaymath} (2.10)

or in a more compact form

\begin{displaymath}
\underline{\underline{M}} \, \underline{x} = \underline{k}
\end{displaymath} (2.11)

In our situation we only consider the case where the sub-matrices $\underline{\underline{A}}_n$, $\underline{\underline{B}}_n$ and $\underline{\underline{C}}_n$ are square and of the same order G (the number of energy groups). Consequently, the column vectors $\underline{x}_n$ and $\underline{k}_n$ are also of order G.
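To fix ideas, such a system need not be stored as a full $(NG \times NG)$ matrix; the three sequences of $(G \times G)$ blocks suffice. Below is a minimal sketch in Python with NumPy of this block storage together with the corresponding matrix-vector product; the array layout and names are illustrative assumptions, not taken from the thesis implementation.

\begin{verbatim}
import numpy as np

def blocktri_matvec(A, B, C, x):
    """Apply the block-tri-diagonal matrix M of (2.11) to x.

    A, B, C: arrays of shape (N, G, G) holding the sub-, main- and
    super-diagonal blocks A_n, B_n, C_n (A[0] and C[N-1] are unused).
    x: array of shape (N, G) holding the vectors x_1, ..., x_N.
    """
    y = np.einsum('nij,nj->ni', B, x)                  # B_n x_n
    y[1:] += np.einsum('nij,nj->ni', A[1:], x[:-1])    # + A_n x_{n-1}
    y[:-1] += np.einsum('nij,nj->ni', C[:-1], x[1:])   # + C_n x_{n+1}
    return y
\end{verbatim}

Note that this layout stores only 3NG^2 block entries instead of the (NG)^2 entries of the full matrix.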

The system (2.11) can be solved in many ways; some of the direct methods are listed below:

- Gaussian elimination on the full coefficient matrix.
- General band Gaussian elimination (or factorization).
- Block LU factorization.
In addition, a number of iterative methods, which also exploit the sparsity of the coefficient matrix, can be used. The advantage of these methods is that they require less storage than the direct methods, but as far as the author knows they are in general slower than direct methods.

Of the three methods listed above, the first is discarded at once for efficiency reasons, both in regard to storage and computing time. This leaves us with the last two methods in the list. Of these two, the block factorization is the simpler to implement, especially if we want to be able to solve a general G-group problem. Furthermore, there is no indication that a band-solver should perform better than a block-solver.

In [8] it is demonstrated that a simple forward elimination-backward substitution is successful (neglecting rounding errors for the moment) if all leading principal minors of $\underline{\underline{M}}$ are non-zero. According to Theorem 1.4 this is in fact guaranteed in our case, provided $\kappa > \lambda_0$ when we use the fractional iteration method. We are therefore going to solve (2.11) directly (i.e., without iteration) by means of the simple forward elimination-backward substitution given below (see [8])


\begin{eqnarray*}
\parbox[t]{4.5cm}{1) Forward elimination:} & &
\underline{\underline{H}}_1 = \underline{\underline{B}}_1^{-1} \, \underline{\underline{C}}_1, \hspace{1.5cm}
\underline{g}_1 = \underline{\underline{B}}_1^{-1} \, \underline{k}_1 \\
& & \underline{\underline{H}}_n = \left[ \underline{\underline{B}}_n - \underline{\underline{A}}_n \underline{\underline{H}}_{n-1} \right]^{-1} \underline{\underline{C}}_n, \hspace{1.5cm} n=\{2,3,\ldots,(N-1)\} \\
& & \underline{g}_n = \left[ \underline{\underline{B}}_n - \underline{\underline{A}}_n \underline{\underline{H}}_{n-1} \right]^{-1} \left( \underline{k}_n - \underline{\underline{A}}_n \underline{g}_{n-1} \right), \hspace{1.5cm} n=\{2,3,\ldots,N\} \\
\parbox[t]{4.5cm}{2) Backward substitution:} & &
\underline{x}_N = \underline{g}_N \\
& & \underline{x}_n = \underline{g}_n - \underline{\underline{H}}_n \, \underline{x}_{n+1}, \hspace{1.5cm} n=\{(N-1),(N-2),\ldots,1\}
\end{eqnarray*} (2.12)

Going through the algorithm we observe that the only non-trivial computation involved is the inversion of matrices. This subject is treated in the next section.
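A direct transcription of the scheme (2.12) into code might look as follows; again a sketch in Python with NumPy, using the block storage introduced above. The name g for the auxiliary vectors is an assumption, and numpy.linalg.inv merely stands in for the inversion procedure developed in the next section.

\begin{verbatim}
import numpy as np

def solve_blocktri(A, B, C, k):
    """Forward elimination / backward substitution (2.12) for M x = k.

    A, B, C: (N, G, G) sub-, main- and super-diagonal blocks
    (A[0] and C[N-1] are unused); k: (N, G) right-hand side vectors.
    """
    N, G = k.shape
    H = np.zeros((N, G, G))
    g = np.zeros((N, G))

    # 1) Forward elimination.
    B1_inv = np.linalg.inv(B[0])
    H[0] = B1_inv @ C[0]
    g[0] = B1_inv @ k[0]
    for n in range(1, N):
        S_inv = np.linalg.inv(B[n] - A[n] @ H[n - 1])
        if n < N - 1:
            H[n] = S_inv @ C[n]                   # H_n, n = 2..(N-1)
        g[n] = S_inv @ (k[n] - A[n] @ g[n - 1])   # g_n, n = 2..N

    # 2) Backward substitution.
    x = np.zeros((N, G))
    x[N - 1] = g[N - 1]
    for n in range(N - 2, -1, -1):
        x[n] = g[n] - H[n] @ x[n + 1]
    return x
\end{verbatim}

For small N and G the transcription is easily verified against a dense solver applied to the assembled full matrix.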

Before we discuss the problem of inverting matrices we pause for a moment, because it may seem strange that we are not just solving the matrix equations for $\underline{\underline{H}}_n$ instead. Why not solve the matrix equation

\begin{displaymath}
\left[ \underline{\underline{B}}_n - \underline{\underline{A}}_n \underline{\underline{H}}_{n-1} \right] \underline{\underline{H}}_n = \underline{\underline{C}}_n
\end{displaymath} (2.13)

for the unknown matrix $\underline{\underline{H}}_n$ using an LU-decomposition, one may ask. Before making the decision we consider two factors:

- Efficiency in terms of the total consumption of simple arithmetic operations.
- Code readability.

In regard to code readability, it is clear that the method involving inversion produces the more readable code, since it is possible to maintain a one-to-one correspondence between the algorithm and the code.

In regard to efficiency, we observe that in the general case an inversion followed by a matrix-matrix multiplication is roughly 2 times more costly in terms of simple arithmetic operations than an LU-decomposition when solving (2.13). However, in our case the matrices $\underline{\underline{C}}_n$ are of diagonal type, and this can be exploited by the matrix inversion method. When we sum up the total consumption of simple arithmetic operations (SAO) for the two variants of block-solvers we end up with the following expressions


\begin{eqnarray*}
{\mbox{SAO}}_{\mbox{\scriptsize inversion}} &=& (N-1)\,(\cdots) + (M-1)NG(2G-1) \\
& & {}+ M(N-1)\left\{ G + G(2G-1) \right\}
\end{eqnarray*} (2.14)

and


\begin{eqnarray*}
{\mbox{SAO}}_{\mbox{\scriptsize LU-fact}} &=& (N-1)\,(\cdots) + (M-1)NG(2G-1) \\
& & {}+ M(N-1)\left\{ G + G(2G-1) \right\}
\end{eqnarray*} (2.15)

where N is the number of blocks (which corresponds to the number of internal grid points), G is the order of the block matrices (which corresponds to the number of neutron energy groups) and M is the number of times the block-solver is called.

In order to compare the efficiency of the two methods we define the efficiency gain, ${\mbox{EG}}$, as follows

\begin{displaymath}
{\mbox{EG}} \;\stackrel{\mbox{\scriptsize def}}{=}\; \frac{{\mbox{SAO}}_{\mbox{\scriptsize LU-fact}}}{{\mbox{SAO}}_{\mbox{\scriptsize inversion}}}
\end{displaymath} (2.16)

In Figure 2.1 the efficiency gain factor is plotted for N=100 with M=10 and M=20, which are reasonable values in our case. As we can see from the figure, the efficiency loss accompanying the choice of the inversion method is so small that the author believes the price of obtaining the most readable code is justified. However, the author must admit that it would have been more in the spirit of a true numerical analyst to store the LU-factorization instead of the inverted matrices!

[Figure 2.1: The efficiency gain factor for $N=100$ and $M=10$ (\mbox{---}) and $M=20$ ($\cdots$).]
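As a rough empirical counterpart to the operation counts above, one may time the two variants of a single elimination step. Below is a sketch in Python with NumPy and SciPy (assumed available); note how the diagonal structure of $\underline{\underline{C}}_n$ reduces the inversion variant to a column scaling of the inverse.

\begin{verbatim}
import time
import numpy as np
from scipy.linalg import lu_factor, lu_solve

G = 10
rng = np.random.default_rng(0)
S = rng.standard_normal((G, G)) + G * np.eye(G)  # stands in for B_n - A_n H_{n-1}
c = rng.standard_normal(G)                       # diagonal of C_n

# Variant 1: explicit inversion; S^{-1} C_n scales column j of S^{-1} by c[j].
t0 = time.perf_counter()
H_inv = np.linalg.inv(S) * c
t1 = time.perf_counter()

# Variant 2: LU factorization of S, then solve S H_n = C_n.
t2 = time.perf_counter()
lu, piv = lu_factor(S)
H_lu = lu_solve((lu, piv), np.diag(c))
t3 = time.perf_counter()

print("max deviation:", np.abs(H_inv - H_lu).max())
print("inversion: %.2e s   LU: %.2e s" % (t1 - t0, t3 - t2))
\end{verbatim}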

It is known [10, p. 171] that we cannot guarantee the stability of the scheme (2.12) with regard to rounding errors; not even if (2.13) is solved using full pivoting. In other words, there is a possibility of obtaining a "solution" completely destroyed by rounding errors. In any case it might be of use to calculate the residual vector, $\underline{r}$, belonging to the numerical solution $\hat{\underline{x}}$. The residual vector is defined by


\begin{displaymath}
\underline{r} \;\stackrel{\mbox{\scriptsize def}}{=}\; \underline{k} - \underline{\underline{M}} \, \hat{\underline{x}}
\end{displaymath} (2.17)

In order to investigate the accuracy of $\hat{\underline{x}}$, an implementation of the scheme (2.12) might return the infinity norm of the residual vector, $\Vert \underline{r} \Vert_{\infty}$.
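With the block storage used in the sketches above, such a residual check could be written as follows (again illustrative, not the thesis implementation):

\begin{verbatim}
import numpy as np

def residual_inf_norm(A, B, C, k, x_hat):
    """Infinity norm of r = k - M x_hat, cf. (2.17), with M stored as
    (N, G, G) block arrays A, B, C and k, x_hat of shape (N, G)."""
    r = k - np.einsum('nij,nj->ni', B, x_hat)          # k_n - B_n x_n
    r[1:] -= np.einsum('nij,nj->ni', A[1:], x_hat[:-1])   # - A_n x_{n-1}
    r[:-1] -= np.einsum('nij,nj->ni', C[:-1], x_hat[1:])  # - C_n x_{n+1}
    return np.abs(r).max()
\end{verbatim}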



