the Linear System of Equations (LSEs):

$$
(I)\quad
\begin{cases}
E_1: & a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1 \\
E_2: & a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = b_2 \\
& \ \vdots \\
E_n: & a_{n1}x_1 + a_{n2}x_2 + \dots + a_{nn}x_n = b_n
\end{cases}
$$
6.1.2 Operations of LSEs
Multiplied operation (scalar multiplication)
Equation Ei can be multiplied by any nonzero constant λ, denoted by (λEi)→Ei
Multiplied and added operation (multiply-and-add)
Equation Ej can be multiplied by any constant λ and added to Equation Ei, with the result used in place of Ei, denoted by (λEj+Ei)→Ei
Transposition (interchange)
Equation Ei and Ej can be transposed in order, denoted by Ei↔Ej
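The three operations can be illustrated on the rows of an augmented matrix [A | b]; a minimal Python sketch (the function names and the 2×2 example system are illustrative, not from the text):

```python
# Elementary operations on the rows of an augmented matrix [A | b].

def scale(rows, i, lam):
    """(lam * Ei) -> Ei, with lam nonzero."""
    rows[i] = [lam * v for v in rows[i]]

def add_multiple(rows, i, j, lam):
    """(lam * Ej + Ei) -> Ei."""
    rows[i] = [vi + lam * vj for vi, vj in zip(rows[i], rows[j])]

def swap(rows, i, j):
    """Ei <-> Ej."""
    rows[i], rows[j] = rows[j], rows[i]

rows = [[2.0, 1.0, 5.0],    # E1: 2*x1 +   x2 = 5
        [1.0, 3.0, 10.0]]   # E2:   x1 + 3*x2 = 10
add_multiple(rows, 1, 0, -0.5)  # eliminate x1 from E2
```

After the multiply-and-add step, `rows[1]` is `[0.0, 2.5, 7.5]`, i.e. x1 has been eliminated from E2.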
The key idea of the Gaussian Elimination Method is to transform the original matrix into an upper-triangular matrix, and then use the backward-substitution method to compute the solution.
6.2.2 Algorithm
INPUT: N-dimension, A(N,N),B(N)
OUTPUT: Solution x(1),...,x(N), or a message that the LSE has no unique solution.
Step 1: For k=1,2,...,N−1, do Steps 2-4.
Step 2: Let p be the smallest integer with k ≤ p ≤ N and A(p,k) ≠ 0. If no such p can be found, OUTPUT "no unique solution exists"; STOP.
Step 3: If p ≠ k, do the transposition Ep ↔ Ek.
Step 4: For i=k+1,...,N:
Set m(i,k) = A(i,k)/A(k,k);
Set B(i) = B(i) − m(i,k)·B(k);
For j=k+1,...,N, set A(i,j) = A(i,j) − m(i,k)·A(k,j).
Step 5: If A(N,N) ≠ 0, set x(N) = B(N)/A(N,N); else OUTPUT "no unique solution exists"; STOP.
Step 6: For i=N−1,N−2,...,1, set x(i) = [B(i) − Σ_{j=i+1}^{N} A(i,j)·x(j)] / A(i,i).
Step 7: OUTPUT the solution x(1),...,x(N).
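Steps 1-7 above can be sketched in Python (0-based indices; the function name is illustrative):

```python
def gaussian_elimination(A, b):
    """Solve Ax = b by Gaussian elimination with backward substitution.

    A is a list of N row-lists, b a list of length N; both are
    modified in place.  Raises ValueError when no unique solution
    exists, mirroring the OUTPUT messages in the algorithm above.
    """
    N = len(A)
    for k in range(N - 1):                          # Step 1
        # Step 2: smallest p >= k with a nonzero entry A(p,k)
        p = next((i for i in range(k, N) if A[i][k] != 0), None)
        if p is None:
            raise ValueError("no unique solution exists")
        if p != k:                                  # Step 3: Ep <-> Ek
            A[p], A[k] = A[k], A[p]
            b[p], b[k] = b[k], b[p]
        for i in range(k + 1, N):                   # Step 4: eliminate below pivot
            m = A[i][k] / A[k][k]                   # multiplier m(i,k)
            b[i] -= m * b[k]
            for j in range(k, N):
                A[i][j] -= m * A[k][j]
    if A[N - 1][N - 1] == 0:                        # Step 5
        raise ValueError("no unique solution exists")
    x = [0.0] * N
    x[N - 1] = b[N - 1] / A[N - 1][N - 1]
    for i in range(N - 2, -1, -1):                  # Step 6: backward substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, N))
        x[i] = (b[i] - s) / A[i][i]
    return x                                        # Step 7
```

For example, `gaussian_elimination([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])` returns `[1.0, 3.0]`.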
6.3 Pivoting Strategies
6.3.1 Background
From the process of the Gaussian Elimination Method, we find that if the pivot $a_{kk}^{(k-1)}$ is too small, the roundoff error becomes larger, since the pivot appears as a divisor in both
$$m_{i,k} = \frac{A(i,k)}{A(k,k)}, \qquad x(i) = \Big[B(i) - \sum_{j=i+1}^{N} A(i,j)\,x(j)\Big] \Big/ A(i,i).$$
Therefore, in order to reduce the roundoff error, we need to make the magnitude of $a_{kk}^{(k-1)}$ larger.
6.3.2 Maximal Column Pivoting Technique
This method chooses the pivot so that $|a_{kk}^{(k-1)}|$ is the maximal absolute value in its column among rows $k,\dots,n$ (using a row interchange).
6.3.3 Maximal Row Pivoting Technique
This method chooses the pivot so that $|a_{kk}^{(k-1)}|$ is the maximal absolute value in its row among columns $k,\dots,n$ (using a column interchange).
6.3.4 Partial Pivoting Technique
This method chooses the pivot so that $|a_{kk}^{(k-1)}|$ is the maximal absolute value in the remaining submatrix (rows and columns $k,\dots,n$).
6.3.5 Scaled Partial Pivoting Technique
First define a scale factor for each row:
$$s_i = \max_{1 \le j \le n} |a_{i,j}|.$$
Then choose the pivot row $p$ so that
$$\frac{|a_{p,k}|}{s_p} = \max_{k \le i \le n} \frac{|a_{i,k}|}{s_i}.$$
This method makes the scaled pivot $|a_{kk}^{(k-1)}|/s_k$ the maximal value in the remaining part of column $k$.
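The scaled choice of pivot row can be sketched in Python (0-based indices; the function name is illustrative, and the scale factors are taken from the rows as given):

```python
def scaled_pivot_row(A, k):
    """Return the row index p (p >= k) chosen by scaled partial pivoting.

    s_i is the largest magnitude in row i; the pivot row maximizes
    |a_ik| / s_i over the remaining rows i = k..n-1.
    """
    n = len(A)
    s = [max(abs(v) for v in row) for row in A]   # scale factors s_i
    return max(range(k, n), key=lambda i: abs(A[i][k]) / s[i])
```

For example, with `A = [[2.0, 100000.0], [1.0, 1.0]]` and `k = 0`, plain column pivoting would keep row 0 (since 2 > 1), but the scaled ratios are 2/100000 versus 1/1, so row 1 is chosen.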
Given a factorization A = LU, solving Ax = b takes two steps:
Solve Ly = b, determining y with the forward-substitution method.
Solve Ux = y, determining x with the backward-substitution method.
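The two substitution steps can be sketched in Python (the function name is illustrative; L and U are lists of row-lists with nonzero diagonals):

```python
def solve_lu(L, U, b):
    """Solve LUx = b: first Ly = b by forward substitution,
    then Ux = y by backward substitution."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):                  # forward substitution on L
        s = sum(L[i][j] * y[j] for j in range(i))
        y[i] = (b[i] - s) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):      # backward substitution on U
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x
```

Each step costs only O(n^2) operations, which is why factoring A once pays off when solving for many right-hand sides b.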
6.4.2 LU Factorization through Gaussian Elimination
Theorem
If Gaussian elimination can be performed on the linear system Ax=b without row interchanges, then the matrix A can be factored into the product of a lower-triangular L and an upper-triangular matrix U, A=LU
where
$$
L = \begin{pmatrix}
1 & 0 & 0 & \dots & 0 \\
m_{21} & 1 & 0 & \dots & 0 \\
m_{31} & m_{32} & 1 & \dots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
m_{n1} & m_{n2} & m_{n3} & \dots & 1
\end{pmatrix},
\qquad
U = \begin{pmatrix}
a_{11}^{(1)} & a_{12}^{(1)} & a_{13}^{(1)} & \dots & a_{1n}^{(1)} \\
0 & a_{22}^{(2)} & a_{23}^{(2)} & \dots & a_{2n}^{(2)} \\
0 & 0 & a_{33}^{(3)} & \dots & a_{3n}^{(3)} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \dots & a_{nn}^{(n)}
\end{pmatrix}
$$
Proof
$$m_{j,1} = \frac{a_{j,1}}{a_{1,1}}, \qquad
M_1 = \begin{pmatrix}
1 & 0 & 0 & \dots & 0 \\
-m_{21} & 1 & 0 & \dots & 0 \\
-m_{31} & 0 & 1 & \dots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
-m_{n1} & 0 & 0 & \dots & 1
\end{pmatrix}$$
Thus, $A^{(n)} = M_{n-1} M_{n-2} \cdots M_1 A$.
Let $U = A^{(n)}$; then
$$[M_1]^{-1} \cdots [M_{n-2}]^{-1} [M_{n-1}]^{-1} U = A,$$
where, for example,
$$[M_1]^{-1} = \begin{pmatrix}
1 & 0 & 0 & \dots & 0 \\
m_{21} & 1 & 0 & \dots & 0 \\
m_{31} & 0 & 1 & \dots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
m_{n1} & 0 & 0 & \dots & 1
\end{pmatrix},$$
so
$$L = [M_1]^{-1} \cdots [M_{n-2}]^{-1} [M_{n-1}]^{-1}.$$
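Collecting the multipliers $m_{i,k}$ below the diagonal of L while elimination produces U gives the factorization directly; a minimal Python sketch (function name illustrative; assumes nonzero pivots and no row interchanges, as in the theorem):

```python
def lu_factorize(A):
    """Factor A = LU using the multipliers from Gaussian elimination
    (no row interchanges).  Returns (L, U); A itself is not modified."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]       # multiplier m_{i,k}
            L[i][k] = m                 # stored below the diagonal of L
            for j in range(k, n):       # eliminate row i
                U[i][j] -= m * U[k][j]
    return L, U
```

For `A = [[2.0, 1.0], [1.0, 3.0]]` this yields `L = [[1, 0], [0.5, 1]]` and `U = [[2, 1], [0, 2.5]]`, and multiplying L by U recovers A.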
6.5 Strictly Diagonally Dominant Matrix
6.5.1 Definition
The n∗n matrix A is said to be strictly diagonally dominant when
$$|a_{ii}| > \sum_{j=1,\, j \ne i}^{n} |a_{ij}|$$
holds for each i=1,2,3,...,n.
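The definition translates directly into a row-by-row check; a minimal Python sketch (function name illustrative):

```python
def is_strictly_diagonally_dominant(A):
    """Check |a_ii| > sum of |a_ij| over j != i, for every row i."""
    return all(
        abs(row[i]) > sum(abs(v) for j, v in enumerate(row) if j != i)
        for i, row in enumerate(A)
    )
```

For example, `[[4, 1, 1], [1, 5, 2], [0, 1, 3]]` is strictly diagonally dominant (4 > 2, 5 > 3, 3 > 1), while `[[1, 2], [2, 1]]` is not.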
6.5.2 Property
A strictly diagonally dominant matrix A is nonsingular.
Moreover, in this case, Gaussian elimination can be performed on any linear system of the form Ax=b to obtain its unique solution without row or column interchanges, and the computations are stable with respect to the growth of roundoff errors.
Proof for First Property
A matrix is singular if and only if its determinant is zero, which in turn means the column vectors of the matrix are linearly dependent. Thus, if matrix A were singular, there would exist a nonzero vector x such that Ax = 0. Let i be an index with $|x_i| = \max_j |x_j| > 0$. The i-th equation of Ax = 0 gives $a_{ii} x_i = -\sum_{j \ne i} a_{ij} x_j$, so
$$|a_{ii}||x_i| \le \sum_{j \ne i} |a_{ij}||x_j| \le |x_i| \sum_{j \ne i} |a_{ij}| < |a_{ii}||x_i|,$$
a contradiction. Hence A is nonsingular.
6.6 Positive Definite Symmetric Matrix
6.6.1 Definition
A matrix A is positive definite if it is symmetric and if $x^T A x > 0$ for every n-dimensional column vector $x \ne 0$.
6.6.2 Property
If A is an n∗n positive definite matrix, then
A is nonsingular;
aii>0 for each i=1,2,...,n;
$\max_{1 \le k,j \le n} |a_{k,j}| \le \max_{1 \le i \le n} |a_{i,i}|$;
$(a_{ij})^2 < a_{ii} a_{jj}$ for each $i \ne j$.
6.6.3 Theorem
6.7 LLT Factorization
6.7.1 Definition
For an n∗n symmetric and positive definite matrix A with the form
$$A = \begin{pmatrix}
a_{11} & a_{12} & a_{13} & \dots & a_{1n} \\
a_{12} & a_{22} & a_{23} & \dots & a_{2n} \\
a_{13} & a_{23} & a_{33} & \dots & a_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a_{1n} & a_{2n} & a_{3n} & \dots & a_{nn}
\end{pmatrix},$$
where $A^T = A$, we can factorize this matrix into the form $LL^T = A$, where L is a lower-triangular matrix:
$$L = \begin{pmatrix}
l_{11} & 0 & 0 & \dots & 0 \\
l_{21} & l_{22} & 0 & \dots & 0 \\
l_{31} & l_{32} & l_{33} & \dots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
l_{n1} & l_{n2} & l_{n3} & \dots & l_{nn}
\end{pmatrix}$$
Thus, we need to determine the elements $l_{ij}$ for $1 \le j \le i \le n$ such that $A = LL^T$.
6.7.2 Choleski’s Algorithm
Calculate the value one row by one row
To factor the positive definite n∗n matrix A into LLT, where L is lower triangular:
INPUT: the dimension n; entries aij of A, for i∈[1,n] and j∈[1,i].
OUTPUT: the entries lij of L, for i∈[1,n] and j∈[1,i].
Step 1: Set $l_{11} = \sqrt{a_{11}}$.
Step 2: For j∈[2,n], set $l_{j1} = a_{j1}/l_{11}$.
Step 3: For i∈[2,n−1], do Steps 4 and 5.
Step 4: Set $l_{ii} = \left[a_{ii} - \sum_{j=1}^{i-1} l_{ij}^2\right]^{1/2}$.
Step 5: For j∈[i+1,n], set $l_{ji} = \left(a_{ji} - \sum_{k=1}^{i-1} l_{jk} l_{ik}\right) / l_{ii}$.
Step 6: Set $l_{nn} = \left[a_{nn} - \sum_{k=1}^{n-1} l_{nk}^2\right]^{1/2}$.
Step 7: OUTPUT lij for j∈[1,i] and i∈[1,n]. STOP!
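Steps 1-7 of Choleski's algorithm can be sketched in Python (0-based indices; the function name is illustrative, and A is assumed symmetric positive definite so every square root is real):

```python
import math

def cholesky(A):
    """Choleski's algorithm: return the lower-triangular L with A = L L^T
    for a symmetric positive definite A (list of row-lists)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    L[0][0] = math.sqrt(A[0][0])                        # Step 1
    for j in range(1, n):                               # Step 2
        L[j][0] = A[j][0] / L[0][0]
    for i in range(1, n):                               # Steps 3-6
        s = A[i][i] - sum(L[i][k] ** 2 for k in range(i))
        L[i][i] = math.sqrt(s)                          # Steps 4 and 6
        for j in range(i + 1, n):                       # Step 5
            L[j][i] = (A[j][i] - sum(L[j][k] * L[i][k]
                                     for k in range(i))) / L[i][i]
    return L                                            # Step 7
```

For `A = [[4.0, 2.0], [2.0, 5.0]]` this gives `L = [[2, 0], [1, 2]]`, and indeed $LL^T = \begin{pmatrix}4 & 2\\ 2 & 5\end{pmatrix}$.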
Example
6.8 LDLT Factorization
6.8.1 Definition
If A is a positive definite matrix, then it can be factored as $A = LDL^T$, where L is a unit lower-triangular matrix and D is a diagonal matrix with positive diagonal entries.
We can calculate the entries of L and D row by row.
6.8.2 Algorithm
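Computed row by row as described above, the factorization can be sketched in Python (function name illustrative; `d` holds the diagonal of D, and no square roots are needed, unlike the LLᵀ case):

```python
def ldlt(A):
    """LDL^T factorization of a symmetric positive definite A.
    Returns (L, d): L unit lower-triangular, d the diagonal of D."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for i in range(n):                      # one row of L and D at a time
        d[i] = A[i][i] - sum(L[i][k] ** 2 * d[k] for k in range(i))
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * L[i][k] * d[k]
                                     for k in range(i))) / d[i]
    return L, d
```

For `A = [[4.0, 2.0], [2.0, 5.0]]` this gives `L = [[1, 0], [0.5, 1]]` and `d = [4, 4]`, and $L\,\mathrm{diag}(d)\,L^T$ recovers A.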
6.9 Tri-diagonal Linear System
6.9.1 Definition
An n∗n matrix A is called a band matrix if integers p and q, with 1 < p, q < n, exist having the property that $a_{ij} = 0$ whenever $i + p \le j$ or $j + q \le i$. The bandwidth of a band matrix is defined as $w = p + q - 1$. A tri-diagonal matrix is the band matrix with $p = q = 2$, i.e., bandwidth $w = 3$.
6.9.2 LU Factorization
A=LU
In order to solve $Ax = LUx = b$, there are two steps:
Let $z = Ux$ and solve $Lz = b$ for z by forward substitution;
then solve $Ux = z$ for x by backward substitution.
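For a tri-diagonal A, the LU factorization and the two substitution steps reduce to a single O(n) sweep (often called the Thomas algorithm); a minimal Python sketch, assuming no pivoting is needed (e.g. A strictly diagonally dominant), with argument names of my own choosing:

```python
def solve_tridiagonal(a, d, c, b):
    """Solve a tri-diagonal system via its LU factorization.

    a: sub-diagonal (length n-1), d: main diagonal (length n),
    c: super-diagonal (length n-1), b: right-hand side (length n).
    """
    n = len(d)
    # Factor A = LU: u holds the diagonal of U, m the multipliers of L.
    u = d[:]
    m = [0.0] * (n - 1)
    for i in range(1, n):
        m[i - 1] = a[i - 1] / u[i - 1]
        u[i] = d[i] - m[i - 1] * c[i - 1]
    # Solve Lz = b (forward substitution).
    z = b[:]
    for i in range(1, n):
        z[i] -= m[i - 1] * z[i - 1]
    # Solve Ux = z (backward substitution).
    x = [0.0] * n
    x[n - 1] = z[n - 1] / u[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (z[i] - c[i] * x[i + 1]) / u[i]
    return x
```

Note that only the three diagonals are stored, one array each, which is exactly the one-dimensional storage scheme the remarks below describe.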
6.9.3 Remarks
Band matrices are usually sparse, so instead of a full two-dimensional array we can store their entries in one-dimensional arrays (one per diagonal).
Banded matrices appear in numerical methods for partial differential equations and are a common matrix form.