Systems with diagonal dominance. The diagonal dominance condition

Definition.

A system is called a system with diagonal dominance over the rows if the elements of its matrix satisfy the inequalities

$|a_{ii}| > \sum_{j \ne i} |a_{ij}|, \qquad i = 1, 2, \dots, n.$

These inequalities mean that in each row of the matrix the diagonal element is dominant: its modulus is greater than the sum of the moduli of all other elements of the same row.
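For illustration, the row diagonal dominance condition is easy to verify numerically. The following is a minimal sketch assuming NumPy is available; the function name is ours, not part of the original text.

```python
import numpy as np

def is_row_diagonally_dominant(A, strict=True):
    """Check the row dominance condition |a_ii| > sum_{j != i} |a_ij|."""
    A = np.asarray(A, dtype=float)
    diag = np.abs(np.diag(A))
    off_diag_sums = np.sum(np.abs(A), axis=1) - diag
    return bool(np.all(diag > off_diag_sums)) if strict else bool(np.all(diag >= off_diag_sums))

# A small strictly diagonally dominant example
A = np.array([[4.0, 1.0, -1.0],
              [2.0, 5.0,  1.0],
              [0.0, 1.0,  3.0]])
print(is_row_diagonally_dominant(A))  # True
```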

Theorem

A system with diagonal dominance is always solvable, and its solution is unique.

Consider the corresponding homogeneous system:

$\sum_{j=1}^{n} a_{ij} x_j = 0, \qquad i = 1, 2, \dots, n.$

Suppose it has a nontrivial solution $x \ne 0$. Let the component of this solution with the largest modulus correspond to the index $m$, i.e.

$|x_m| = \max_{1 \le j \le n} |x_j|, \qquad |x_m| > 0.$

Let us write the $m$-th equation of the system in the form

$a_{mm} x_m = -\sum_{j \ne m} a_{mj} x_j$

and take the modulus of both sides of this equality. As a result, we get:

$|a_{mm}|\,|x_m| \le \sum_{j \ne m} |a_{mj}|\,|x_j| \le |x_m| \sum_{j \ne m} |a_{mj}|.$

Dividing this inequality by the factor $|x_m|$, which, by assumption, is not equal to zero, we obtain $|a_{mm}| \le \sum_{j \ne m} |a_{mj}|$, which contradicts the inequality expressing diagonal dominance. The resulting contradiction allows us to state three facts in succession: the homogeneous system has only the trivial solution; the matrix of the system is nonsingular; the original system is solvable for any right-hand side, and its solution is unique. The last of these means that the proof of the theorem is complete.

      1. Systems with a tridiagonal matrix. Sweep method.

When solving many problems, one has to deal with systems of linear equations of the form:

$a_i y_{i-1} + c_i y_i + b_i y_{i+1} = f_i, \qquad i = 1, 2, \dots, n-1,$

$y_0 = \mu_1, \qquad y_n = \mu_2,$

where the coefficients $a_i, b_i, c_i$ and the right-hand sides $f_i$ are known, together with the numbers $\mu_1$ and $\mu_2$. The additional relations $y_0 = \mu_1$ and $y_n = \mu_2$ are often referred to as the boundary conditions of the system. In many cases they can be more complex, for instance:

$y_0 = \varkappa_1 y_1 + \mu_1; \qquad y_n = \varkappa_2 y_{n-1} + \mu_2,$

where $\varkappa_1, \varkappa_2$ are given numbers. However, in order not to complicate the presentation, we restrict ourselves to the simplest form of the additional conditions.

Taking advantage of the fact that the values $y_0$ and $y_n$ are given, we rewrite the system in the form:

$c_1 y_1 + b_1 y_2 = f_1 - a_1 \mu_1,$
$a_i y_{i-1} + c_i y_i + b_i y_{i+1} = f_i, \qquad i = 2, \dots, n-2,$
$a_{n-1} y_{n-2} + c_{n-1} y_{n-1} = f_{n-1} - b_{n-1} \mu_2.$

The matrix of this system has a tridiagonal structure:

$\begin{pmatrix} c_1 & b_1 & & & \\ a_2 & c_2 & b_2 & & \\ & \ddots & \ddots & \ddots & \\ & & a_{n-2} & c_{n-2} & b_{n-2} \\ & & & a_{n-1} & c_{n-1} \end{pmatrix}.$

This greatly simplifies the solution of the system thanks to a special method called the sweep method.

The method is based on the assumption that the unknowns $y_i$ and $y_{i+1}$ are related by the recurrence relation

$y_i = \alpha_{i+1} y_{i+1} + \beta_{i+1}, \qquad i = 0, 1, \dots, n-1.$

Here the quantities $\alpha_{i+1}$, $\beta_{i+1}$, called the sweep coefficients, are to be determined from the conditions of the problem. In fact, such a procedure replaces the direct determination of the unknowns $y_i$ by the task of determining the sweep coefficients, with the subsequent calculation of the values $y_i$.

To implement the described program, we express $y_{i-1}$, using the relation $y_{i-1} = \alpha_i y_i + \beta_i$, through $y_{i+1}$:

$y_{i-1} = \alpha_i (\alpha_{i+1} y_{i+1} + \beta_{i+1}) + \beta_i,$

and substitute $y_{i-1}$ and $y_i$, expressed in terms of $y_{i+1}$, into the original equations. As a result, we get:

$\bigl(a_i \alpha_i \alpha_{i+1} + c_i \alpha_{i+1} + b_i\bigr) y_{i+1} + \bigl(a_i \alpha_i \beta_{i+1} + a_i \beta_i + c_i \beta_{i+1} - f_i\bigr) = 0, \qquad i = 1, \dots, n-1.$

The latter relations will certainly be fulfilled, and moreover regardless of the solution, if we require that for $i = 1, \dots, n-1$ the equalities hold:

$a_i \alpha_i \alpha_{i+1} + c_i \alpha_{i+1} + b_i = 0, \qquad a_i \alpha_i \beta_{i+1} + a_i \beta_i + c_i \beta_{i+1} = f_i.$

Hence the recurrence relations for the sweep coefficients follow:

$\alpha_{i+1} = -\frac{b_i}{a_i \alpha_i + c_i}, \qquad \beta_{i+1} = \frac{f_i - a_i \beta_i}{a_i \alpha_i + c_i}, \qquad i = 1, \dots, n-1.$

The left boundary condition $y_0 = \mu_1$ and the relation $y_0 = \alpha_1 y_1 + \beta_1$ are consistent if we put

$\alpha_1 = 0, \qquad \beta_1 = \mu_1.$

The remaining values of the sweep coefficients $\alpha_{i+1}$ and $\beta_{i+1}$ are found from these recurrence relations, which completes the stage of calculating the sweep coefficients (the forward sweep).

$y_n = \mu_2.$

The rest of the unknowns are found from here in the process of the backward sweep, using the recurrence formula $y_i = \alpha_{i+1} y_{i+1} + \beta_{i+1}$, $i = n-1, n-2, \dots, 0$.

The number of operations required to solve a general system by the Gaussian method grows with increasing $n$ proportionally to $n^3$. The sweep method reduces to two cycles: first the sweep coefficients are calculated from the recurrence formulas, and then, with their help, the components of the solution are found from the backward recurrence formula. This means that as the size of the system increases, the number of arithmetic operations grows proportionally to $n$, and not to $n^3$. Thus, within the scope of its applicability, the sweep method is significantly more economical. To this should be added the special simplicity of its software implementation on a computer.
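To make the two stages concrete, here is a minimal sketch of the sweep method in the notation used above (arrays a, c, b, f hold the coefficients $a_i, c_i, b_i, f_i$ with position 0 unused; the function name and the NumPy dependency are our own choices, not part of the original text).

```python
import numpy as np

def sweep_solve(a, c, b, f, mu1, mu2):
    """Solve a[i]*y[i-1] + c[i]*y[i] + b[i]*y[i+1] = f[i], i = 1..n-1,
    with boundary values y[0] = mu1, y[n] = mu2, by the sweep method.
    Arrays are indexed so that position i holds the coefficient with
    subscript i; position 0 is unused."""
    n = len(f)                       # equations are numbered 1, ..., n-1
    alpha = np.zeros(n + 1)
    beta = np.zeros(n + 1)
    alpha[1], beta[1] = 0.0, mu1     # consistency with the left boundary condition
    for i in range(1, n):            # forward sweep: the coefficients
        denom = a[i] * alpha[i] + c[i]
        alpha[i + 1] = -b[i] / denom
        beta[i + 1] = (f[i] - a[i] * beta[i]) / denom
    y = np.zeros(n + 1)
    y[n] = mu2                       # right boundary condition
    for i in range(n - 1, -1, -1):   # backward sweep: the solution
        y[i] = alpha[i + 1] * y[i + 1] + beta[i + 1]
    return y

# Usage sketch: n = 4, coefficients with diagonal dominance |c_i| > |a_i| + |b_i|
n = 4
a = np.array([0.0, 1.0, 1.0, 1.0])   # position 0 unused
b = np.array([0.0, 1.0, 1.0, 1.0])
c = np.array([0.0, 4.0, 4.0, 4.0])
y_exact = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
f = np.zeros(n)
for i in range(1, n):
    f[i] = a[i] * y_exact[i - 1] + c[i] * y_exact[i] + b[i] * y_exact[i + 1]
print(sweep_solve(a, c, b, f, y_exact[0], y_exact[n]))  # ~ [1. 2. 3. 4. 5.]
```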

In many applied problems that lead to a SLAE with a tridiagonal matrix, its coefficients satisfy the inequalities

$|c_i| > |a_i| + |b_i|, \qquad i = 1, 2, \dots, n-1,$

which express the property of diagonal dominance. In particular, we will encounter such systems in the third and fifth chapters.

According to the theorem of the previous section, the solution of such systems always exists and is unique. For them, there also holds a statement that is important for the actual calculation of the solution by the sweep method.

Lemma

If the diagonal dominance condition is satisfied for a system with a tridiagonal matrix, then the sweep coefficients satisfy the inequalities

$|\alpha_i| \le 1, \qquad i = 1, 2, \dots, n,$

and the denominators $a_i \alpha_i + c_i$ in the recurrence relations do not vanish.

We carry out the proof by induction. According to the initial conditions, $\alpha_1 = 0$, i.e., for $i = 1$ the lemma is true. Let us now assume that it is true for some $i \le n-1$ and consider $\alpha_{i+1}$:

$|\alpha_{i+1}| = \frac{|b_i|}{|a_i \alpha_i + c_i|} \le \frac{|b_i|}{|c_i| - |a_i|\,|\alpha_i|} \le \frac{|b_i|}{|c_i| - |a_i|} < 1,$

where the last inequality follows from the diagonal dominance condition $|c_i| > |a_i| + |b_i|$; in particular, the denominator $a_i \alpha_i + c_i$ is nonzero. So, the induction step from $i$ to $i+1$ is justified, which completes the proof of the lemma.

The inequality for the sweep coefficients makes the backward sweep stable. Indeed, suppose that the component $y_{i+1}$ of the solution was calculated with some error as a result of rounding. Then, when calculating the next component $y_i$ by the recurrence formula, this error, owing to the inequality $|\alpha_{i+1}| \le 1$, will not increase.
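A quick numerical illustration of the lemma, under the strict dominance assumption $|c_i| > |a_i| + |b_i|$ and in the notation of this section (the random test data and array layout are ours): the forward-sweep coefficients never exceed one in modulus.

```python
import numpy as np

# Forward sweep only: check that |alpha_i| <= 1 when |c_i| > |a_i| + |b_i|.
# a: sub-diagonal, c: main diagonal, b: super-diagonal; index 0 is unused.
n = 50
rng = np.random.default_rng(0)
a = rng.uniform(-1, 1, n)
b = rng.uniform(-1, 1, n)
c = np.abs(a) + np.abs(b) + rng.uniform(0.1, 1.0, n)   # enforce strict dominance
alpha = np.zeros(n + 1)                                # alpha[1] = 0
for i in range(1, n):
    alpha[i + 1] = -b[i] / (a[i] * alpha[i] + c[i])
print(np.max(np.abs(alpha)))   # stays <= 1, as the lemma asserts
```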

NONDEGENERACY OF MATRICES AND THE PROPERTY OF DIAGONAL DOMINANCE¹

© 2013 Lj. Cvetković, V. Kostić, L. A. Krukier


Cvetkovic Ljiljana - Professor, Department of Mathematics and Informatics, Faculty of Science, University of Novi Sad, Serbia, D. Obradovica 4, Novi Sad, Serbia, 21000, e-mail: [email protected]

Kostic Vladimir - Assistant Professor, Department of Mathematics and Informatics, Faculty of Science, University of Novi Sad, Serbia, D. Obradovica 4, Novi Sad, Serbia, 21000, e-mail: [email protected]

Krukier Lev Abramovich - Doctor of Physical and Mathematical Sciences, Professor, Head of the Department of High Performance Computing and Information and Communication Technologies, Director of the Computer Center of the Southern Federal University, Stachki Ave, 200/1, bldg. 2, Rostov-on-Don, Russia, 344090, e-mail: [email protected].

Diagonal dominance of a matrix is a simple condition ensuring its non-degeneracy. Matrix properties that generalize the concept of diagonal dominance are always in great demand. They are viewed as conditions of diagonal dominance type and help to define subclasses of matrices (such as H-matrices) which remain non-degenerate under these conditions. In this paper, new classes of non-degenerate matrices are constructed that retain the advantages of diagonal dominance but lie outside the class of H-matrices. These properties are especially convenient, because many applications lead to matrices from this class, and the theory of non-degeneracy of matrices that are not H-matrices can now be extended.

Key words: diagonal dominance, non-degeneracy, scaling.

While simple conditions that ensure the nonsingularity of matrices are always very welcome, many of those that can be considered as a type of diagonal dominance tend to produce subclasses of the well-known H-matrices. In this paper we construct new classes of nonsingular matrices which keep the usefulness of diagonal dominance, but stand in a general relationship with the class of H-matrices. This property is especially favorable, since many applications that arise from H-matrix theory can now be extended.

Keywords: diagonal dominance, nonsingularity, scaling technique.

The numerical solution of boundary value problems of mathematical physics reduces the original problem, as a rule, to the solution of a system of linear algebraic equations. When choosing a solution algorithm, one needs to know whether the original matrix is non-degenerate. In addition, the question of the non-degeneracy of a matrix is relevant, for example, in the theory of convergence of iterative methods, in the localization of eigenvalues, and in estimating determinants, Perron roots, spectral radii, singular values of a matrix, etc.

Note that one of the simplest, but extremely useful, conditions ensuring that a matrix is non-degenerate is the well-known strict diagonal dominance property (see the references therein).

Theorem 1. Let a matrix $A = [a_{ij}] \in \mathbb{C}^{n \times n}$ be given such that

$|a_{ii}| > r_i(A) := \sum_{j \ne i} |a_{ij}| \qquad (1)$

for all $i \in N := \{1, 2, \dots, n\}$.

Then the matrix A is non-degenerate.

Matrices with property (1) are called matrices with strict diagonal dominance (SDD matrices). Their natural generalization is the class of matrices with generalized diagonal dominance (GDD matrices), defined as follows:

Definition 1. A matrix $A = [a_{ij}] \in \mathbb{C}^{n \times n}$ is called a GDD matrix if there exists a non-degenerate diagonal matrix W such that AW is an SDD matrix.
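As a hedged illustration of Definition 1: given a candidate positive diagonal scaling W (supplied by hand here, since finding a suitable W is a separate problem), one can check whether AW is SDD. The function names and the example matrix are ours.

```python
import numpy as np

def is_sdd(A):
    """Strict diagonal dominance by rows: |a_ii| > sum_{j != i} |a_ij|."""
    A = np.asarray(A, dtype=complex)
    diag = np.abs(np.diag(A))
    return bool(np.all(diag > np.sum(np.abs(A), axis=1) - diag))

def is_gdd_with_scaling(A, w):
    """Check Definition 1 for a *given* positive scaling vector w:
    A is GDD if A @ diag(w) is SDD."""
    W = np.diag(np.asarray(w, dtype=float))
    return is_sdd(np.asarray(A) @ W)

# Example: A is not SDD, but becomes SDD after column scaling by w
A = np.array([[2.0, 3.0],
              [0.5, 2.0]])
print(is_sdd(A), is_gdd_with_scaling(A, [1.0, 0.5]))  # False True
```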

Let us introduce several definitions for a matrix $A = [a_{ij}] \in \mathbb{C}^{n \times n}$.

Definition 2. The matrix $\langle A \rangle = [m_{ij}]$ defined by

$m_{ij} = \begin{cases} |a_{ii}|, & i = j, \\ -|a_{ij}|, & i \ne j, \end{cases}$

is called the comparison matrix of the matrix A.

Definition 3. A matrix $A = [a_{ij}] \in \mathbb{C}^{n \times n}$ with $a_{ii} > 0$ and $a_{ij} \le 0$ for $i \ne j$ is an M-matrix if the inverse matrix $A^{-1} \ge 0$, that is, all its elements are non-negative.

Obviously, matrices from the GDD class are also non-degenerate matrices and can be found in the literature under the name of non-degenerate H-matrices. They can be characterized using the following necessary and sufficient condition:

¹This work was partially supported by the Ministry of Education and Science of Serbia, grant 174019, and by the Ministry of Science and Technological Development of Vojvodina, grants 2675 and 01850.

Theorem 2. A matrix $A = [a_{ij}] \in \mathbb{C}^{n \times n}$ is an H-matrix if and only if its comparison matrix $\langle A \rangle$ is a non-degenerate M-matrix.

By now, many subclasses of non-degenerate H-matrices have been studied, but all of them are considered from the point of view of generalizing the strict diagonal dominance property (see the references therein).

In this paper, we consider the possibility of going beyond the class of H-matrices by generalizing the SDD class in a different way. The basic idea is to keep using the scaling approach, but with matrices that are not diagonal.

Non-degeneracy theorems

Consider a matrix $A = [a_{ij}] \in \mathbb{C}^{n \times n}$ and an index $k \in N$. Throughout the paper, for $i, k \in N := \{1, 2, \dots, n\}$, we use the notation

$r_i(A) := \sum_{j \ne i} |a_{ij}|,$

together with the quantities $R_k(A)$, $\beta_k(A)$ and $\gamma_k(A)$ built in an analogous way from the $k$-th row and column of $A$. We introduce a non-diagonal scaling matrix $L_k$ and consider the matrix $L_k A L_k^{-1}$, $k \in N$, which is similar to the original matrix $A$. Let us find conditions under which this matrix has the SDD property (by rows or by columns). It is easy to check that the elements of the matrix $L_k A L_k^{-1}$ are expressed through $\beta_k(A)$, $\gamma_k(A)$ and the entries $a_{ij}$, according to which of the cases $i = j = k$, $i = j \ne k$, $i = k,\ j \ne k$, $i \ne k,\ j = k$ holds, and coincide with $a_{ij}$ in all other cases.

If we apply Theorem 1 to the matrix $L_k A L_k^{-1}$ described above and to its transpose, we obtain two main theorems.

Theorem 3. Let an arbitrary matrix $A = [a_{ij}] \in \mathbb{C}^{n \times n}$ with nonzero diagonal elements be given. If there exists $k \in N$ such that the dominance condition (5), involving $\gamma_k(A)$, holds together with the corresponding inequality for each $i \in N \setminus \{k\}$, then the matrix A is non-degenerate.

Theorem 4. Let an arbitrary matrix $A = [a_{ij}] \in \mathbb{C}^{n \times n}$ with nonzero diagonal elements be given. If there exists $k \in N$ such that the analogous column condition (6) holds together with the corresponding inequality for each $i \in N \setminus \{k\}$, then the matrix A is non-degenerate.

A natural question arises about the relationship between the matrices from the previous two theorems — the $L_k^R$-SDD matrices (defined by condition (5)) and the $L_k^C$-SDD matrices (defined by condition (6)) — and the class of H-matrices. The following simple example makes this clear.

Example. Consider the following four matrices:

$\begin{pmatrix} 2 & 2 & 1 \\ 1 & 3 & -1 \\ 1 & 1 & 1 \end{pmatrix}, \qquad \begin{pmatrix} 2 & 1 & 1 \\ -1 & 2 & 1 \\ 1 & 2 & 3 \end{pmatrix}, \qquad \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & -1 \\ 1 & 1 & 5 \end{pmatrix}.$

They are all non-degenerate:

A1 is $L^R$-SDD, despite the fact that it is not $L_k^C$-SDD for any $k \in \{1, 2, 3\}$. It is also not an H-matrix, since $\langle A_1 \rangle^{-1}$ is not non-negative;

due to symmetry, A2 is simultaneously $L^R$-SDD and $L^C$-SDD, but it is not an H-matrix, since $\langle A_2 \rangle$ is singular;

A3 is $L^C$-SDD, but it is neither $L_k^R$-SDD (for any $k \in \{1, 2, 3\}$) nor an H-matrix, since $\langle A_3 \rangle$ is also singular;

A4 is an H-matrix, since $\langle A_4 \rangle$ is non-degenerate and $\langle A_4 \rangle^{-1} \ge 0$, although it is neither $L_k^R$-SDD nor $L_k^C$-SDD for any $k \in \{1, 2, 3\}$.

The figure shows the general relationship between the $L^R_k$-SDD and $L^C_k$-SDD matrices and the H-matrices, together with the matrices from the previous example.

[Figure: the relationship between the $L^R_k$-SDD, $L^C_k$-SDD and H-matrix classes.]

It is interesting to note that although we obtained the class of $L^C_k$-SDD matrices by applying Theorem 1 to the transpose of the matrix $L_k A L_k^{-1}$, this class does not coincide with the class obtained by applying the same construction to the matrix $A^T$.

Let us introduce the corresponding definitions.

Definition 4. A matrix A is called $L^R_k$-SDD by columns if $A^T$ is an $L^R_k$-SDD matrix.

Definition 5. A matrix A is called $L^C_k$-SDD by columns if $A^T$ is an $L^C_k$-SDD matrix.

The examples show that the classes $L^R_k$-SDD, $L^C_k$-SDD, $L^R_k$-SDD by columns and $L^C_k$-SDD by columns are related to one another in a general way. Thus, we have extended the class of H-matrices in four different ways.

Application of new theorems

Let us illustrate the usefulness of the new results in estimating the $\infty$-norm of the inverse matrix.

For an arbitrary matrix A with strict diagonal dominance, the well-known Varah theorem gives the estimate

$\|A^{-1}\|_\infty \le \frac{1}{\min_{i} \bigl(|a_{ii}| - r_i(A)\bigr)}.$

Starting from this inequality and applying it to the matrix $L_k A L_k^{-1}$, we obtain the following result.

Theorem 5. Let an arbitrary matrix $A = [a_{ij}] \in \mathbb{C}^{n \times n}$ with nonzero diagonal elements be given. If A belongs to the class $L^R_k$-SDD, then

$\|A^{-1}\|_\infty \le \frac{1 + \max_{i \ne k} |a_{ik}| / |a_{kk}|}{\min\Bigl[\,|\beta_k(A)| - r_k(A),\ \min_{i \ne k}\bigl(|\gamma_i(A)| - r_i^k(A) - |a_{ik}|\bigr)\Bigr]}.$

In a similar way we obtain the following result for $L^C_k$-SDD matrices by columns.

Theorem 6. Let an arbitrary matrix $A = [a_{ij}] \in \mathbb{C}^{n \times n}$ with nonzero diagonal entries be given. If A belongs to the class $L^C_k$-SDD by columns, then an analogous upper bound for $\|A^{-1}\|_\infty$ holds, in which the quantities $r_k(A)$ and $r_i^k(A)$ are replaced by the corresponding quantities formed from $A^T$.

The importance of this result lies in the fact that for many subclasses of non-degenerate H-matrices bounds of this type exist, whereas for non-degenerate matrices that are not H-matrices obtaining them is a nontrivial problem. Consequently, bounds of the kind given in the previous theorems are in great demand.
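As an illustration of the kind of estimate discussed here, the classical Varah bound for SDD matrices quoted above can be evaluated directly and compared with the exact norm of the inverse. This is a NumPy sketch; the function name and test matrix are ours.

```python
import numpy as np

def varah_bound(A):
    """Varah's estimate for a strictly diagonally dominant matrix:
    ||A^{-1}||_inf <= 1 / min_i(|a_ii| - r_i(A)), r_i(A) = sum_{j != i} |a_ij|."""
    A = np.asarray(A, dtype=float)
    diag = np.abs(np.diag(A))
    gap = diag - (np.sum(np.abs(A), axis=1) - diag)
    if np.any(gap <= 0):
        raise ValueError("matrix is not strictly diagonally dominant")
    return 1.0 / gap.min()

A = np.array([[4.0, -1.0, 1.0],
              [1.0,  5.0, 2.0],
              [0.0, -2.0, 6.0]])
exact = np.linalg.norm(np.linalg.inv(A), np.inf)
print(exact, varah_bound(A))   # the bound is an upper estimate for the exact norm
```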

Literature

Levy L. Sur la possibilité de l'équilibre électrique // C. R. Acad. Sci. Paris. 1881. Vol. 93. P. 706-708.

Horn R.A., Johnson C.R. Matrix Analysis. Cambridge, 1994.

Varga R.S. Geršgorin and His Circles // Springer Series in Computational Mathematics. 2004. Vol. 36. 226 p.

Berman A., Plemmons R.J. Nonnegative Matrices in the Mathematical Sciences. SIAM Series Classics in Applied Mathematics. 1994. Vol. 9. 340 p.

Cvetkovic Lj. H-matrix theory vs. eigenvalue localization // Numer. Algor. 2006. Vol. 42. P. 229-245.

Cvetkovic Lj., Kostic V., Kovacevic M., Szulc T. Further results on H-matrices and their Schur complements // Appl. Math. Comput. 2008. Vol. 198. P. 506-510.

Varah J.M. A lower bound for the smallest singular value of a matrix // Linear Algebra Appl. 1975. Vol. 11. P. 3-5.

Received by the editors

A matrix $A_{n \times n}$ has the diagonal dominance property if

$|a_{ii}| \geqslant \sum_{j \neq i} |a_{ij}|, \qquad i = 1, \dots, n,$

and at least one of the inequalities is strict. If all the inequalities are strict, then the matrix $A_{n \times n}$ is said to possess strict diagonal dominance.

Diagonally dominant matrices appear quite frequently in applications. Their main advantage is that iterative methods for solving SLAEs with such a matrix (the simple iteration method, the Seidel method) converge to the exact solution, which exists and is unique for any right-hand side.

Properties

  • A strictly diagonally dominant matrix is nondegenerate.


SAINT PETERSBURG STATE UNIVERSITY

Faculty of Applied Mathematics - Control Processes

A. P. IVANOV

PRACTICE ON NUMERICAL METHODS

SOLUTION OF SYSTEMS OF LINEAR ALGEBRAIC EQUATIONS

Methodical instructions

Saint Petersburg

CHAPTER 1. SUPPORTING INFORMATION

The methodological manual provides a classification of methods for solving SLAEs and algorithms for their application. The methods are presented in a form that allows their use without reference to other sources. It is assumed that the matrix of the system is nonsingular, i.e. det A ≠ 0.

§1. Norms of vectors and matrices

Recall that a linear space Ω of elements x is called normed if a function ‖·‖_Ω is introduced in it, defined for all elements of the space Ω and satisfying the conditions:

1. ‖x‖_Ω ≥ 0, and ‖x‖_Ω = 0 ⇔ x = 0_Ω;

2. ‖λx‖_Ω = |λ| ‖x‖_Ω;

3. ‖x + y‖_Ω ≤ ‖x‖_Ω + ‖y‖_Ω.

We agree in what follows to denote vectors by lowercase Latin letters and to consider them column vectors, to denote matrices by capital Latin letters, and to denote scalar quantities by Greek letters (reserving the letters i, j, k, l, m, n for integers).

The most commonly used vector norms include the following:

1. ‖x‖₁ = Σᵢ |xᵢ|;

2. ‖x‖₂ = (Σᵢ xᵢ²)^(1/2);

3. ‖x‖∞ = maxᵢ |xᵢ|.
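For reference, the three vector norms can be computed directly; a small NumPy sketch:

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0])
print(np.sum(np.abs(x)),        # ||x||_1
      np.sqrt(np.sum(x**2)),    # ||x||_2
      np.max(np.abs(x)))        # ||x||_inf
```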

Note that all norms in the space Rⁿ are equivalent, that is, any two norms ‖x‖ᵢ and ‖x‖ⱼ are related by the relations

αᵢⱼ ‖x‖ⱼ ≤ ‖x‖ᵢ ≤ βᵢⱼ ‖x‖ⱼ,    α̃ᵢⱼ ‖x‖ᵢ ≤ ‖x‖ⱼ ≤ β̃ᵢⱼ ‖x‖ᵢ,

where αᵢⱼ, βᵢⱼ, α̃ᵢⱼ, β̃ᵢⱼ do not depend on x. Moreover, in a finite-dimensional space any two norms are equivalent.

The space of matrices, with the naturally introduced operations of addition and multiplication by a number, forms a linear space in which the concept of a norm can be introduced in many ways. However, most often one considers the so-called subordinate norms, i.e., norms related to vector norms by the relation

‖A‖ = sup_{x≠0} ‖Ax‖ / ‖x‖.

Denoting the subordinate matrix norms by the same indices as the corresponding vector norms, one can establish that

‖A‖₁ = maxⱼ Σᵢ |aᵢⱼ|;   ‖A‖₂ = (maxᵢ λᵢ(AᵀA))^(1/2);   ‖A‖∞ = maxᵢ Σⱼ |aᵢⱼ|.

Here λᵢ(AᵀA) denote the eigenvalues of the matrix AᵀA, where Aᵀ is the matrix transposed to A. In addition to the three main properties of a norm noted above, we note two more:

‖AB‖ ≤ ‖A‖ ‖B‖,

‖Ax‖ ≤ ‖A‖ ‖x‖,

where in the last inequality the matrix norm is subordinate to the corresponding vector norm. We agree to use in what follows only matrix norms that are subordinate to vector norms. Note that for such norms the following equality holds: if E is the identity matrix, then ‖E‖ = 1.
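A short sketch computing the three subordinate matrix norms by their explicit formulas and checking the consistency inequality ‖Ax‖ ≤ ‖A‖ ‖x‖ (NumPy; the function names are ours):

```python
import numpy as np

def norm1(A):      # maximum column sum
    return np.max(np.sum(np.abs(A), axis=0))

def norm_inf(A):   # maximum row sum
    return np.max(np.sum(np.abs(A), axis=1))

def norm2(A):      # square root of the largest eigenvalue of A^T A
    return np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))

A = np.array([[1.0, -2.0], [3.0, 4.0]])
x = np.array([1.0, -1.0])
print(norm1(A), norm2(A), norm_inf(A))
print(np.max(np.abs(A @ x)) <= norm_inf(A) * np.max(np.abs(x)))  # ||Ax||_inf <= ||A||_inf ||x||_inf
```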

§2. Diagonally dominant matrices

Definition 2.1. A matrix A with elements (aᵢⱼ), i, j = 1, …, n, is called a matrix with diagonal dominance (of magnitude δ) if the inequalities

|aᵢᵢ| − Σ_{j≠i} |aᵢⱼ| ≥ δ > 0,  i = 1, …, n,

hold.

§3. Positive definite matrices

Definition 3.1. A symmetric matrix A is called positive definite if the quadratic form xᵀAx with this matrix takes only positive values for any vector x ≠ 0.

A criterion for the positive definiteness of a matrix is the positivity of its eigenvalues or the positivity of its leading principal minors.

§4. The condition number of a SLAE

When solving any problem, as is well known, there are three sources of error: the irremovable (inherent) error, the error of the method, and the rounding error. Let us consider the influence of the irremovable error in the initial data on the solution of a SLAE, neglecting the rounding error and assuming that there is no error of the method.

Consider the system

A x = b.   (4.1)

Let the matrix A be known exactly, and let the right-hand side b contain an irremovable error δb.

Then for the relative error of the solution ‖δx‖ / ‖x‖ it is easy to obtain the estimate

‖δx‖ / ‖x‖ ≤ ν(A) ‖δb‖ / ‖b‖,   (4.2)

where ν(A) = ‖A‖ ‖A⁻¹‖.

The number ν(A) is called the condition number of system (4.1) (or of the matrix A). It turns out that ν(A) ≥ 1 for any matrix A. Since the value of the condition number depends on the choice of the matrix norm, when a specific norm is chosen we index ν(A) accordingly: ν₁(A), ν₂(A) or ν∞(A).

In the case ν(A) ≫ 1, system (4.1), or the matrix A, is called ill-conditioned. In this case, as follows from estimate

(4.2), the error in the solution of system (4.1) may turn out to be unacceptably large. The notion of an acceptable or unacceptable error is determined by the statement of the problem.

For a matrix with diagonal dominance it is easy to obtain an upper bound for its condition number. The following theorem holds.

Theorem 4.1. Let A be a matrix with diagonal dominance of magnitude δ > 0. Then it is nonsingular and ν∞(A) ≤ ‖A‖∞ / δ.
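A brief numerical check of Theorem 4.1 (a NumPy sketch; the function names and the test matrix are ours): compute ν∞(A) directly and compare it with the bound ‖A‖∞ / δ.

```python
import numpy as np

def cond_inf(A):
    """Condition number nu_inf(A) = ||A||_inf * ||A^{-1}||_inf."""
    A = np.asarray(A, dtype=float)
    return np.linalg.norm(A, np.inf) * np.linalg.norm(np.linalg.inv(A), np.inf)

def dominance_delta(A):
    """Diagonal dominance margin delta = min_i(|a_ii| - sum_{j != i} |a_ij|)."""
    A = np.asarray(A, dtype=float)
    diag = np.abs(np.diag(A))
    return np.min(diag - (np.sum(np.abs(A), axis=1) - diag))

A = np.array([[10.0, 2.0, 1.0],
              [ 1.0, 8.0, 2.0],
              [ 2.0, 1.0, 9.0]])
delta = dominance_delta(A)                             # > 0, so Theorem 4.1 applies
print(cond_inf(A), np.linalg.norm(A, np.inf) / delta)  # nu_inf(A) <= ||A||_inf / delta
```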

§5. An example of an ill-conditioned system.

Consider SLAE (4.1) in which

A =
 1 −1 −1 … −1
 0  1 −1 … −1
 …  …  …  …  …
 0  0  0 …  1

(an upper triangular matrix with ones on the diagonal and −1 in every entry above the diagonal).

This system has the unique solution x = (0, 0, …, 0, 1)ᵀ. Let the right-hand side of the system contain the error δb = (0, 0, …, 0, ε)ᵀ, ε > 0. Then

δxₙ = ε, δxₙ₋₁ = ε, δxₙ₋₂ = 2ε, …, δxₙ₋ₖ = 2^(k−1) ε, …, δx₁ = 2^(n−2) ε.

Therefore

‖δx‖∞ = 2^(n−2) ε, ‖x‖∞ = 1, ‖δb‖∞ = ε, ‖b‖∞ = 1.

Hence,

ν∞(A) ≥ (‖δx‖∞ / ‖x‖∞) : (‖δb‖∞ / ‖b‖∞) = 2^(n−2).

Since ‖A‖∞ = n, it follows that ‖A⁻¹‖∞ ≥ n⁻¹ 2^(n−2), although det(A⁻¹) = (det A)⁻¹ = 1. Let, for example, n = 102. Then ν(A) ≥ 2^100 > 10^30. Moreover, even if ε = 10^(−15), we obtain ‖δx‖∞ > 10^15. Thus, a negligible perturbation of the right-hand side leads to a catastrophically large error in the solution, even though det A = 1.
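The effect is easy to reproduce numerically for a moderate order n (a NumPy sketch; the helper name is ours):

```python
import numpy as np

def bad_matrix(n):
    """Upper triangular matrix with 1 on the diagonal and -1 above it."""
    return np.triu(-np.ones((n, n)), k=1) + np.eye(n)

n = 30                                    # keep n moderate to avoid overflow
A = bad_matrix(n)
x = np.zeros(n); x[-1] = 1.0              # exact solution (0, ..., 0, 1)^T
b = A @ x
eps = 1e-10
db = np.zeros(n); db[-1] = eps            # perturbation of the right-hand side
dx = np.linalg.solve(A, b + db) - x
print(np.linalg.norm(dx, np.inf))         # ~ 2^(n-2) * eps
print(np.linalg.cond(A, np.inf))          # condition number nu_inf(A)
```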
