- Research
- Open access
- Published:
The iterative methods for solving nonlinear matrix equation
Advances in Difference Equations volume 2013, Article number: 229 (2013)
Abstract
In this paper, we study the matrix equation , where A and B are square matrices, and Q is a positive definite matrix, and propose the iterative methods for finding positive definite solutions of the matrix equation. Also, general convergence results for the basic fixed point iteration for these equations are given. Some numerical examples are presented to show the usefulness of the iterations.
MSC:65F10, 65F30, 65H10, 15A24.
1 Introduction
In this paper, we consider the matrix equation
where A and B are square matrices, Q is a positive definite matrix. It is easy to see that matrix equation (1) can be reduced to
where I is the identity matrix. Trying to solve special linear systems [1] leads to solving nonlinear matrix equations of the above types as follows.
For a linear system with
positive definite, we rewrite , where
Moreover, we decompose to the LU decomposition
A decomposition of exists if and only if X is a positive definite solution of matrix equation (1). Solving the linear system is equivalent to solving two linear systems with a lower and upper block triangular system matrix. To compute the solution of from y, the Woodbury formula [2] can be applied.
The matrix equation has been studied extensively by many authors [3–14]. Several conditions for the existence of positive definite solutions and some iterations to find maximal positive definite solutions for these equations were discussed. Apparently, matrix equation (1) generalizes the matrix equation .
Matrix equation (2) was studied in [15], and based on some conditions, the authors proved that matrix equation (2) has positive definite solutions. They also proposed two iterative methods to find the Hermitian positive definite solutions of matrix equation (2). They did not analyze the convergence rate of proposed algorithms.
In this paper, we propose two algorithms. We will show that Algorithm (7) is more accurate than Algorithm (3) pointed out in [15]. Also, Algorithm (10) needs less operation in comparison with Algorithm (3). The following notations are used throughout the rest of the paper. The notation () means that A is Hermitian positive semidefinite (positive definite). For Hermitian matrices A and B, we write () if (>0). Similarly, by and we denote, respectively, the maximal and the minimal eigenvalues of A. The norm used in this paper is the spectral norm of the matrix A, i.e., .
2 Fixed point theorems
Lemma 1 [8]
If C and P are Hermitian matrices of the same order with , then .
In [15] an algorithm that avoids matrix inversion for every iteration, called inversion-free variant of the basic fixed point iteration, and a theorem to find a Hermitian positive definite solution of matrix equation (2) were proposed as follows.
Algorithm 1 [15]
Let
Theorem 1 [15]
Assume that matrix equation (2) has a positive definite solution. Then Algorithm (3) defines a monotonically decreasing matrix sequence converging to the positive definite matrix X which is a solution of matrix equation (2). Also, the sequence defined in Algorithm (3) defines a monotonically increasing sequence converging to .
Although it is not mentioned in the previous theorem that the sequence converges to the maximal Hermitian positive definite solution of equation (2), during the proof of the theorem in [15], it is obvious. So, , where is the maximal positive definite solution of matrix equation (2) in Theorem 1.
The problem of convergence rate for Algorithm (3) was not considered in [15]. We now establish the following result.
Theorem 2 If matrix equation (2) has a positive definite solution, for Algorithm (1) and any , we have
and
for all n large enough.
Proof From Algorithm (3), we have
Thus
Now, since and , inequality (4) follows. Also, inequality (5) is true since
This completes the proof. □
The above proof shows that Algorithm (3) should be modified as follows to improve the preceding convergence properties.
Algorithm 2 Let
Theorem 3 Assume that matrix equation (2) has a positive definite solution. Then Algorithm (7) defines a monotonically decreasing matrix sequence converging to which is the maximal Hermitian positive definite solution of equation (2). Also, the sequence defined in Algorithm (7) defines a monotonically increasing sequence converging to .
Proof Let be a positive definite solution of matrix equation (2). It is clear that
is true for . Assume (8) is true for . By Lemma 1, we have that
Therefore,
Since , we have . Thus
and
We have now proved (8) for . Therefore, (8) is true for all n, and and exist. So, we have and . □
Similar to Theorem 2, we can state the following theorem.
Theorem 4 If matrix equation (2) has a positive definite solution for Algorithm (7) and any , then we have
and
for all n large enough.
Now, we can see that Algorithm (7) can be faster than Algorithm (3) from the estimates in Theorems 2 and 4.
Algorithm 3 Take
Algorithm (10) requires only five matrix multiplications per step, whereas Algorithm (3) requires six matrix multiplications per step.
Theorem 5 If matrix equation (2) has a positive definite solution and the two sequences and are determined by Algorithm (10), then is monotone decreasing and converges to the maximal Hermitian positive definite solution . Also, the sequence defined in Algorithm (10) is a monotonically increasing sequence converging to .
Proof Let be a positive definite solution of equation (2). We prove that
and
Since is a solution of matrix equation (1), . Also, we have , and so
i.e., .
For the sequence , since , .
Thus inequalities (11) and (12) are true for . Now, assume that inequalities (11) and (12) are true for , i.e.,
and
We show that inequalities (11) and (12) are true for . We have
and
i.e., . Then
and , since . On the other hand,
i.e., .
Then the above inequalities are true for all n, also and exist. By taking limit on Algorithm (10), we have and , where is the maximal positive definite solution of matrix equation (2). □
By Algorithm (10), we have . Then, for small , can be one stopping condition.
Theorem 6 If matrix equation (2) has a positive definite solution and after n iterative steps of Algorithm (10), the inequality implies
Proof Since
This completes the proof. □
Theorem 7 If for every n, then matrix equation (2) has a Hermitian positive definite solution.
Proof Since for every n, the proof of the monotonicity of and noted in Theorem 5 remains valid. Therefore, the sequence is monotone decreasing and bounded from below by the zero matrix. Then exists. We claim that the sequence is bounded above. Suppose that it does not hold. Then, for every , there exists such that . Since each is positive definite for every n, we have
Furthermore, since A or B are nonsingular for every , we have
By [[16], Lemma 1.2],
which is a contradiction. Then the sequence is bounded above and convergent. Suppose that . As and is monotone increasing, . Taking limit in Algorithm (10) implies that
Since , , and hence . Then matrix equation (2) has a positive definite solution. □
Theorem 8 If matrix equation (2) has a positive definite solution and and , then the sequence defined in Algorithm (10) satisfies
and
for all n large enough.
Proof From Algorithm (10), we have
Thus
Therefore, we have
Now, since , inequality (15) follows. Also, inequality (16) is true since
This completes the proof. □
3 Numerical examples
In this section, we present some numerical examples to show the effectiveness of the new inversion-free variant of the basic fixed point iteration methods. Hermitian positive definite solutions of matrix equation (2) for different matrices A and B are computed. We will compare the suggested algorithms, Algorithm (7) and Algorithm (10), by Algorithm (3). All programs were written in MATLAB.
Example 1 Consider equation (2) with
Algorithm (3) needs six iterations to get the solution
Algorithm (7) needs six iterations to get the solution
We can easily see that Algorithm (7) is more accurate than Algorithm (3).
Algorithm (10) needs six iterations to get the solution
Example 2 Consider equation (2) with
Algorithm (3) after 21 iterations gives the solution
Algorithm (7) after 21 iterations gives the solution
Algorithm (10) after 21 iterations gives the solution
We can see that Algorithm (10) needs to find a Hermitian positive definite solution.
References
Buzbee BL, Golub GH, Nielson CW: On direct methods for solving Poisson’s equations. SIAM J. Numer. Anal. 1970, 7: 627-656. 10.1137/0707049
Housholder AS: The Theory of Matrices in Numerical Analysis. Blaisdell, New York; 1964.
Engwerda JC, Ran AC, Rijkeboer AL:Necessary and sufficient conditions for the existence of a positive definite solution of the matrix equation . Linear Algebra Appl. 1993, 186: 255-275.
Engwerda JC:On the existence of a positive definite solution of the matrix equation . Linear Algebra Appl. 1993, 194: 91-108.
Lancaster P, Rodman L: Algebraic Riccati Equations. Oxford Science Publishers, Oxford; 1995.
Zhan X, Xie J:On the matrix equation . Linear Algebra Appl. 1996, 247: 337-345.
Anderson WN, Morley TD, Trapp GE:Positive solutions of the matrix equation . Linear Algebra Appl. 1990, 134: 53-62.
Zhan X: Computing the extremal positive definite solutions of a matrix equation. SIAM J. Sci. Comput. 1996, 17: 1167-1174. 10.1137/S1064827594277041
Shufang X:On the maximal solution for the matrix equation . Acta Sci. Nat. Univ. Pekin. 2000, 36: 29-38.
Guo CH, Lancaster P: Iterative solution of two matrix equations. Math. Comput. 1999, 68: 1589-1603. 10.1090/S0025-5718-99-01122-9
Meini B:Efficient computation of the extreme solutions of and . Math. Comput. 2002, 71: 1189-1204.
El-Sayed SM, Ran ACM: On an iteration method for solving a class of nonlinear matrix equation. SIAM J. Matrix Anal. Appl. 2001, 23: 632-645.
Ivanov IG, Hasanov VI, Uhlig F:Improved methods and starting values to solve the matrix equations iteratively. Math. Comput. 2004, 74: 263-278. 10.1090/S0025-5718-04-01636-9
El-Sayed SM, Al-Dbiban AM:A new inversion free iteration for solving the equation . J. Comput. Appl. Math. 2005, 181: 148-156. 10.1016/j.cam.2004.11.025
Long JH, Hu XY, Zhang L:On the Hermitian positive definite solution of the matrix equation . Bull. Braz. Math. Soc. 2008, 39: 371-386. 10.1007/s00574-008-0011-7
Chen G, Huang X, Yang X: Vector Optimization: Set-Valued and Variational Analysis. Springer, Berlin; 2005.
Acknowledgements
The authors are grateful to the two anonymous reviewers for their valuable comments and suggestions.
Author information
Authors and Affiliations
Corresponding author
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors carried out the proof. All authors conceived of the study and participated in its design and coordination. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Vaezzadeh, S., Vaezpour, S.M., Saadati, R. et al. The iterative methods for solving nonlinear matrix equation . Adv Differ Equ 2013, 229 (2013). https://doi.org/10.1186/1687-1847-2013-229
Received:
Accepted:
Published:
DOI: https://doi.org/10.1186/1687-1847-2013-229