A Note on the Reverse Order Law for g-Inverse of Operator Product

The generalized inverse has many important applications in matrix theory and statistics. One of the core problems concerning generalized inverses is finding necessary and sufficient conditions for the reverse order laws for the generalized inverse of an operator product. In this paper, we study the reverse order law for the g-inverse of an operator product T1T2T3 using the technique of matrix forms of bounded linear operators. In particular, some necessary and sufficient conditions for the inclusion T3{1}T2{1}T1{1} ⊆ (T1T2T3){1} are presented. Moreover, some finite-dimensional results are extended to infinite-dimensional settings.


Introduction
Throughout this paper, 'an operator' means 'a bounded linear operator between Hilbert spaces'. H, I, J and K denote arbitrary Hilbert spaces. L(H, K) denotes the set of all bounded linear operators from H to K, and L(H) = L(H, H). I denotes the identity operator and O the zero operator. For an operator T ∈ L(H, K), T*, R(T) and N(T) denote the adjoint operator, the range and the null-space of T, respectively.
The concept of the generalized inverse has proved very useful in various applied mathematical settings, for example, in singular differential and difference equations, Markov chains, cryptography, iterative methods and multibody system dynamics; see [2,3,5-8]. In these settings, large-scale scientific computing problems eventually reduce to least squares problems. Using generalized inverses to derive fast and effective iterative algorithms for such least squares problems has attracted considerable attention, and many interesting results have been obtained; see [2,3,9-11].
Suppose Ti, i = 1, 2, 3, and β are bounded linear operators over Hilbert spaces such that the product T1T2T3 is meaningful. The least squares (LS) problem, which arises in many practical scientific problems, is finding x that minimizes the norm

‖T1T2T3x − β‖ = min. (1)

Any solution x of the above LS problem can be expressed as x = (T1T2T3)†β + (I − (T1T2T3)†T1T2T3)z, where z is arbitrary. The unique minimal norm least squares solution x of the above LS problem is

x = (T1T2T3)†β. (2)

One of the problems concerning the above LS problem is under what conditions the reverse order law

(T1T2T3)† = T3†T2†T1† (3)

holds. If Formula (3) is true, then, according to the reverse order law (3) and iterative algorithm theory, we can naturally construct ideal iterative sequences and then design fast and effective iterative algorithms to solve Formula (2). If Formula (3) is not necessarily true, can we find necessary and sufficient conditions for Formula (3)? Applying the reverse order law to design fast and effective iterative algorithms for Formula (2) avoids repeated decompositions of the associated matrices and preserves the structure of the iterative sequence at each iteration, which reduces the amount of machine storage, maintains the convergence and stability of the algorithm, and improves the computation speed; see [2,3,9,11-13].
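In finite dimensions, the minimal norm least squares solution can be computed directly with the Moore-Penrose inverse. The sketch below uses hypothetical random matrices T1, T2, T3 and a vector beta as stand-ins for the operators in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite-dimensional sketch: the minimal norm least squares solution of
# min_x ||T1 T2 T3 x - beta|| is x = (T1 T2 T3)^+ beta.
T1 = rng.standard_normal((5, 4))
T2 = rng.standard_normal((4, 3))
T3 = rng.standard_normal((3, 2))
beta = rng.standard_normal(5)

T = T1 @ T2 @ T3
x = np.linalg.pinv(T) @ beta          # minimal norm LS solution

# Any LS solution satisfies the normal equations T* T x = T* beta.
assert np.allclose(T.T @ T @ x, T.T @ beta)

# The reverse order product T3^+ T2^+ T1^+ need not equal (T1 T2 T3)^+,
# which is exactly why conditions for the reverse order law matter.
rev = np.linalg.pinv(T3) @ np.linalg.pinv(T2) @ np.linalg.pinv(T1)
print(np.allclose(np.linalg.pinv(T), rev))
```

When the reverse order law does hold, the single decomposition of T1T2T3 can be replaced by the three cheaper factor-wise inverses, which is the computational motivation discussed above.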
For generalized inverses of matrix products, Greville [7] first gave a necessary and sufficient condition for (AB)† = B†A†. Since then, the reverse order laws for generalized inverses of matrix products have been studied widely. Hartwig [8] derived necessary and sufficient conditions for the Moore-Penrose inverse of a product of three matrices, and Y. Tian [14] obtained the reverse order law for the Moore-Penrose inverse of a product of multiple matrices. M. Wei [15] and De Pierro [16], respectively, derived necessary and sufficient conditions for the reverse order laws B{1}A{1} ⊆ (AB){1} and B{1, 2}A{1, 2} ⊆ (AB){1, 2} by applying the product singular value decomposition (PSVD). M. Wei [17] then deduced necessary and sufficient conditions for reverse order laws for g-inverses of multiple matrix products. For An{1, 2, k}An−1{1, 2, k} · · · A1{1, 2, k} ⊆ (A1A2 · · · An){1, 2, k}, k = 3, 4, Xiong and Zheng [18] presented equivalent conditions using extremal ranks of the generalized Schur complement.
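One well-known special case in which the Moore-Penrose reverse order law for two matrices does hold is when the left factor has full column rank and the right factor has full row rank. The sketch below (with hypothetical random A and B) checks this numerically:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sufficient condition assumed for illustration: if A has full column
# rank and B has full row rank, then (AB)^+ = B^+ A^+.
A = rng.standard_normal((6, 3))   # full column rank (almost surely)
B = rng.standard_normal((3, 5))   # full row rank (almost surely)

lhs = np.linalg.pinv(A @ B)
rhs = np.linalg.pinv(B) @ np.linalg.pinv(A)
assert np.allclose(lhs, rhs)      # reverse order law holds in this case
```

For general A and B the two sides differ, which is what the necessary and sufficient conditions of Greville, Hartwig, Tian and Wei characterize.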
For generalized inverses of operator products, Bouldin [5] and Izumino [19] extended the results of Greville [7] to bounded linear operators on Hilbert spaces, using the gaps between subspaces. Let T1 ∈ L(H, L) and T2 ∈ L(K, H), so that the product T1T2 is meaningful. Using the technique of matrix forms of bounded linear operators, D. S. Djordjević [20] showed that the reverse order law T2†T1† = (T1T2)† holds if, and only if, certain conditions on the ranges of T1 and T2 are satisfied. Kohila et al. [21] obtained necessary and sufficient conditions for the reverse order law of the Moore-Penrose inverse in rings with involution. In [22], D. S. Cvetković-Ilić et al. studied this reverse order law for the Moore-Penrose inverse in C*-algebras. The reader can find more results on the reverse order law for the Moore-Penrose inverse of operator products in [23-27].
Recently, Xiong and Qin [28,29] studied the reverse order laws for the {1, 3}-inverse and {1, 4}-inverse of operator products using the technique of matrix forms of bounded linear operators [30], and derived some equivalent conditions for these reverse order laws. Following the same threads as [28,29], in this paper we study the reverse order law for the g-inverse of an operator product T1T2T3. In particular, some necessary and sufficient conditions for the reverse order law are presented. Moreover, some finite-dimensional results are extended to infinite-dimensional settings.

A Set of Lemmas
As the main tools of our discussion, we first present the following lemmas.

Lemma 1 ([30]). Let T ∈ L(H, K) have a closed range, let H1 and H2 be closed and mutually orthogonal subspaces of H such that H1 ⊕ H2 = H, and let K1 and K2 be closed and mutually orthogonal subspaces of K such that K = K1 ⊕ K2. Then the operator T has the following matrix representation with respect to the orthogonal sums of subspaces:

T = [T11 T12; T21 T22] : H1 ⊕ H2 → K1 ⊕ K2,

where T11 is invertible on R(T*).
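In finite dimensions, the special choice H1 = R(T*), H2 = N(T), K1 = R(T), K2 = N(T*) makes the matrix of T block "upper-left" with an invertible corner; adapted orthonormal bases can be read off an SVD. A sketch with a hypothetical rank-deficient T:

```python
import numpy as np

rng = np.random.default_rng(2)

# With H1 = R(T*), H2 = N(T), K1 = R(T), K2 = N(T*), the matrix of T in
# bases adapted to these decompositions has only one nonzero block, and
# that block is invertible.
T = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # rank 3
U, s, Vh = np.linalg.svd(T)
r = int(np.sum(s > 1e-10))                # numerical rank of T

M = U.T @ T @ Vh.T                        # matrix of T in the adapted bases
T11 = M[:r, :r]                           # block acting R(T*) -> R(T)

assert np.allclose(M[r:, :], 0, atol=1e-8)   # rows into N(T*) vanish
assert np.allclose(M[:, r:], 0, atol=1e-8)   # columns from N(T) vanish
assert abs(np.linalg.det(T11)) > 1e-10       # the corner block is invertible
```

The general statement of Lemma 1 allows arbitrary orthogonal decompositions H1 ⊕ H2 and K1 ⊕ K2, which produces the fuller 2 × 2 block forms used in the Main Results section.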

Lemma 2 ([2,3]). Let T ∈ L(H, K) have a closed range and G ∈ L(K, H). Then the following statements are equivalent:
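The defining equation of a {1}-inverse (g-inverse) G of T is TGT = T. In finite dimensions, the whole family of {1}-inverses can be parametrized as G = T† + W − T†TWTT† with W arbitrary; a short check (the parametrization and the random T below are the finite-dimensional sketch, not the operator-theoretic statement of the lemma):

```python
import numpy as np

rng = np.random.default_rng(3)

# Every matrix of the form G = T^+ + W - T^+ T W T T^+ satisfies
# T G T = T, i.e. G is a {1}-inverse of T.
T = rng.standard_normal((4, 6))
Tp = np.linalg.pinv(T)

for _ in range(5):
    W = rng.standard_normal((6, 4))
    G = Tp + W - Tp @ T @ W @ T @ Tp
    assert np.allclose(T @ G @ T, T)      # defining equation of a {1}-inverse
```

That TGT = T + TWT − TWT = T follows by multiplying the parametrization on both sides by T and using TT†T = T.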

Main Results
Let T1 ∈ L(J, K), T2 ∈ L(I, J) and T3 ∈ L(H, I), where T1, T2, T3 and T1T2T3 are regular operators. From Lemma 1, we know that the operators T1, T2 and T3 have the following matrix forms with respect to the orthogonal sums of subspaces, where D = T1^(11)(T1^(11))* + T1^(12)(T1^(12))* is invertible on R(T1).
From Lemma 1, we know that the operator T1T2T3 has the corresponding matrix forms with respect to the orthogonal sums of subspaces. Combining (5)-(10) with the results in Lemma 2, it follows that there exist three bounded linear operators W, H and P, where W ∈ L(K, J) and W11, W12, W21 and W22 are arbitrary bounded linear operators on the appropriate spaces.
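The inclusion T3{1}T2{1}T1{1} ⊆ (T1T2T3){1} asks whether every product G3G2G1 of {1}-inverses is a {1}-inverse of T1T2T3. A finite-dimensional probe (a sketch; the parametrization of {1}-inverses and the shapes below are illustrative assumptions) samples random {1}-inverses and tests the defining equation. With T1, T3 of full column rank and T2 invertible, GiTi = I for each factor, so the inclusion provably holds for these shapes:

```python
import numpy as np

rng = np.random.default_rng(4)

def random_one_inverse(T, rng):
    """A random {1}-inverse G = T^+ + W - T^+ T W T T^+, so that TGT = T."""
    Tp = np.linalg.pinv(T)
    W = rng.standard_normal((T.shape[1], T.shape[0]))
    return Tp + W - Tp @ T @ W @ T @ Tp

# Shapes chosen so that T1, T3 have full column rank and T2 is invertible
# (almost surely); then every product G3 G2 G1 of {1}-inverses is itself
# a {1}-inverse of T1 T2 T3.
T1 = rng.standard_normal((5, 4))
T2 = rng.standard_normal((4, 4))
T3 = rng.standard_normal((4, 3))
T = T1 @ T2 @ T3

holds = all(
    np.allclose(T @ (random_one_inverse(T3, rng)
                     @ random_one_inverse(T2, rng)
                     @ random_one_inverse(T1, rng)) @ T, T)
    for _ in range(20)
)
print(holds)  # prints True for these shapes
```

For general shapes the probe can report False, which is precisely why the paper seeks necessary and sufficient conditions for the inclusion rather than assuming it.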