Preprint

Article

C-S and Strongly C-S Orthogonal Matrices

A peer-reviewed article of this preprint also exists. This version is not peer-reviewed.

Submitted: 18 December 2023
Posted: 19 December 2023
Abstract
In this paper, we present a new concept of generalized core orthogonality (called the C-S orthogonality) for two generalized core invertible matrices $A$ and $B$: $A$ is said to be C-S orthogonal to $B$ if $A^{\tiny\textcircled{S}}B=0$ and $BA^{\tiny\textcircled{S}}=0$, where $A^{\tiny\textcircled{S}}$ is the generalized core inverse of $A$. Characterizations of C-S orthogonal matrices and of the C-S additivity are provided, and the connection between the C-S orthogonality and the C-S partial order is given using their canonical forms. Moreover, the concept of the strongly C-S orthogonality is defined and characterized.
Subject: Computer Science and Mathematics - Algebra and Number Theory

MSC:  15A09

1. Introduction

As is well known, orthogonality comes in two forms: one-sided and two-sided. We say that $R(A)$ and $R(B)$ are orthogonal if $A^{*}B=0$; if $AB^{*}=0$, we say that $R(A^{*})$ and $R(B^{*})$ are orthogonal; and we say that $R(A^{*})$ and $R(B)$ are orthogonal if $AB=0$. If $AB=0$ and $BA=0$, then $A$ and $B$ are orthogonal, denoted by $A\perp B$. Notice that, when $A^{\#}$ exists and $AB=0$, where $A^{\#}$ is the group inverse of $A$, we have $A^{\#}B=A^{\#}AA^{\#}B=(A^{\#})^{2}AB=0$. It is also obvious that $A^{\#}B=0$ implies $AB=0$. Thus, when $A^{\#}$ exists, $A\perp B$ if and only if $A^{\#}B=0$ and $BA^{\#}=0$ (i.e. $A$ and $B$ are $\#$-orthogonal, denoted by $A\perp^{\#}B$). Hestenes [1] gave the concept of $*$-orthogonality: let $A,B\in\mathbb{C}^{m\times n}$; if $A^{*}B=0$ and $BA^{*}=0$ hold, then $A$ is $*$-orthogonal to $B$, denoted by $A\perp^{*}B$. For matrices, Hartwig and Styan [2] showed that if the dagger additivity $(A+B)^{\dagger}=A^{\dagger}+B^{\dagger}$ (where $A^{\dagger}$ is the Moore-Penrose inverse of $A$) and the rank additivity $\mathrm{rk}(A+B)=\mathrm{rk}(A)+\mathrm{rk}(B)$ both hold, then $A$ is $*$-orthogonal to $B$. Ferreyra and Malik [3] introduced core and strongly core orthogonal matrices by using the core inverse: let $A,B\in\mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A)\leq 1$, where $\mathrm{Ind}(A)$ is the index of $A$; if $A^{\tiny\textcircled{\#}}B=0$ and $BA^{\tiny\textcircled{\#}}=0$, then $A$ is core orthogonal to $B$, denoted by $A\perp^{\tiny\textcircled{\#}}B$. Matrices $A,B\in\mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A)\leq 1$ and $\mathrm{Ind}(B)\leq 1$ are strongly core orthogonal (denoted by $A\perp^{s,\tiny\textcircled{\#}}B$) if $A\perp^{\tiny\textcircled{\#}}B$ and $B\perp^{\tiny\textcircled{\#}}A$. In [3], it is shown that $A\perp^{s,\tiny\textcircled{\#}}B$ implies $(A+B)^{\tiny\textcircled{\#}}=A^{\tiny\textcircled{\#}}+B^{\tiny\textcircled{\#}}$ (core additivity).
In [4], Liu, Wang and Wang proved that $A,B\in\mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A)\leq 1$ and $\mathrm{Ind}(B)\leq 1$ are strongly core orthogonal if and only if $(A+B)^{\tiny\textcircled{\#}}=A^{\tiny\textcircled{\#}}+B^{\tiny\textcircled{\#}}$ and $A^{\tiny\textcircled{\#}}B=0$ (or $BA^{\tiny\textcircled{\#}}=0$) instead of $A\perp^{\tiny\textcircled{\#}}B$, which is more concise than Theorem 7.3 in [3]. Ferreyra and Malik in [3] also proved that if $A$ is strongly core orthogonal to $B$, then $\mathrm{rk}(A+B)=\mathrm{rk}(A)+\mathrm{rk}(B)$ and $(A+B)^{\tiny\textcircled{\#}}=A^{\tiny\textcircled{\#}}+B^{\tiny\textcircled{\#}}$, but whether the converse holds was left as an open question. In [4], Liu, Wang and Wang solved the problem completely. Furthermore, they also gave some new equivalent conditions for the strong core orthogonality, related to the minus partial order and certain Hermitian matrices.
On the basis of core orthogonal matrices, Mosić, Dolinar, Kuzma and Marovt [5] extended the concept of core orthogonality and presented the new concept of core-EP orthogonality: $A$ is said to be core-EP orthogonal to $B$ if $A^{\tiny\textcircled{d}}B=0$ and $BA^{\tiny\textcircled{d}}=0$, where $A^{\tiny\textcircled{d}}$ is the core-EP inverse of $A$. A number of characterizations of the core-EP orthogonality were proved in [5], and, applying it, the concept and characterizations of the strongly core-EP orthogonality were introduced there as well.
In [7], Wang and Liu introduced the generalized core inverse (called the C-S inverse) and gave some properties and characterizations of this inverse. By means of the C-S inverse, a binary relation (denoted by $A\leq^{\tiny\textcircled{S}}B$) and a partial order (called the C-S partial order and denoted by $A\leq^{CS}B$) were given.
Motivated by these ideas, in this paper we give the concepts of the C-S orthogonality and the strongly C-S orthogonality and discuss their characterizations. The connection between the C-S partial order and the C-S orthogonality is established. Moreover, we obtain some characterizing properties of C-S orthogonal matrices when $A$ is EP.

2. Preliminaries

For $A,X\in\mathbb{C}^{n\times n}$, where $k$ is the index of $A$, we consider the following equations:
(1) $AXA=A$;
(2) $XAX=X$;
(3) $(AX)^{*}=AX$;
(4) $(XA)^{*}=XA$;
(5) $AX=XA$;
(6) $XA^{2}=A$;
(7) $AX^{2}=X$;
(8) $A^{2}X=A$;
(9) $AX^{2}=X$;
(10) $XA^{k+1}=A^{k}$.
The set of all $X\in\mathbb{C}^{n\times n}$ which satisfy equations $(i),(j),\ldots,(l)$ among Eqs. (1)-(10) is denoted by $A\{i,j,\ldots,l\}$. If there exists
$$A^{\dagger}\in A\{1,2,3,4\},$$
then $A^{\dagger}$ is called the Moore-Penrose inverse of $A$, and it is unique. It was introduced by Moore [6] and improved by Bjerhammar [8] and Penrose [9]. Furthermore, based on the Moore-Penrose inverse, $A$ is said to be EP if and only if $AA^{\dagger}=A^{\dagger}A$. If there exists
$$A^{\#}\in A\{1,2,5\},$$
then it is called the group inverse of $A$, and $A^{\#}$ is unique [10]. If there exists
$$A^{\tiny\textcircled{\#}}\in A\{1,2,3,6,7\},$$
then $A^{\tiny\textcircled{\#}}$ is called the core inverse of $A$ [11]. And if there exists
$$A^{\tiny\textcircled{d}}\in A\{3,9,10\},$$
then $A^{\tiny\textcircled{d}}$ is called the core-EP inverse of $A$ [12]. Moreover, $\mathbb{C}_{n}^{d}$ is the set of all core-EP invertible matrices in $\mathbb{C}^{n\times n}$.
Based on this, we review the concepts of partial orders [3]. The symbols $\mathbb{C}_{n}^{GM}$ and $\mathbb{C}_{n}^{EP}$ will stand for the subsets of $\mathbb{C}^{n\times n}$ consisting of group invertible and EP matrices, respectively.
Definition 1 
([13,14,15,16]). Let $A,B\in\mathbb{C}^{n\times n}$.
(i) $A$ is below $B$ under the minus partial order (written as $A\leq^{-}B$) if
$$AA^{(1)}=BA^{(1)},\quad A^{(1)}A=A^{(1)}B,$$
for some inner inverse $A^{(1)}\in A\{1\}$.
(ii) $A$ is below $B$ under the star partial order (written as $A\leq^{*}B$) if
$$AA^{*}=BA^{*},\quad A^{*}A=A^{*}B.$$
(iii) $A$ is below $B$ under the sharp partial order (written as $A\leq^{\#}B$) if
$$AA^{\#}=BA^{\#},\quad A^{\#}A=A^{\#}B.$$
(iv) Suppose $A,B\in\mathbb{C}_{n}^{GM}$; then $A$ is below $B$ under the core partial order (written as $A\leq^{\tiny\textcircled{\#}}B$) if
$$AA^{\tiny\textcircled{\#}}=BA^{\tiny\textcircled{\#}},\quad A^{\tiny\textcircled{\#}}A=A^{\tiny\textcircled{\#}}B.$$
Definition 2 
([7]). Let $A\in\mathbb{C}^{n\times n}$ and $\mathrm{Ind}(A)=k$. Then the C-S inverse of $A$ is defined as the solution of
$$XA^{k+1}=A^{k},\quad (A^{k}X^{k})^{*}=A^{k}X^{k},\quad A-X=A^{k}X^{k}(A-X),$$
and $X$ is denoted by $A^{\tiny\textcircled{S}}$.
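As a numerical sanity check of these three equations (a sketch only: the pair $A$, $X$ below is the index-1 matrix of Example 1 in Section 4, with the minus signs that the defining equations force), Definition 2 can be verified directly with NumPy:

```python
import numpy as np

# Index-1 matrix A and a candidate X for its C-S inverse (from Example 1 below,
# with signs restored); we check the three defining equations with k = Ind(A) = 1.
A = np.array([[1., 0., 0., 1.],
              [0., 1., 0., 1.],
              [0., 0., 0., 0.],
              [0., 0., 0., 1.]])
X = np.array([[1., 0., 0., -1.],
              [0., 1., 0., -1.],
              [0., 0., 0., 0.],
              [0., 0., 0., 1.]])
k = 1
Ak = np.linalg.matrix_power(A, k)
Xk = np.linalg.matrix_power(X, k)

assert np.allclose(X @ np.linalg.matrix_power(A, k + 1), Ak)   # X A^{k+1} = A^k
assert np.allclose((Ak @ Xk).conj().T, Ak @ Xk)                # A^k X^k is Hermitian
assert np.allclose(A - X, Ak @ Xk @ (A - X))                   # A - X = A^k X^k (A - X)
```

Note that the third equation only constrains $X$ off the "core part" of $A$, which is why the nilpotent block of $A$ reappears unchanged in the canonical form of $A^{\tiny\textcircled{S}}$ below.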
Lemma 1 
([17]). Let $A\in\mathbb{C}_{n}^{d}$, and let $A=A_{1}+A_{2}$ be the core-EP decomposition of $A$. Then there exists a unitary matrix $U$ such that
$$A_{1}=U\begin{pmatrix}T&S\\0&0\end{pmatrix}U^{*}\quad\text{and}\quad A_{2}=U\begin{pmatrix}0&0\\0&N\end{pmatrix}U^{*},$$
where $T$ is non-singular and $N$ is nilpotent.
Then the core-EP decomposition of $A$ is
$$A=U\begin{pmatrix}T&S\\0&N\end{pmatrix}U^{*}.\qquad(1)$$
By applying Lemma 1, Wang and Liu in [7] obtained the following canonical form for the C-S inverse of $A$:
$$A^{\tiny\textcircled{S}}=U\begin{pmatrix}T^{-1}&0\\0&N\end{pmatrix}U^{*}.\qquad(2)$$

3. The C-S orthogonality and its consequences

Firstly, we give the concept of the C-S orthogonality.
Definition 3. 
Let $A,B\in\mathbb{C}^{n\times n}$ and $\mathrm{Ind}(A)=k$. If
$$A^{\tiny\textcircled{S}}B=0,\quad BA^{\tiny\textcircled{S}}=0,$$
then $A$ is generalized core orthogonal to $B$; we say that $A$ is C-S orthogonal to $B$ and denote this by $A\perp^{\tiny\textcircled{S}}B$.
If $A,B\in\mathbb{C}^{n\times n}$, then
$$AB=0\Leftrightarrow R(B)\subseteq N(A).\qquad(3)$$
Remark 1. 
Let $A,B\in\mathbb{C}^{n\times n}$ and $\mathrm{Ind}(A)=k$. Notice that, if $BA=0$, then $BA^{k}=0$, and hence $B(A-A^{\tiny\textcircled{S}})=BA^{k}(A^{\tiny\textcircled{S}})^{k}(A-A^{\tiny\textcircled{S}})=0$, which gives $BA^{\tiny\textcircled{S}}=BA=0$. Conversely, if $BA^{\tiny\textcircled{S}}=0$, then $BA^{k}=BA^{\tiny\textcircled{S}}A^{k+1}=0$, so $B(A-A^{\tiny\textcircled{S}})=BA^{k}(A^{\tiny\textcircled{S}})^{k}(A-A^{\tiny\textcircled{S}})=0$, which implies $BA=0$. It follows that
$$BA=0\Leftrightarrow BA^{\tiny\textcircled{S}}=0.$$
Applying Definition 3, we can therefore also say that $A$ is generalized core orthogonal to $B$ if
$$A^{\tiny\textcircled{S}}B=0,\quad BA=0.$$
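The equivalence $BA=0\Leftrightarrow BA^{\tiny\textcircled{S}}=0$ can also be observed numerically. In this sketch (using the same hypothetical pair $A$, $A^{\tiny\textcircled{S}}$ as before), a matrix annihilating $A$ on the left also annihilates $A^{\tiny\textcircled{S}}$, and a matrix that does not, does not:

```python
import numpy as np

A = np.array([[1., 0., 0., 1.],
              [0., 1., 0., 1.],
              [0., 0., 0., 0.],
              [0., 0., 0., 1.]])
As = np.array([[1., 0., 0., -1.],   # C-S inverse of A (signs restored)
               [0., 1., 0., -1.],
               [0., 0., 0., 0.],
               [0., 0., 0., 1.]])

B = np.outer(np.eye(4)[2], np.eye(4)[2])   # B = e3 e3^T, so BA = 0
C = np.outer(np.eye(4)[3], np.eye(4)[3])   # C = e4 e4^T, so CA != 0

# BA = 0 and B A^{circled S} = 0
assert np.allclose(B @ A, 0) and np.allclose(B @ As, 0)
# CA != 0 and C A^{circled S} != 0, matching the equivalence in Remark 1
assert not np.allclose(C @ A, 0) and not np.allclose(C @ As, 0)
```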
Next, we study the range and null space of matrices which are C-S orthogonal. Firstly, we give some characterizations of the C-S inverse as follows.
Lemma 2. 
Let $A\in\mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A)=k$. Then $(A^{\tiny\textcircled{S}})^{k}=(A^{k})^{\tiny\textcircled{S}}$.
Proof. 
Let (1) be the core-EP decomposition of $A$, where $T$ is nonsingular with $t:=\mathrm{rk}(T)=\mathrm{rk}(A^{k})$ and $N$ is nilpotent of index $k$. Then
$$A^{k}=U\begin{pmatrix}T^{k}&\tilde{S}\\0&0\end{pmatrix}U^{*},$$
where $\tilde{S}=\sum_{i=1}^{k}T^{k-i}SN^{i-1}$. And by (2), we have
$$A^{\tiny\textcircled{S}}=U\begin{pmatrix}T^{-1}&0\\0&N\end{pmatrix}U^{*}.$$
Then
$$(A^{\tiny\textcircled{S}})^{k}=U\begin{pmatrix}(T^{-1})^{k}&0\\0&0\end{pmatrix}U^{*}$$
and
$$(A^{k})^{\tiny\textcircled{S}}=U\begin{pmatrix}(T^{k})^{-1}&0\\0&0\end{pmatrix}U^{*}.$$
Since $(T^{-1})^{k}=(T^{k})^{-1}$, we have $(A^{\tiny\textcircled{S}})^{k}=(A^{k})^{\tiny\textcircled{S}}$. □
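As an illustrative sketch of Lemma 2 (on a small hypothetical index-2 matrix built directly from the canonical form, with $U=I$, $T=(1)$, $S=(1\ 0)$ and a nilpotent $N$ of index 2), one can check $(A^{\tiny\textcircled{S}})^{k}=(A^{k})^{\tiny\textcircled{S}}$ numerically, using the three equations of Definition 2 as the test:

```python
import numpy as np

def cs_conditions(A, X, k):
    """Check the three defining equations of the C-S inverse (Definition 2)."""
    Ak = np.linalg.matrix_power(A, k)
    Xk = np.linalg.matrix_power(X, k)
    return (np.allclose(X @ np.linalg.matrix_power(A, k + 1), Ak)
            and np.allclose((Ak @ Xk).conj().T, Ak @ Xk)
            and np.allclose(A - X, Ak @ Xk @ (A - X)))

# A = [[T, S], [0, N]] with T = (1), S = (1 0), N = [[0,1],[0,0]]; Ind(A) = 2.
A  = np.array([[1., 1., 0.],
               [0., 0., 1.],
               [0., 0., 0.]])
# Canonical form of the C-S inverse: [[T^{-1}, 0], [0, N]].
As = np.array([[1., 0., 0.],
               [0., 0., 1.],
               [0., 0., 0.]])

assert cs_conditions(A, As, 2)                       # As is the C-S inverse of A
A2, As2 = A @ A, np.linalg.matrix_power(As, 2)
assert cs_conditions(A2, As2, 1)                     # (A^S)^2 is the C-S inverse of A^2
```

By the uniqueness of the C-S inverse, the second assertion is exactly the statement $(A^{\tiny\textcircled{S}})^{2}=(A^{2})^{\tiny\textcircled{S}}$.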
By the two formulas above, it is easy to get the following lemma.
Lemma 3. 
Let $A\in\mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A)=k$. Then $A^{k}$ is core invertible. In this case, $(A^{\tiny\textcircled{S}})^{k}=(A^{k})^{\tiny\textcircled{\#}}$.
Remark 2. 
The core inverse of a square matrix of index at most 1 satisfies the following properties [3]:
$$R(A^{\tiny\textcircled{\#}})=R((A^{\tiny\textcircled{\#}})^{*})=R(A),\quad N(A^{\tiny\textcircled{\#}})=N((A^{\tiny\textcircled{\#}})^{*})=N(A^{*}).$$
When $A$ is a square matrix with $\mathrm{Ind}(A)=k$, it has been proved in Lemma 3 that $A^{k}$ is core invertible, so we have
$$R((A^{k})^{\tiny\textcircled{S}})=R(((A^{k})^{\tiny\textcircled{S}})^{*})=R(A^{k}),\quad N((A^{k})^{\tiny\textcircled{S}})=N(((A^{k})^{\tiny\textcircled{S}})^{*})=N((A^{k})^{*}).$$
Theorem 1. 
Let $A,B\in\mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A)=k$. Then the following are equivalent:
(i) $A^{k}\perp^{\tiny\textcircled{S}}B$;
(ii) $(A^{k})^{*}B=0$, $BA^{k}=0$;
(iii) $R(B)\subseteq N((A^{k})^{*})$, $R(A^{k})\subseteq N(B)$;
(iv) $R(B)\subseteq N((A^{k})^{\tiny\textcircled{S}})$, $R((A^{k})^{\tiny\textcircled{S}})\subseteq N(B)$;
(v) $(A^{k})^{*}B^{*}=0$, $B^{*}A^{k}=0$;
(vi) $R(B^{*})\subseteq N((A^{k})^{*})$, $R(A^{k})\subseteq N(B^{*})$;
(vii) $R(B^{*})\subseteq N((A^{k})^{\tiny\textcircled{S}})$, $R((A^{k})^{\tiny\textcircled{S}})\subseteq N(B^{*})$.
Proof. 
$(i)\Rightarrow(ii)$: From $(A^{k})^{\tiny\textcircled{S}}B=0$, we have
$$(A^{k})^{\tiny\textcircled{S}}B=0\Rightarrow A^{k}(A^{\tiny\textcircled{S}})^{k}B=0\Rightarrow B^{*}(A^{k}(A^{\tiny\textcircled{S}})^{k})^{*}=0\Rightarrow B^{*}A^{k}(A^{\tiny\textcircled{S}})^{k}=0.$$
By Lemma 3, $A^{k}$ is core invertible, which implies $A^{k}(A^{\tiny\textcircled{S}})^{k}A^{k}=A^{k}$. In consequence, we have $B^{*}A^{k}=B^{*}A^{k}(A^{\tiny\textcircled{S}})^{k}A^{k}=0$, i.e. $(A^{k})^{*}B=0$. By using $B(A^{k})^{\tiny\textcircled{S}}=0$ together with $(A^{k})^{\tiny\textcircled{S}}(A^{k})^{2}=A^{k}$, we get
$$B(A^{k})^{\tiny\textcircled{S}}=0\Rightarrow BA^{k}=B(A^{k})^{\tiny\textcircled{S}}(A^{k})^{2}=0.$$
$(ii)\Rightarrow(iii)$: It is evident.
$(iii)\Rightarrow(iv)$: It follows from Remark 2 that $R(B)\subseteq N((A^{k})^{\tiny\textcircled{S}})$ and $R((A^{k})^{\tiny\textcircled{S}})\subseteq N(B)$.
$(iv)\Rightarrow(i)$: It is evident.
Applying conjugate transposition to $(ii)$, we verify that $(v)$, $(vi)$ and $(vii)$ are equivalent to the above. □
In view of $(i)$ and $(ii)$ in Theorem 1, we obtain $A^{k}\perp^{\tiny\textcircled{S}}B^{*}$ from $(v)$. Using Lemma 4.4 in [3], we have that $(i)$-$(vii)$ in Theorem 1 and $A^{k}\perp^{\tiny\textcircled{\#}}B$ are equivalent, i.e. $A^{k}\perp^{\tiny\textcircled{S}}B$ and $A^{k}\perp^{\tiny\textcircled{\#}}B$ are equivalent. And from Lemma 2.1 in [5], it is seen that $A^{k}\perp^{\tiny\textcircled{\#}}B$ is equivalent to $A\perp^{\tiny\textcircled{d}}B$ and to $A\perp^{\tiny\textcircled{d}}B^{*}$. As a consequence of the theorem we have the following:
Corollary 1. 
Let $A,B\in\mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A)=k$. Then the following are equivalent:
(i) $A^{k}\perp^{\tiny\textcircled{S}}B$;
(ii) $A^{k}\perp^{\tiny\textcircled{S}}B^{*}$;
(iii) $A^{k}\perp^{\tiny\textcircled{\#}}B$;
(iv) $A\perp^{\tiny\textcircled{d}}B$;
(v) $A\perp^{\tiny\textcircled{d}}B^{*}$.
Lemma 4. 
Let $A,B\in\mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A)=k$ and $\mathrm{Ind}(B)=l$. If $A^{k}B^{l}=0$, then
(i) $R(A^{k})\cap R(B^{l})=\{0\}$;
(ii) $R((A^{k})^{*})\cap R((B^{l})^{*})=\{0\}$;
(iii) $N(A^{k}+B^{l})=N(A^{k})\cap N(B^{l})$;
(iv) $N((A^{k})^{*}+(B^{l})^{*})=N((A^{k})^{*})\cap N((B^{l})^{*})$.
Proof. 
(i) By applying (3), we have $A^{k}B^{l}=0\Leftrightarrow R(B^{l})\subseteq N(A^{k})$. Then, by using the fact that $A^{k}$ has index at most 1, we get
$$R(A^{k})\cap R(B^{l})\subseteq R(A^{k})\cap N(A^{k})=\{0\}.$$
Then $R(A^{k})\cap R(B^{l})=\{0\}$.
(ii) From $A^{k}B^{l}=0$, we have $(B^{l})^{*}(A^{k})^{*}=0$. Since $(B^{l})^{*}$ has index at most 1, we can prove $(ii)$ by $(i)$.
(iii) Let $X\in N(A^{k}+B^{l})$; then $(A^{k}+B^{l})X=0$, i.e. $A^{k}X=-B^{l}X$. Since
$$A^{k}X=(A^{\tiny\textcircled{S}})^{k}A^{2k}X=(A^{\tiny\textcircled{S}})^{k}A^{k}(A^{k}X)=-(A^{\tiny\textcircled{S}})^{k}A^{k}B^{l}X=0$$
and $B^{l}X=-A^{k}X=0$, we get $X\in N(A^{k})\cap N(B^{l})$, which implies $N(A^{k}+B^{l})\subseteq N(A^{k})\cap N(B^{l})$.
On the other hand, it is obvious that $N(A^{k})\cap N(B^{l})\subseteq N(A^{k}+B^{l})$. Then $N(A^{k}+B^{l})=N(A^{k})\cap N(B^{l})$.
(iv) From $A^{k}B^{l}=0$, we have $(B^{l})^{*}(A^{k})^{*}=0$. By $(iii)$, it is easy to check that $(iv)$ is true. □
Theorem 2. 
Let $A,B\in\mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A)=k$ and $\mathrm{Ind}(B)=l$. If $A\perp^{\tiny\textcircled{S}}B$, then
(i) $R(A^{k})\cap R(B^{l})=\{0\}$;
(ii) $R((A^{k})^{*})\cap R((B^{l})^{*})=\{0\}$;
(iii) $N(A^{k}+B^{l})=N(A^{k})\cap N(B^{l})$;
(iv) $N((A^{k})^{*}+(B^{l})^{*})=N((A^{k})^{*})\cap N((B^{l})^{*})$;
(v) $R((A^{k})^{*})\cap R(B^{l})=\{0\}$;
(vi) $R(A^{k})\cap R((B^{l})^{*})=\{0\}$;
(vii) $N((A^{k})^{*}+B^{l})=N((A^{k})^{*})\cap N(B^{l})$;
(viii) $N(A^{k}+(B^{l})^{*})=N(A^{k})\cap N((B^{l})^{*})$.
Proof. 
By applying $A\perp^{\tiny\textcircled{S}}B$, i.e. $A^{\tiny\textcircled{S}}B=0$ and $BA^{\tiny\textcircled{S}}=0$, we obtain that
$$(A^{k})^{*}B=(B^{*}A^{k})^{*}=(B^{*}A^{k}(A^{\tiny\textcircled{S}})^{k}A^{k})^{*}=(A^{k})^{*}A^{k}(A^{\tiny\textcircled{S}})^{k}B=0$$
and
$$BA^{k}=BA^{\tiny\textcircled{S}}A^{k+1}=0.$$
It is obvious that $(A^{k})^{*}B^{l}=0$ and $B^{l}A^{k}=0$. In consequence, the statements (i)-(viii) follow from Lemma 4. □
Using the core-EP decomposition, we obtain the following characterization of C-S orthogonal matrices.
Theorem 3. 
Let $A,B\in\mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A)=k$. Then the following are equivalent:
(i) $A\perp^{\tiny\textcircled{S}}B$;
(ii) There exist nonsingular matrices $T_{1},T_{2}$, nilpotent matrices $\begin{pmatrix}0&N_{2}\\0&N_{4}\end{pmatrix}$, $N_{5}$ and a unitary matrix $U$ such that
$$A=U\begin{pmatrix}T_{1}&S_{1}&R_{1}\\0&0&N_{2}\\0&0&N_{4}\end{pmatrix}U^{*},\quad B=U\begin{pmatrix}0&0&0\\0&T_{2}&S_{2}\\0&0&N_{5}\end{pmatrix}U^{*},$$
where $N_{2}N_{5}=T_{2}N_{2}+S_{2}N_{4}=0$ and $N_{4}\perp N_{5}$.
Proof. 
$(i)\Rightarrow(ii)$: Let the core-EP decomposition of $A$ be
$$A=U\begin{pmatrix}T&S\\0&N\end{pmatrix}U^{*},$$
where $T$ is nonsingular and $N$ is nilpotent. Then the decomposition of $A^{\tiny\textcircled{S}}$ is (2). Write
$$B=U\begin{pmatrix}B_{1}&B_{2}\\B_{3}&B_{4}\end{pmatrix}U^{*}.$$
Since
$$A^{\tiny\textcircled{S}}B=U\begin{pmatrix}T^{-1}&0\\0&N\end{pmatrix}\begin{pmatrix}B_{1}&B_{2}\\B_{3}&B_{4}\end{pmatrix}U^{*}=U\begin{pmatrix}T^{-1}B_{1}&T^{-1}B_{2}\\NB_{3}&NB_{4}\end{pmatrix}U^{*}=0,$$
it follows that $T^{-1}B_{1}=0$ and $T^{-1}B_{2}=0$, that is, $B_{1}=B_{2}=0$.
Since
$$BA^{\tiny\textcircled{S}}=U\begin{pmatrix}0&0\\B_{3}&B_{4}\end{pmatrix}\begin{pmatrix}T^{-1}&0\\0&N\end{pmatrix}U^{*}=U\begin{pmatrix}0&0\\B_{3}T^{-1}&B_{4}N\end{pmatrix}U^{*}=0,$$
it follows that $B_{3}T^{-1}=0$, and we have $B_{3}=0$. Therefore,
$$B=U\begin{pmatrix}0&0\\0&B_{4}\end{pmatrix}U^{*},$$
where $NB_{4}=B_{4}N=0$, i.e. $B_{4}\perp N$.
Now, let
$$B_{4}=U_{2}\begin{pmatrix}T_{2}&S_{2}\\0&N_{5}\end{pmatrix}U_{2}^{*}$$
be the core-EP decomposition of $B_{4}$, and replace $U$ by $U_{1}\begin{pmatrix}I&0\\0&U_{2}\end{pmatrix}$, where $U_{1}$ denotes the original unitary matrix. Partition $N$ according to the partition of $B_{4}$:
$$N=U_{2}\begin{pmatrix}N_{1}&N_{2}\\N_{3}&N_{4}\end{pmatrix}U_{2}^{*}.$$
By applying $B_{4}\perp N$, we get
$$NB_{4}=U_{2}\begin{pmatrix}N_{1}&N_{2}\\N_{3}&N_{4}\end{pmatrix}\begin{pmatrix}T_{2}&S_{2}\\0&N_{5}\end{pmatrix}U_{2}^{*}=U_{2}\begin{pmatrix}N_{1}T_{2}&N_{1}S_{2}+N_{2}N_{5}\\N_{3}T_{2}&N_{3}S_{2}+N_{4}N_{5}\end{pmatrix}U_{2}^{*}=0,$$
which leads to $N_{1}T_{2}=N_{3}T_{2}=0$. Thus $N_{1}=N_{3}=0$ and $N_{2}N_{5}=N_{4}N_{5}=0$. And
$$B_{4}N=U_{2}\begin{pmatrix}T_{2}&S_{2}\\0&N_{5}\end{pmatrix}\begin{pmatrix}0&N_{2}\\0&N_{4}\end{pmatrix}U_{2}^{*}=U_{2}\begin{pmatrix}0&T_{2}N_{2}+S_{2}N_{4}\\0&N_{5}N_{4}\end{pmatrix}U_{2}^{*}=0,$$
which implies that $T_{2}N_{2}+S_{2}N_{4}=0$ and $N_{5}N_{4}=0$. Then
$$A=U\begin{pmatrix}T_{1}&S_{1}&R_{1}\\0&0&N_{2}\\0&0&N_{4}\end{pmatrix}U^{*},\quad B=U\begin{pmatrix}0&0&0\\0&T_{2}&S_{2}\\0&0&N_{5}\end{pmatrix}U^{*},$$
where $N_{2}N_{5}=T_{2}N_{2}+S_{2}N_{4}=0$ and $N_{4}\perp N_{5}$ (here $T_{1}=T$ and $S=\begin{pmatrix}S_{1}&R_{1}\end{pmatrix}$ is partitioned accordingly).
$(ii)\Rightarrow(i)$: Let
$$A^{\tiny\textcircled{S}}=U\begin{pmatrix}T_{1}^{-1}&0&0\\0&0&N_{2}\\0&0&N_{4}\end{pmatrix}U^{*}.$$
Using $N_{2}N_{5}=T_{2}N_{2}+S_{2}N_{4}=0$ and $N_{4}\perp N_{5}$, we can get
$$A^{\tiny\textcircled{S}}B=U\begin{pmatrix}T_{1}^{-1}&0&0\\0&0&N_{2}\\0&0&N_{4}\end{pmatrix}\begin{pmatrix}0&0&0\\0&T_{2}&S_{2}\\0&0&N_{5}\end{pmatrix}U^{*}=U\begin{pmatrix}0&0&0\\0&0&N_{2}N_{5}\\0&0&N_{4}N_{5}\end{pmatrix}U^{*}=0$$
and
$$BA^{\tiny\textcircled{S}}=U\begin{pmatrix}0&0&0\\0&T_{2}&S_{2}\\0&0&N_{5}\end{pmatrix}\begin{pmatrix}T_{1}^{-1}&0&0\\0&0&N_{2}\\0&0&N_{4}\end{pmatrix}U^{*}=U\begin{pmatrix}0&0&0\\0&0&T_{2}N_{2}+S_{2}N_{4}\\0&0&N_{5}N_{4}\end{pmatrix}U^{*}=0.$$
Thus $A\perp^{\tiny\textcircled{S}}B$. □
Next, based on the C-S partial order, we give some relations between the C-S orthogonality and the C-S partial order.
Lemma 5 
([7]). Let $A,B\in\mathbb{C}^{n\times n}$. There is a binary relation such that
$$A\leq^{\tiny\textcircled{S}}B:\quad A(A^{\tiny\textcircled{S}})^{*}=B(A^{\tiny\textcircled{S}})^{*},\quad (A^{\tiny\textcircled{S}})^{*}A=(A^{\tiny\textcircled{S}})^{*}B.$$
In this case, there exists a unitary matrix $U$ such that
$$A=U\begin{pmatrix}T&S\\0&N\end{pmatrix}U^{*},\quad B=U\begin{pmatrix}T&S\\0&B_{4}\end{pmatrix}U^{*},$$
where $T$ is invertible, $N$ is nilpotent, and $N\leq^{*}B_{4}$.
Lemma 6 
([7]). Let $A,B\in\mathbb{C}^{n\times n}$. The partial order "$\leq^{CS}$" is defined as
$$A\leq^{CS}B:\quad A\leq^{\tiny\textcircled{S}}B,\quad BA^{*}AA^{\tiny\textcircled{d}}=AA^{*}AA^{\tiny\textcircled{d}}.$$
We call it the C-S partial order.
Theorem 4. 
Let $A,B\in\mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A)=k$. Then the following are equivalent:
(i) $A\perp^{\tiny\textcircled{S}}B$, $B^{*}A^{*}AA^{\tiny\textcircled{d}}=0$;
(ii) $A\leq^{CS}A+B^{*}$.
Proof. 
$(i)\Rightarrow(ii)$: Let $A\perp^{\tiny\textcircled{S}}B$, i.e. $A^{\tiny\textcircled{S}}B=0$ and $BA^{\tiny\textcircled{S}}=0$. Then $B^{*}(A^{\tiny\textcircled{S}})^{*}=0$ and $(A^{\tiny\textcircled{S}})^{*}B^{*}=0$.
Since
$$(A^{\tiny\textcircled{S}})^{*}(A+B^{*})-(A^{\tiny\textcircled{S}})^{*}A=(A^{\tiny\textcircled{S}})^{*}B^{*}=0$$
and
$$(A+B^{*})(A^{\tiny\textcircled{S}})^{*}-A(A^{\tiny\textcircled{S}})^{*}=B^{*}(A^{\tiny\textcircled{S}})^{*}=0,$$
we have $A(A^{\tiny\textcircled{S}})^{*}=(A+B^{*})(A^{\tiny\textcircled{S}})^{*}$ and $(A^{\tiny\textcircled{S}})^{*}A=(A^{\tiny\textcircled{S}})^{*}(A+B^{*})$, which implies $A\leq^{\tiny\textcircled{S}}A+B^{*}$.
By applying $B^{*}A^{*}AA^{\tiny\textcircled{d}}=0$, we have $(A+B^{*})A^{*}AA^{\tiny\textcircled{d}}=AA^{*}AA^{\tiny\textcircled{d}}$.
Then $A\leq^{CS}A+B^{*}$ is established.
$(ii)\Rightarrow(i)$: Let $A\leq^{CS}A+B^{*}$, i.e. $(A^{\tiny\textcircled{S}})^{*}(A+B^{*})=(A^{\tiny\textcircled{S}})^{*}A$, $(A+B^{*})(A^{\tiny\textcircled{S}})^{*}=A(A^{\tiny\textcircled{S}})^{*}$ and $(A+B^{*})A^{*}AA^{\tiny\textcircled{d}}=AA^{*}AA^{\tiny\textcircled{d}}$. It is clear that $A^{\tiny\textcircled{S}}B=0$, $BA^{\tiny\textcircled{S}}=0$ and $B^{*}A^{*}AA^{\tiny\textcircled{d}}=0$. It follows that $A\perp^{\tiny\textcircled{S}}B$. □
When $A$ is an EP matrix, we have a more refined result, which reduces to well-known characterizations of orthogonality in the usual sense.
Theorem 5. 
Let $A\in\mathbb{C}_{n}^{EP}$. Then the following are equivalent:
(i) $A\perp^{\tiny\textcircled{S}}B$;
(ii) $A\perp^{\tiny\textcircled{\#}}B$;
(iii) $A\perp^{*}B$;
(iv) $A\perp B$;
(v) There exist nonsingular matrices $T_{1},T_{2}$, a nilpotent matrix $N$ and a unitary matrix $U$ such that
$$A=U\begin{pmatrix}T_{1}&0&0\\0&0&0\\0&0&0\end{pmatrix}U^{*},\quad B=U\begin{pmatrix}0&0&0\\0&T_{2}&S\\0&0&N\end{pmatrix}U^{*}.$$
Proof. 
Since $A\in\mathbb{C}_{n}^{EP}$, the decompositions of $A$ and $A^{\tiny\textcircled{S}}$ are
$$A=U\begin{pmatrix}T_{1}&0&0\\0&0&0\\0&0&0\end{pmatrix}U^{*},\quad A^{\tiny\textcircled{S}}=U\begin{pmatrix}T_{1}^{-1}&0&0\\0&0&0\\0&0&0\end{pmatrix}U^{*},$$
where $T_{1}$ is nonsingular and $U$ is unitary. Then $A^{\tiny\textcircled{S}}=A^{\tiny\textcircled{\#}}$. It is clear that $A\perp^{\tiny\textcircled{S}}B$ is equivalent to $A\perp^{\tiny\textcircled{\#}}B$. It follows from Corollary 4.8 in [3] that $(i)$-$(v)$ are equivalent. □
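For EP matrices the C-S inverse collapses to the core inverse, which in turn coincides with the Moore-Penrose inverse, so Theorem 5 can be illustrated on a small Hermitian (hence EP) matrix. This is only a sketch with hypothetical matrices:

```python
import numpy as np

A = np.array([[2., 0.],
              [0., 0.]])          # Hermitian, hence EP; Ind(A) = 1
As = np.array([[0.5, 0.],
               [0., 0.]])         # canonical C-S inverse: T^{-1} on the core part

# For an EP matrix the C-S inverse coincides with the Moore-Penrose inverse.
assert np.allclose(As, np.linalg.pinv(A))

B = np.array([[0., 0.],
              [0., 3.]])
# All four orthogonality notions of Theorem 5 hold simultaneously:
for P in (As @ B, B @ As, A.conj().T @ B, B @ A.conj().T, A @ B, B @ A):
    assert np.allclose(P, 0)
```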

4. The strongly C-S orthogonality and its consequences

In this section we consider the strongly C-S orthogonality, a relation which, unlike the C-S orthogonality, is symmetric.
Definition 4. 
Let $A,B\in\mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A)=\mathrm{Ind}(B)=k$. If
$$A\perp^{\tiny\textcircled{S}}B,\quad B\perp^{\tiny\textcircled{S}}A,$$
then $A$ and $B$ are said to be strongly C-S orthogonal, denoted by
$$A\perp^{s,\tiny\textcircled{S}}B.$$
Remark 3. 
Applying Remark 1, we have that $A\perp^{\tiny\textcircled{S}}B$ is equivalent to $A^{\tiny\textcircled{S}}B=0$, $BA=0$. Since $A^{\tiny\textcircled{S}}B=0$ and $A^{\tiny\textcircled{S}}B^{\tiny\textcircled{S}}=0$ are equivalent, it is interesting to observe that $A\perp^{\tiny\textcircled{S}}B\Leftrightarrow A^{\tiny\textcircled{S}}B^{\tiny\textcircled{S}}=0,\ BA=0$. Then $A\perp^{s,\tiny\textcircled{S}}B$ is equivalent to $A^{\tiny\textcircled{S}}B^{\tiny\textcircled{S}}=B^{\tiny\textcircled{S}}A^{\tiny\textcircled{S}}=0$ and $BA=AB=0$. Therefore, the strongly C-S orthogonality can also be defined by another condition, namely
$$A\perp^{s,\tiny\textcircled{S}}B\Leftrightarrow A^{\tiny\textcircled{S}}\perp B^{\tiny\textcircled{S}},\ A\perp B\Leftrightarrow B^{\tiny\textcircled{S}}\perp A^{\tiny\textcircled{S}},\ A\perp B.$$
Theorem 6. 
Let $A,B\in\mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A)=\mathrm{Ind}(B)=k$. Then the following statements are equivalent:
(i) $A\perp^{s,\tiny\textcircled{S}}B$;
(ii) There exist nonsingular matrices $T_{1},T_{2}$, nilpotent matrices $N_{4},N_{5}$ and a unitary matrix $U$ such that
$$A=U\begin{pmatrix}T_{1}&0&R_{1}\\0&0&0\\0&0&N_{4}\end{pmatrix}U^{*},\quad B=U\begin{pmatrix}0&0&0\\0&T_{2}&S_{2}\\0&0&N_{5}\end{pmatrix}U^{*},$$
where $R_{1}N_{5}=S_{2}N_{4}=0$ and $N_{4}\perp N_{5}$.
Proof. 
$(i)\Rightarrow(ii)$: Let $A\perp^{s,\tiny\textcircled{S}}B$, i.e. $A\perp^{\tiny\textcircled{S}}B$ and $B\perp^{\tiny\textcircled{S}}A$. From Theorem 3, the canonical forms of $A$ and $B$ are as in $(ii)$ of Theorem 3, and
$$B^{\tiny\textcircled{S}}=U\begin{pmatrix}0&0&0\\0&T_{2}^{-1}&0\\0&0&N_{5}\end{pmatrix}U^{*}.$$
Since
$$B^{\tiny\textcircled{S}}A=U\begin{pmatrix}0&0&0\\0&T_{2}^{-1}&0\\0&0&N_{5}\end{pmatrix}\begin{pmatrix}T_{1}&S_{1}&R_{1}\\0&0&N_{2}\\0&0&N_{4}\end{pmatrix}U^{*}=U\begin{pmatrix}0&0&0\\0&0&T_{2}^{-1}N_{2}\\0&0&0\end{pmatrix}U^{*}=0,$$
it implies $T_{2}^{-1}N_{2}=0$, that is, $N_{2}=0$. On the other hand,
$$AB^{\tiny\textcircled{S}}=U\begin{pmatrix}T_{1}&S_{1}&R_{1}\\0&0&0\\0&0&N_{4}\end{pmatrix}\begin{pmatrix}0&0&0\\0&T_{2}^{-1}&0\\0&0&N_{5}\end{pmatrix}U^{*}=U\begin{pmatrix}0&S_{1}T_{2}^{-1}&R_{1}N_{5}\\0&0&0\\0&0&0\end{pmatrix}U^{*}=0,$$
which yields $S_{1}T_{2}^{-1}=R_{1}N_{5}=0$, that is, $S_{1}=0$ and $R_{1}N_{5}=0$. According to the above results, we have
$$A=U\begin{pmatrix}T_{1}&0&R_{1}\\0&0&0\\0&0&N_{4}\end{pmatrix}U^{*},\quad B=U\begin{pmatrix}0&0&0\\0&T_{2}&S_{2}\\0&0&N_{5}\end{pmatrix}U^{*},$$
where $R_{1}N_{5}=S_{2}N_{4}=0$ and $N_{4}\perp N_{5}$.
$(ii)\Rightarrow(i)$: Let
$$A^{\tiny\textcircled{S}}=U\begin{pmatrix}T_{1}^{-1}&0&0\\0&0&0\\0&0&N_{4}\end{pmatrix}U^{*},\quad B^{\tiny\textcircled{S}}=U\begin{pmatrix}0&0&0\\0&T_{2}^{-1}&0\\0&0&N_{5}\end{pmatrix}U^{*}.$$
It follows from $R_{1}N_{5}=S_{2}N_{4}=0$ and $N_{4}\perp N_{5}$ that
$$A^{\tiny\textcircled{S}}B=U\begin{pmatrix}0&0&0\\0&0&0\\0&0&N_{4}N_{5}\end{pmatrix}U^{*}=0,\quad BA^{\tiny\textcircled{S}}=U\begin{pmatrix}0&0&0\\0&0&S_{2}N_{4}\\0&0&N_{5}N_{4}\end{pmatrix}U^{*}=0,$$
$$B^{\tiny\textcircled{S}}A=U\begin{pmatrix}0&0&0\\0&0&0\\0&0&N_{5}N_{4}\end{pmatrix}U^{*}=0,\quad AB^{\tiny\textcircled{S}}=U\begin{pmatrix}0&0&R_{1}N_{5}\\0&0&0\\0&0&N_{4}N_{5}\end{pmatrix}U^{*}=0.$$
Thus $A\perp^{s,\tiny\textcircled{S}}B$. □
Lemma 7. 
Let $B\in\mathbb{C}^{n\times n}$ with $\mathrm{Ind}(B)=k$, and let the forms of $B$ and $B^{\tiny\textcircled{S}}$ be
$$B=U\begin{pmatrix}0&B_{2}\\0&B_{4}\end{pmatrix}U^{*},\quad B^{\tiny\textcircled{S}}=U\begin{pmatrix}0&X_{2}\\0&X_{4}\end{pmatrix}U^{*},$$
respectively. Then
$$X_{4}=B_{4}^{\tiny\textcircled{S}},\quad X_{2}B_{4}^{k+1}=B_{2}B_{4}^{k-1},\quad B_{2}B_{4}^{2k-1}=0.$$
Proof. 
Applying
$$B^{\tiny\textcircled{S}}B^{k+1}=U\begin{pmatrix}0&X_{2}B_{4}^{k+1}\\0&X_{4}B_{4}^{k+1}\end{pmatrix}U^{*}=U\begin{pmatrix}0&B_{2}B_{4}^{k-1}\\0&B_{4}^{k}\end{pmatrix}U^{*}=B^{k},$$
$$(B^{k}(B^{\tiny\textcircled{S}})^{k})^{*}=U\begin{pmatrix}0&0\\(B_{2}B_{4}^{k-1}X_{4}^{k})^{*}&(B_{4}^{k}X_{4}^{k})^{*}\end{pmatrix}U^{*}=U\begin{pmatrix}0&B_{2}B_{4}^{k-1}X_{4}^{k}\\0&B_{4}^{k}X_{4}^{k}\end{pmatrix}U^{*}=B^{k}(B^{\tiny\textcircled{S}})^{k}$$
and
$$B^{k}(B^{\tiny\textcircled{S}})^{k}(B-B^{\tiny\textcircled{S}})=B-B^{\tiny\textcircled{S}},$$
and comparing the corresponding blocks, we get $X_{4}B_{4}^{k+1}=B_{4}^{k}$, $(B_{4}^{k}X_{4}^{k})^{*}=B_{4}^{k}X_{4}^{k}$ and $B_{4}^{k}X_{4}^{k}(B_{4}-X_{4})=B_{4}-X_{4}$, which lead to $X_{4}=B_{4}^{\tiny\textcircled{S}}$. Moreover, $X_{2}B_{4}^{k+1}=B_{2}B_{4}^{k-1}$, and from $B_{2}B_{4}^{k-1}X_{4}^{k}=0$ together with $X_{4}^{k}B_{4}^{2k}=B_{4}^{k}$, we obtain $B_{2}B_{4}^{2k-1}=0$. □
Theorem 7. 
Let $A,B\in\mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A)=\mathrm{Ind}(B)=k$ and $AB=0$. Then $A\perp^{s,\tiny\textcircled{S}}B$ if and only if $(A+B)^{\tiny\textcircled{S}}=A^{\tiny\textcircled{S}}+B^{\tiny\textcircled{S}}$ and $BA^{\tiny\textcircled{S}}=0$.
Proof. 
Only if: From Theorem 6, the forms of $A$ and $B$ are as in $(ii)$ of Theorem 6, and $BA^{\tiny\textcircled{S}}=0$ holds by the definition of $A\perp^{\tiny\textcircled{S}}B$. Since $N_{4},N_{5}$ are nilpotent with $\mathrm{Ind}(A)=\mathrm{Ind}(B)=k$ and $N_{4}\perp N_{5}$, we see that $(N_{4}+N_{5})^{k}=N_{4}^{k}+N_{5}^{k}=0$. It follows that
$$A+B=U\begin{pmatrix}T_{1}&0&R_{1}\\0&T_{2}&S_{2}\\0&0&N_{4}+N_{5}\end{pmatrix}U^{*}$$
and
$$(A+B)^{k}=U\begin{pmatrix}T_{1}^{k}&0&\tilde{R}_{1}\\0&T_{2}^{k}&\tilde{S}_{2}\\0&0&0\end{pmatrix}U^{*},$$
where $\tilde{R}_{1}=\sum_{i=1}^{k}T_{1}^{i-1}R_{1}(N_{4}+N_{5})^{k-i}$ and $\tilde{S}_{2}=\sum_{i=1}^{k}T_{2}^{i-1}S_{2}(N_{4}+N_{5})^{k-i}$.
It is clear that $\tilde{R}_{1}=T_{1}^{k-1}R_{1}+T_{1}^{-1}\tilde{R}_{1}(N_{4}+N_{5})$ and $\tilde{S}_{2}=T_{2}^{k-1}S_{2}+T_{2}^{-1}\tilde{S}_{2}(N_{4}+N_{5})$.
Let
$$X:=A^{\tiny\textcircled{S}}+B^{\tiny\textcircled{S}}=U\begin{pmatrix}T_{1}^{-1}&0&0\\0&T_{2}^{-1}&0\\0&0&N_{4}+N_{5}\end{pmatrix}U^{*}.$$
Since
$$X(A+B)^{k+1}=U\begin{pmatrix}T_{1}^{-1}&0&0\\0&T_{2}^{-1}&0\\0&0&N_{4}+N_{5}\end{pmatrix}\begin{pmatrix}T_{1}^{k+1}&0&T_{1}^{k}R_{1}+\tilde{R}_{1}(N_{4}+N_{5})\\0&T_{2}^{k+1}&T_{2}^{k}S_{2}+\tilde{S}_{2}(N_{4}+N_{5})\\0&0&0\end{pmatrix}U^{*}=U\begin{pmatrix}T_{1}^{k}&0&T_{1}^{k-1}R_{1}+T_{1}^{-1}\tilde{R}_{1}(N_{4}+N_{5})\\0&T_{2}^{k}&T_{2}^{k-1}S_{2}+T_{2}^{-1}\tilde{S}_{2}(N_{4}+N_{5})\\0&0&0\end{pmatrix}U^{*}=(A+B)^{k},$$
$$(A+B)^{k}X^{k}=U\begin{pmatrix}T_{1}^{k}&0&\tilde{R}_{1}\\0&T_{2}^{k}&\tilde{S}_{2}\\0&0&0\end{pmatrix}\begin{pmatrix}T_{1}^{-k}&0&0\\0&T_{2}^{-k}&0\\0&0&0\end{pmatrix}U^{*}=U\begin{pmatrix}I_{\mathrm{rk}(A^{k})}&0&0\\0&I_{\mathrm{rk}(B^{k})}&0\\0&0&0\end{pmatrix}U^{*}=((A+B)^{k}X^{k})^{*}$$
and
$$(A+B)^{k}X^{k}\big((A+B)-X\big)=U\begin{pmatrix}I_{\mathrm{rk}(A^{k})}&0&0\\0&I_{\mathrm{rk}(B^{k})}&0\\0&0&0\end{pmatrix}\begin{pmatrix}T_{1}-T_{1}^{-1}&0&R_{1}\\0&T_{2}-T_{2}^{-1}&S_{2}\\0&0&0\end{pmatrix}U^{*}=U\begin{pmatrix}T_{1}-T_{1}^{-1}&0&R_{1}\\0&T_{2}-T_{2}^{-1}&S_{2}\\0&0&0\end{pmatrix}U^{*}=(A+B)-X,$$
we get that $X:=A^{\tiny\textcircled{S}}+B^{\tiny\textcircled{S}}=(A+B)^{\tiny\textcircled{S}}$.
If: Let the core-EP decomposition of $A$ be as in (1) and the form of $A^{\tiny\textcircled{S}}$ be as in (2). Partition $B$ according to the partition of $A$ and write
$$B=U\begin{pmatrix}B_{1}&B_{2}\\B_{3}&B_{4}\end{pmatrix}U^{*},\quad B^{\tiny\textcircled{S}}=U\begin{pmatrix}X_{1}&X_{2}\\X_{3}&X_{4}\end{pmatrix}U^{*}.$$
Applying $AB=0$ and $BA^{\tiny\textcircled{S}}=0$, we have
$$AB=U\begin{pmatrix}TB_{1}+SB_{3}&TB_{2}+SB_{4}\\NB_{3}&NB_{4}\end{pmatrix}U^{*}=0$$
and
$$BA^{\tiny\textcircled{S}}=U\begin{pmatrix}B_{1}T^{-1}&B_{2}N\\B_{3}T^{-1}&B_{4}N\end{pmatrix}U^{*}=0.$$
Then the form of $B$ is
$$B=U\begin{pmatrix}0&B_{2}\\0&B_{4}\end{pmatrix}U^{*},$$
where $TB_{2}+SB_{4}=0$, $B_{2}N=0$ and $N\perp B_{4}$.
Let $X:=A^{\tiny\textcircled{S}}+B^{\tiny\textcircled{S}}=(A+B)^{\tiny\textcircled{S}}$; then
$$A+B=U\begin{pmatrix}T&S+B_{2}\\0&N+B_{4}\end{pmatrix}U^{*},\quad (A+B)^{\tiny\textcircled{S}}=U\begin{pmatrix}T^{-1}+X_{1}&X_{2}\\X_{3}&N+X_{4}\end{pmatrix}U^{*}.$$
Applying $N\perp B_{4}$, it is clear that $(B_{4}+N)^{k}=B_{4}^{k}+N^{k}=B_{4}^{k}$. Thus
$$(A+B)^{k}=U\begin{pmatrix}T^{k}&\widetilde{S+B_{2}}\\0&B_{4}^{k}\end{pmatrix}U^{*},$$
where $\widetilde{S+B_{2}}=\sum_{i=1}^{k}T^{i-1}(S+B_{2})(B_{4}+N)^{k-i}$.
Then
$$X(A+B)^{k+1}=U\begin{pmatrix}T^{-1}+X_{1}&X_{2}\\X_{3}&N+X_{4}\end{pmatrix}\begin{pmatrix}T^{k+1}&Y\\0&B_{4}^{k+1}\end{pmatrix}U^{*}=U\begin{pmatrix}T^{k}+X_{1}T^{k+1}&(T^{-1}+X_{1})Y+X_{2}B_{4}^{k+1}\\X_{3}T^{k+1}&X_{3}Y+(N+X_{4})B_{4}^{k+1}\end{pmatrix}U^{*}=(A+B)^{k},$$
where $Y=T^{k}(S+B_{2})+\widetilde{S+B_{2}}(B_{4}+N)$ and $(T^{-1}+X_{1})Y+X_{2}B_{4}^{k+1}=\widetilde{S+B_{2}}$. Then we get $T^{k}+X_{1}T^{k+1}=T^{k}$ and $X_{3}T^{k+1}=0$, which imply that $X_{1}=X_{3}=0$. It follows from Lemma 7 that
$$B^{\tiny\textcircled{S}}=U\begin{pmatrix}0&X_{2}\\0&B_{4}^{\tiny\textcircled{S}}\end{pmatrix}U^{*}$$
and $B_{2}B_{4}^{2k-1}=0$.
Therefore, we get
$$X^{k}=U\begin{pmatrix}T^{-k}&\tilde{X}_{2}\\0&(B_{4}^{\tiny\textcircled{S}}+N)^{k}\end{pmatrix}U^{*},$$
where $\tilde{X}_{2}=\sum_{i=1}^{k}T^{1-i}X_{2}(B_{4}^{\tiny\textcircled{S}}+N)^{k-i}$, and, with $X_{1}=0$, the relation above becomes $T^{k-1}(S+B_{2})+T^{-1}\widetilde{S+B_{2}}(B_{4}+N)+X_{2}B_{4}^{k+1}=\widetilde{S+B_{2}}$. According to $T^{k-1}(S+B_{2})+T^{-1}\widetilde{S+B_{2}}(B_{4}+N)=T^{-1}(S+B_{2})B_{4}^{k}+\widetilde{S+B_{2}}$, we have that
$$T^{-1}(S+B_{2})B_{4}^{k}=X_{2}B_{4}^{k+1}.$$
In addition,
$$(A+B)^{k}X^{k}=U\begin{pmatrix}T^{k}&\widetilde{S+B_{2}}\\0&B_{4}^{k}\end{pmatrix}\begin{pmatrix}T^{-k}&\tilde{X}_{2}\\0&(B_{4}^{\tiny\textcircled{S}}+N)^{k}\end{pmatrix}U^{*}=U\begin{pmatrix}I_{\mathrm{rk}(A^{k})}&T^{k}\tilde{X}_{2}+\widetilde{S+B_{2}}(B_{4}^{\tiny\textcircled{S}}+N)^{k}\\0&B_{4}^{k}(B_{4}^{\tiny\textcircled{S}}+N)^{k}\end{pmatrix}U^{*}=((A+B)^{k}X^{k})^{*},$$
which implies that
$$T^{k}\tilde{X}_{2}+\widetilde{S+B_{2}}(B_{4}^{\tiny\textcircled{S}}+N)^{k}=0$$
and $(B_{4}^{k}(B_{4}^{\tiny\textcircled{S}}+N)^{k})^{*}=B_{4}^{k}(B_{4}^{\tiny\textcircled{S}}+N)^{k}$. Let $B_{4}=U_{2}\begin{pmatrix}T_{2}&S_{2}\\0&N_{5}\end{pmatrix}U_{2}^{*}$ be the core-EP decomposition of $B_{4}$ and, as in the proof of Theorem 3 (using $N\perp B_{4}$), partition $N=U_{2}\begin{pmatrix}0&N_{2}\\0&N_{4}\end{pmatrix}U_{2}^{*}$. Then we have
$$B_{4}^{k}(B_{4}^{\tiny\textcircled{S}}+N)^{k}=U_{2}\begin{pmatrix}T_{2}^{k}&\tilde{S}_{2}\\0&0\end{pmatrix}\begin{pmatrix}T_{2}^{-k}&\tilde{N}_{2}\\0&(N_{4}+N_{5})^{k}\end{pmatrix}U_{2}^{*}=U_{2}\begin{pmatrix}I_{\mathrm{rk}(B_{4}^{k})}&T_{2}^{k}\tilde{N}_{2}+\tilde{S}_{2}(N_{4}+N_{5})^{k}\\0&0\end{pmatrix}U_{2}^{*}=(B_{4}^{k}(B_{4}^{\tiny\textcircled{S}}+N)^{k})^{*},$$
which implies $T_{2}^{k}\tilde{N}_{2}+\tilde{S}_{2}(N_{4}+N_{5})^{k}=0$.
By $N_{4}\perp N_{5}$ and $N_{4}^{k}=N_{5}^{k}=0$, it is clear that $(N_{4}+N_{5})^{k}=0$. Then it is obvious that $T_{2}^{k}\tilde{N}_{2}=0$, i.e. $\tilde{N}_{2}=\sum_{i=1}^{k}T_{2}^{1-i}N_{2}(N_{4}+N_{5})^{k-i}=0$. Using $N\perp B_{4}$, we have $N_{2}N_{5}=0$, so $\tilde{N}_{2}=\sum_{i=1}^{k}T_{2}^{1-i}N_{2}N_{4}^{k-i}=0$. It follows from $N^{k}=0$ and $\tilde{N}_{2}N_{4}^{k-1}=0$ that $T_{2}^{1-k}N_{2}N_{4}^{k-1}=0$, that is, $N_{2}N_{4}^{k-1}=0$. This in turn implies $\tilde{N}_{2}N_{4}^{k-2}=T_{2}^{1-k}N_{2}N_{4}^{k-2}=0$, so $N_{2}N_{4}^{k-2}=0$. Continuing with $\tilde{N}_{2}N_{4}^{k-3},\ldots,\tilde{N}_{2}N_{4}$, we obtain $N_{2}N_{4}^{k-2}=N_{2}N_{4}^{k-3}=\cdots=N_{2}N_{4}=N_{2}=0$.
Applying the two displayed identities above, we have
$$\big(T^{k}\tilde{X}_{2}+\widetilde{S+B_{2}}(B_{4}^{\tiny\textcircled{S}})^{k}\big)B_{4}^{2k}=T^{k}\tilde{X}_{2}B_{4}^{2k}+\sum_{i=1}^{k}T^{i}\big(T^{-1}(S+B_{2})B_{4}^{k}\big)(B_{4}+N)^{k-i}=T^{k}\tilde{X}_{2}B_{4}^{2k}+\sum_{i=1}^{k}T^{i}\big(X_{2}B_{4}^{k+1}\big)(B_{4}+N)^{k-i}=2T^{k}\tilde{X}_{2}B_{4}^{2k}=0,$$
which implies that $\tilde{X}_{2}B_{4}^{2k}=\sum_{i=1}^{k}T^{1-i}X_{2}B_{4}^{k+i}=0$.
By applying Lemma 7, we have $X_{2}B_{4}^{k+i}=X_{2}B_{4}^{k+1}B_{4}^{i-1}=B_{2}B_{4}^{k-2+i}$, so that
$$\sum_{i=1}^{k}T^{1-i}B_{2}B_{4}^{k-2+i}=0.$$
Multiplying this on the right by $B_{4}^{k-1}$ and using $B_{2}B_{4}^{2k-1}=0$, only the $i=1$ term survives, which gives $B_{2}B_{4}^{2k-2}=0$. Repeating the argument with $B_{4}^{k-2},B_{4}^{k-3},\ldots,B_{4},I$ successively leads to
$$B_{2}B_{4}^{2k-2}=B_{2}B_{4}^{2k-3}=\cdots=B_{2}B_{4}=B_{2}=0.$$
Using $TB_{2}+SB_{4}=0$, we now have
$$SB_{4}=U_{2}\begin{pmatrix}S_{1}&R_{1}\end{pmatrix}\begin{pmatrix}T_{2}&S_{2}\\0&N_{5}\end{pmatrix}U_{2}^{*}=U_{2}\begin{pmatrix}S_{1}T_{2}&S_{1}S_{2}+R_{1}N_{5}\end{pmatrix}U_{2}^{*}=0,$$
where $S=\begin{pmatrix}S_{1}&R_{1}\end{pmatrix}$ is partitioned according to the decomposition of $B_{4}$. It follows that $S_{1}=0$ and $R_{1}N_{5}=0$. Therefore, writing $T_{1}=T$, we get
$$A=U\begin{pmatrix}T_{1}&0&R_{1}\\0&0&0\\0&0&N_{4}\end{pmatrix}U^{*},\quad B=U\begin{pmatrix}0&0&0\\0&T_{2}&S_{2}\\0&0&N_{5}\end{pmatrix}U^{*},$$
where $R_{1}N_{5}=S_{2}N_{4}=0$ and $N_{4}\perp N_{5}$. By Theorem 6, $A\perp^{s,\tiny\textcircled{S}}B$. □
Example 1. 
Consider the matrices
$$A=\begin{pmatrix}1&0&0&1\\0&1&0&1\\0&0&0&0\\0&0&0&1\end{pmatrix},\quad B=\begin{pmatrix}0&0&0&0\\0&0&0&0\\0&0&1&0\\0&0&0&0\end{pmatrix}.$$
It is obvious that $AB=0$.
By calculating, it can be obtained that
$$A+B=\begin{pmatrix}1&0&0&1\\0&1&0&1\\0&0&1&0\\0&0&0&1\end{pmatrix},\quad A^{\tiny\textcircled{S}}=\begin{pmatrix}1&0&0&-1\\0&1&0&-1\\0&0&0&0\\0&0&0&1\end{pmatrix},\quad B^{\tiny\textcircled{S}}=\begin{pmatrix}0&0&0&0\\0&0&0&0\\0&0&1&0\\0&0&0&0\end{pmatrix}$$
and
$$(A+B)^{\tiny\textcircled{S}}=\begin{pmatrix}1&0&0&-1\\0&1&0&-1\\0&0&1&0\\0&0&0&1\end{pmatrix},$$
that is, $(A+B)^{\tiny\textcircled{S}}=A^{\tiny\textcircled{S}}+B^{\tiny\textcircled{S}}$ and $A^{\tiny\textcircled{S}}B=0$. Then we have $A^{\tiny\textcircled{S}}B=BA^{\tiny\textcircled{S}}=AB^{\tiny\textcircled{S}}=B^{\tiny\textcircled{S}}A=0$, i.e. $A\perp^{s,\tiny\textcircled{S}}B$.
But if $AB\neq 0$, consider the matrices
$$A=\begin{pmatrix}1&0&0&1\\0&1&0&1\\0&0&0&0\\0&0&0&0\end{pmatrix},\quad B=\begin{pmatrix}0&0&0&0\\0&0&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}.$$
It is obvious that $A^{\tiny\textcircled{S}}B=0$ and $(A+B)^{\tiny\textcircled{S}}=A^{\tiny\textcircled{S}}+B^{\tiny\textcircled{S}}$. But
$$AB^{\tiny\textcircled{S}}=\begin{pmatrix}1&0&0&1\\0&1&0&1\\0&0&0&0\\0&0&0&0\end{pmatrix}\begin{pmatrix}0&0&0&0\\0&0&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}=\begin{pmatrix}0&0&0&1\\0&0&0&1\\0&0&0&0\\0&0&0&0\end{pmatrix}\neq 0.$$
Thus, we cannot conclude that $A\perp^{s,\tiny\textcircled{S}}B$.
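The first pair of Example 1 can be checked numerically (a sketch; the C-S inverses here carry the minus signs that the canonical form requires, and since $A+B$ is invertible its C-S inverse is just its ordinary inverse):

```python
import numpy as np

A = np.array([[1., 0., 0., 1.],
              [0., 1., 0., 1.],
              [0., 0., 0., 0.],
              [0., 0., 0., 1.]])
B = np.zeros((4, 4)); B[2, 2] = 1.
As = np.array([[1., 0., 0., -1.],
               [0., 1., 0., -1.],
               [0., 0., 0., 0.],
               [0., 0., 0., 1.]])
Bs = B.copy()                      # B is idempotent, so its C-S inverse is B itself

# Additivity: A + B is invertible, so (A+B)^{circled S} = (A+B)^{-1}.
assert np.allclose(np.linalg.inv(A + B), As + Bs)
# Strong C-S orthogonality: all four products vanish.
for P in (As @ B, B @ As, A @ Bs, Bs @ A):
    assert np.allclose(P, 0)

# Second pair: A2 B2^{circled S} != 0, so strong C-S orthogonality fails.
A2 = np.array([[1., 0., 0., 1.],
               [0., 1., 0., 1.],
               [0., 0., 0., 0.],
               [0., 0., 0., 0.]])
B2 = np.zeros((4, 4)); B2[2, 2] = 1.; B2[3, 3] = 1.
assert not np.allclose(A2 @ B2, 0)
```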
Corollary 2. 
Let $A,B\in\mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A)=\mathrm{Ind}(B)=k$. Then the following are equivalent:
(i) $A\perp^{s,\tiny\textcircled{S}}B$;
(ii) $(A+B)^{\tiny\textcircled{S}}=A^{\tiny\textcircled{S}}+B^{\tiny\textcircled{S}}$, $BA^{\tiny\textcircled{S}}=0$ and $AB=0$;
(iii) $(A+B)^{\tiny\textcircled{S}}=A^{\tiny\textcircled{S}}+B^{\tiny\textcircled{S}}$, $A\perp B$.
Proof. 
$(i)\Leftrightarrow(ii)$: It follows from Theorem 7.
$(ii)\Leftrightarrow(iii)$: Applying Remark 1, $BA^{\tiny\textcircled{S}}=0$ is equivalent to $BA=0$; together with $AB=0$, this is exactly $A\perp B$. □
Theorem 8. 
Let $A,B\in\mathbb{C}^{n\times n}$ with $\mathrm{Ind}(A)=\mathrm{Ind}(B)=k$. Then the following are equivalent:
(i) $A\perp^{s,\tiny\textcircled{S}}B$;
(ii) $A\leq^{CS}A+B^{*}$, $B\leq^{CS}B+A^{*}$.
Proof. 
$(i)\Rightarrow(ii)$: Let $A\perp^{s,\tiny\textcircled{S}}B$, i.e. $A\perp^{\tiny\textcircled{S}}B$ and $B\perp^{\tiny\textcircled{S}}A$. By Definition 2 and $AB^{\tiny\textcircled{S}}=0$, we have
$$AB^{\tiny\textcircled{S}}B^{k+1}=0\Rightarrow AB^{k}=0\Rightarrow AB^{k}(B^{\tiny\textcircled{S}})^{k}(B-B^{\tiny\textcircled{S}})=0\Rightarrow A(B-B^{\tiny\textcircled{S}})=0,$$
which implies $AB=AB^{\tiny\textcircled{S}}=0$. It follows that $B^{*}A^{*}AA^{\tiny\textcircled{d}}=(AB)^{*}AA^{\tiny\textcircled{d}}=0$. According to Theorem 4, we get $A\leq^{CS}A+B^{*}$. In the same way, $B\leq^{CS}B+A^{*}$.
$(ii)\Rightarrow(i)$: It is clear by Theorem 4. □

Funding

This work was supported by the National Natural Science Foundation of China (No.12061015); Guangxi Science and Technology Base and Talents Special Project (No.GUIKE21220024) and Guangxi Natural Science Foundation (No.2018GXNSFDA281023).

Conflicts of Interest

No potential conflict of interest was reported by the authors.

References

  1. Hestenes MR. Relative hermitian matrices. Pacific Journal of Mathematics, 1961, 11(1): 225-245.
  2. Hartwig RE, Styan GPH. On some characterizations of the “star” partial ordering for matrices and rank subtractivity. Linear Algebra and its Applications, 1986, 82: 145-161. [CrossRef]
  3. Ferreyra DE, Malik SB. Core and strongly core orthogonal matrices. Linear and Multilinear Algebra, 2021, 70(20): 5052-5067. [CrossRef]
  4. Liu X, Wang C, Wang H. Further results on strongly core orthogonal matrix. Linear and Multilinear Algebra, 2023, 71(15): 2543-2564. [CrossRef]
  5. Mosić D, Dolinar G, Kuzma B, Marovt J. Core-EP orthogonal operators. Linear and Multilinear Algebra, 2022: 1-15. [CrossRef]
  6. Moore EH. On the reciprocal of the general algebraic matrix. Bulletin of the American Mathematical Society, 1920, 26: 394-395.
  7. Wang H, Liu N. The C-S inverse and its applications. Bulletin of the Malaysian Mathematical Sciences Society, 2023, 46(3): 90. [CrossRef]
  8. Bjerhammar, A. Application of calculus of matrices to method of least squares: with special reference to geodetic calculations. Transactions of the Royal Institute of Technology, Stockholm, Sweden, 1951, 49: 82-84.
  9. Penrose, R. A generalized inverse for matrices. Mathematical Proceedings of the Cambridge Philosophical Society, 1955, 51(3): 406-413. [CrossRef]
  10. Ben-Israel A, Greville TNE. Generalized Inverses: Theory and Applications, 2nd edition. Springer, New York, 2003. [CrossRef]
  11. Baksalary OM, Trenkler G. Core inverse of matrices. Linear and Multilinear Algebra, 2010, 58(6): 681-697. [CrossRef]
  12. Manjunatha PK, Mohana KS. Core-EP inverse. Linear and Multilinear Algebra, 2014, 62(6): 792-802. [CrossRef]
  13. Hartwig RE. How to partially order regular elements. Math Japan, 1980, 25: 1-13.
  14. Drazin MP. Natural structures on semigroups with involution. The Bulletin of the American Mathematical Society, 1978, 84(1): 139-141. [CrossRef]
  15. Mitra SK. On group inverses and the sharp order. Linear Algebra and its Applications, 1987, 92: 17-37. [CrossRef]
  16. Baksalary OM, Trenkler G. Core inverse of matrices. Linear Multilinear Algebra, 2010, 58(6): 681-697. [CrossRef]
  17. Wang, H. Core-EP decomposition and its applications. Linear Algebra and its Applications, 2016, 508: 289-300. [CrossRef]
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.