Journal: 4open, Volume 2, 2019
Issue: Advances in Researches of Quaternion Algebras
Article Number: 24
Number of pages: 15
Section: Mathematics - Applied Mathematics
DOI: https://doi.org/10.1051/fopen/2019021
Published online: 09 July 2019

© I.I. Kyrchei, Published by EDP Sciences, 2019

Licence: Creative Commons. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Introduction

Throughout this article, $ \mathbb{R}$ denotes the real number field and $ {\mathbb{H}}^{m\times n}$ stands for the set of all m × n matrices over the quaternion skew field

$$ \mathbb{H}=\left\{{h}_0+{h}_1\mathbf{i}+{h}_2\mathbf{j}+{h}_3\mathbf{k},\ {\mathbf{i}}^2={\mathbf{j}}^2={\mathbf{k}}^2=\mathbf{ijk}=-1,\ {h}_0,{h}_1,{h}_2,{h}_3\in \mathbb{R}\right\}. $$ $ {\mathbb{H}}_r^{m\times n}$ denotes its subset of matrices of rank r. For a given $ h={h}_0+{h}_1\mathbf{i}+{h}_2\mathbf{j}+{h}_3\mathbf{k}\in \mathbb{H}$, the conjugate of h is $ \bar{h}={h}_0-{h}_1\mathbf{i}-{h}_2\mathbf{j}-{h}_3\mathbf{k}$. For a given $ \mathbf{A}\in {\mathbb{H}}^{n\times m}$, $ {\mathbf{A}}^{\mathrm{*}}$ denotes the conjugate transpose (Hermitian adjoint) of A. A matrix $ \mathbf{A}\in {\mathbb{H}}^{n\times n}$ is Hermitian if A* = A. $ {\mathbf{A}}^{\dagger }$ denotes the Moore–Penrose inverse of $ \mathbf{A}\in {\mathbb{H}}^{n\times m}$, i.e. the unique matrix X satisfying the following four equations

$$ (1)\enspace \mathbf{AXA}=\mathbf{A},\hspace{1em}(2)\enspace \mathbf{XAX}=\mathbf{X},\hspace{1em}(3)\enspace {(\mathbf{AX})}^{\mathrm{*}}=\mathbf{AX},\hspace{1em}(4)\enspace {(\mathbf{XA})}^{\mathrm{*}}=\mathbf{XA}. $$
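The conventions above are easy to sanity-check numerically. Below is a minimal Python sketch (the class `Q` and all helper names are ours, not from the paper) that encodes the Hamilton product and verifies the defining relations of $ \mathbb{H}$ together with the basic fact that $ h\bar{h}$ is real:

```python
class Q:
    """Quaternion h = a + b*i + c*j + d*k over the reals (Hamilton product)."""
    def __init__(self, a, b=0, c=0, d=0):
        self.a, self.b, self.c, self.d = a, b, c, d
    def __mul__(s, o):
        return Q(s.a*o.a - s.b*o.b - s.c*o.c - s.d*o.d,
                 s.a*o.b + s.b*o.a + s.c*o.d - s.d*o.c,
                 s.a*o.c - s.b*o.d + s.c*o.a + s.d*o.b,
                 s.a*o.d + s.b*o.c - s.c*o.b + s.d*o.a)
    def __neg__(s):   return Q(-s.a, -s.b, -s.c, -s.d)
    def conj(s):      return Q(s.a, -s.b, -s.c, -s.d)   # h-bar
    def __eq__(s, o): return (s.a, s.b, s.c, s.d) == (o.a, o.b, o.c, o.d)

one, i, j, k = Q(1), Q(0, 1), Q(0, 0, 1), Q(0, 0, 0, 1)

# defining relations i^2 = j^2 = k^2 = ijk = -1
assert i*i == -one and j*j == -one and k*k == -one and i*j*k == -one

# conjugation and the (real) norm h * h-bar
h = Q(1, 2, 3, 4)
assert h.conj() == Q(1, -2, -3, -4)
assert h * h.conj() == Q(30)          # 1^2 + 2^2 + 3^2 + 4^2 = 30
```

The same class is reused in the later numerical checks; noncommutativity shows up immediately, e.g. `i*j == k` while `j*i == -k`.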

Quaternions have ample use in diverse areas such as color imaging and computer science [1–5], fluid mechanics [6, 7], quantum mechanics [8, 9], attitude orientation and spatial rigid body dynamics [10–12], signal processing [13–15], etc.

The study of matrix equations has both applied and theoretical importance. Many authors have explored the system of two-sided matrix equations

$$ \left\{\begin{array}{c}{\mathbf{A}}_1\mathbf{X}{\mathbf{B}}_1={\mathbf{C}}_1,\\ {\mathbf{A}}_2\mathbf{X}{\mathbf{B}}_2={\mathbf{C}}_2,\end{array}\right. $$(1)

over the field of complex numbers, the quaternion skew field, etc. (see, e.g. [16–21]). In this paper, the following system of quaternion matrix equations with η-Hermicity is considered,

{ A 1 X A 1 η * = C 1 , A 2 X A 2 η * = C 2 . $$ \left\{\begin{array}{c}{\mathbf{A}}_{\mathbf{1}}\mathbf{X}{\mathbf{A}}_{\mathbf{1}}^{\mathbf{\eta }\mathbf{*}}={\mathbf{C}}_{\mathbf{1}},\\ {\mathbf{A}}_{\mathbf{2}}\mathbf{X}{\mathbf{A}}_{\mathbf{2}}^{\mathbf{\eta }\mathbf{*}}={\mathbf{C}}_{\mathbf{2}}.\end{array}\right. $$(2)

Definition 1.1.

[22–24] A matrix $ \mathbf{A}\in {\mathbb{H}}^{n\times n}$ is said to be η-Hermitian or η-skew-Hermitian if $ \mathbf{A}={\mathbf{A}}^{\eta *}=-\eta {\mathbf{A}}^{*}\eta $ or $ \mathbf{A}=-{\mathbf{A}}^{\eta *}=\eta {\mathbf{A}}^{*}\eta $, respectively, where $ \eta \in \{\mathbf{i},\mathbf{j},\mathbf{k}\}$.
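Definition 1.1 is straightforward to check entrywise, since $ {({\mathbf{A}}^{\eta *})}_{ij}=-\eta \overline{{a}_{ji}}\eta $. The following Python sketch (helper names are ours; we take η = i and a 2 × 2 example) assembles an η-Hermitian and an η-skew-Hermitian matrix and confirms the definition:

```python
class Q:
    """Quaternion a + b*i + c*j + d*k with the Hamilton product."""
    def __init__(self, a, b=0, c=0, d=0):
        self.a, self.b, self.c, self.d = a, b, c, d
    def __mul__(s, o):
        return Q(s.a*o.a - s.b*o.b - s.c*o.c - s.d*o.d,
                 s.a*o.b + s.b*o.a + s.c*o.d - s.d*o.c,
                 s.a*o.c - s.b*o.d + s.c*o.a + s.d*o.b,
                 s.a*o.d + s.b*o.c - s.c*o.b + s.d*o.a)
    def __neg__(s):   return Q(-s.a, -s.b, -s.c, -s.d)
    def conj(s):      return Q(s.a, -s.b, -s.c, -s.d)
    def __eq__(s, o): return (s.a, s.b, s.c, s.d) == (o.a, o.b, o.c, o.d)

eta = Q(0, 1)                                  # eta = i

def eta_star(A):
    """A^{eta*} = -eta A^* eta, computed entrywise for a square matrix."""
    n = len(A)
    return [[-(eta * A[c][r].conj() * eta) for c in range(n)] for r in range(n)]

q = Q(1, 2, 3, 4)
# eta-Hermitian: a21 forced to equal -eta * conj(a12) * eta
H = [[Q(1, 0, 2, 3),           q],
     [-(eta * q.conj() * eta), Q(4, 0, 5, 0)]]
assert eta_star(H) == H                        # A = A^{eta*}

# eta-skew-Hermitian: a21 forced to equal eta * conj(a12) * eta
S = [[Q(0, 7),              q],
     [eta * q.conj() * eta, Q(0, 1)]]
assert eta_star(S) == [[-x for x in row] for row in S]   # A = -A^{eta*}
```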

Convergence analysis in statistical signal processing and linear modeling [14, 15, 23] are some fields in which applications of η-Hermitian matrices can be found. The singular value decomposition of an η-Hermitian matrix was examined in [22]. Very recently, Liu [25] determined η-skew-Hermitian solutions to some classical matrix equations, among them the generalized Sylvester-type matrix equation:

$$ \mathbf{AX}{\mathbf{A}}^{\eta \mathrm{*}}+\mathbf{BY}{\mathbf{B}}^{\eta \mathrm{*}}=\mathbf{C}. $$(3)

Note that in [25], the term “η-anti-Hermitian” is used instead of “η-skew-Hermitian”. He and Wang [26] gave the general solution of

$$ \mathbf{AX}+(\mathbf{AX}{)}^{\eta \mathrm{*}}+\mathbf{BY}{\mathbf{B}}^{\eta \mathrm{*}}+\mathbf{CZ}{\mathbf{C}}^{\eta \mathrm{*}}=\mathbf{D}, $$

bearing η-Hermicity over $ \mathbb{H}$, by expressing its general η-Hermitian solution in terms of Moore–Penrose inverses. An iterative algorithm for determining η-(skew-)Hermitian least-squares solutions to the quaternion matrix equation (3) was established in [27]. For more related papers on η-Hermicity and its generalization, ϕ-Hermicity, one may refer to [28–38].

In this paper, we construct novel explicit determinantal representation formulas (analogs of Cramer’s rule) for the general and η-(skew-)Hermitian solutions to the system (2), using determinantal representations of the Moore–Penrose inverse obtained within the framework of the theory of row-column noncommutative determinants. To the best of our knowledge, the Cramer’s rule proposed here is the only direct method to compute the η-(skew-)Hermitian solutions to quaternion matrix equations, unlike other similar works (see, e.g. [24–26, 29, 32]), where the explicit forms of solutions obtained have mostly only theoretical significance.

In contrast to the inverse matrix, which has a definitive determinantal representation in terms of cofactors, generalized inverses, in particular Moore–Penrose inverses, admit different determinantal representations even for matrices with real or complex entries, as a result of the search for more applicable explicit expressions (for the Moore–Penrose inverse, see, e.g., [39–41]). For quaternion matrices, in view of the noncommutativity of quaternions, the problem of the determinantal representation of generalized inverses remained open for a long time and could be solved only recently thanks to the theory of row-column determinants introduced in [42, 43].

By applying row-column determinants, determinantal representations of various generalized inverses have been derived by the author (see, e.g. [44–57]) and by other researchers (see, e.g. [58–61]). In particular, determinantal representations of solutions to systems similar to (1) have recently been explored in [53, 55, 56, 61].

The remainder of the paper is organized as follows. In Section 2, we start with preliminaries on general properties of generalized inverses, projectors, and η-matrices in Section 2.1, and on the theory of row-column determinants and determinantal representations of the Moore–Penrose inverses of a quaternion matrix, its Hermitian adjoint, and its η-Hermitian adjoint in Section 2.2. Determinantal representations of the general, η-Hermitian, and η-skew-Hermitian solutions to the system (2) are derived in Section 3. Finally, the conclusion is drawn in Section 4.

Preliminaries: Determinantal representations of solutions to quaternion matrix equations

General properties of generalized inverses, projectors, and η-matrices

We begin with some well-known results on generalized inverses and the projectors induced by them, which will be used in the remaining part of this paper. As usual, $ {\mathbf{L}}_A:=\mathbf{I}-{\mathbf{A}}^{\dagger }\mathbf{A}$ and $ {\mathbf{R}}_A:=\mathbf{I}-\mathbf{A}{\mathbf{A}}^{\dagger }$ denote the projectors induced by A.

Lemma 2.1.

[26] Let A H m × n $ \mathbf{A}\in {\mathbb{H}}^{m\times n}$ . Then

  1. $ ({\mathbf{A}}^{\eta }{)}^{\dagger }=({\mathbf{A}}^{\dagger }{)}^{\eta },({\mathbf{A}}^{\eta \mathrm{*}}{)}^{\dagger }=({\mathbf{A}}^{\dagger }{)}^{\eta \mathrm{*}}.$

  2. $ \mathrm{rank}\mathbf{A}=\mathrm{rank}{\mathbf{A}}^{\eta \mathrm{*}}=\mathrm{rank}{\mathbf{A}}^{\eta }=\mathrm{rank}{\mathbf{A}}^{\eta }{\mathbf{A}}^{\eta \mathrm{*}}=\mathrm{rank}({\mathbf{A}}^{\eta \mathrm{*}}{\mathbf{A}}^{\eta }).$

  3. $ ({\mathbf{A}}^{\dagger }\mathbf{A}{)}^{\eta \mathrm{*}}={\mathbf{A}}^{\eta \mathrm{*}}({\mathbf{A}}^{\dagger }{)}^{\eta \mathrm{*}}=({\mathbf{A}}^{\dagger }\mathbf{A}{)}^{\eta }=({\mathbf{A}}^{\dagger }{)}^{\eta }{\mathbf{A}}^{\eta }.$

  4. $ (\mathbf{A}{\mathbf{A}}^{\dagger }{)}^{\eta \mathrm{*}}=({\mathbf{A}}^{\dagger }{)}^{\eta \mathrm{*}}{\mathbf{A}}^{\eta \mathrm{*}}=(\mathbf{A}{\mathbf{A}}^{\dagger }{)}^{\eta }={\mathbf{A}}^{\eta }({\mathbf{A}}^{\dagger }{)}^{\eta }.$

  5. $ {\mathbf{L}}_A^{\eta \mathrm{*}}=-\eta ({\mathbf{L}}_A)\eta ={\mathbf{L}}_A^{\eta }={\mathbf{L}}_{{A}^{\eta }}={\mathbf{R}}_{{A}^{\eta \mathrm{*}}}.$

  6. $ {\mathbf{R}}_A^{\eta \mathrm{*}}=-\eta ({\mathbf{R}}_A)\eta ={\mathbf{R}}_A^{\eta }={\mathbf{L}}_{{A}^{\eta \mathrm{*}}}={\mathbf{R}}_{{A}^{\eta }}.$

Lemma 2.2.

[71] Let A , B and C be given matrices of appropriate sizes over $ \mathbb{H}$ . Then

  1. $ {\mathbf{A}}^{\dagger }=({\mathbf{A}}^{\mathrm{*}}\mathbf{A}{)}^{\dagger }{\mathbf{A}}^{\mathrm{*}}={\mathbf{A}}^{\mathrm{*}}(\mathbf{A}{\mathbf{A}}^{\mathrm{*}}{)}^{\dagger }.$

  2. $ {\mathbf{L}}_A={\mathbf{L}}_A^2={\mathbf{L}}_A^{\mathrm{*}},{\mathbf{R}}_A={\mathbf{R}}_A^2={\mathbf{R}}_A^{\mathrm{*}}.$

  3. $ {\mathbf{L}}_A(\mathbf{B}{\mathbf{L}}_A{)}^{\dagger }=(\mathbf{B}{\mathbf{L}}_A{)}^{\dagger },({\mathbf{R}}_A\mathbf{C}{)}^{\dagger }{\mathbf{R}}_A=({\mathbf{R}}_A\mathbf{C}{)}^{\dagger }.$

Remark 2.1.

For any $ {\eta }_l\in \{\mathbf{i},\mathbf{j},\mathbf{k}\}$, l = 1, 2, 3, with $ {\eta }_1,{\eta }_2,{\eta }_3$ mutually distinct, and $ q={q}_0+{q}_1{\eta }_1+{q}_2{\eta }_2+{q}_3{\eta }_3$, we denote

$$ \begin{array}{c}{q}^{{\eta }_1}:= -{\eta }_1q{\eta }_1={q}_0+{q}_1{\eta }_1-{q}_2{\eta }_2-{q}_3{\eta }_3,\\ {q}^{-{\eta }_1}:={\eta }_1q{\eta }_1=-{q}_0-{q}_1{\eta }_1+{q}_2{\eta }_2+{q}_3{\eta }_3.\end{array} $$

So, the elements of the main diagonal of an η 1-Hermitian matrix $ \mathbf{A}={\mathbf{A}}^{{\eta }_1\mathrm{*}}=\left({a}_{{ij}}^{{\eta }_1\mathrm{*}}\right)$ must be of the form

$$ {a}_{{ii}}^{{\eta }_1\mathrm{*}}={a}_0+{a}_2{\eta }_2+{a}_3{\eta }_3, $$

and a pair of elements symmetric with respect to the main diagonal can be represented as

$$ \begin{array}{c}{a}_{{ij}}^{{\eta }_1\mathrm{*}}={a}_0+{a}_1{\eta }_1+{a}_2{\eta }_2+{a}_3{\eta }_3,\\ {a}_{{ji}}^{{\eta }_1\mathrm{*}}={a}_0-{a}_1{\eta }_1+{a}_2{\eta }_2+{a}_3{\eta }_3.\end{array} $$

Similarly, the elements of the main diagonal of an η 1-skew-Hermitian matrix $ \mathbf{A}=-{\mathbf{A}}^{{\eta }_1\mathrm{*}}=\left({a}_{{ij}}^{-{\eta }_1\mathrm{*}}\right)$ must be of the form

$$ {a}_{{ii}}^{-{\eta }_1\mathrm{*}}={a}_1{\eta }_1, $$

and a pair of elements symmetric with respect to the main diagonal can be represented as

$$ \begin{array}{c}{a}_{{ij}}^{-{\eta }_1\mathrm{*}}={a}_0+{a}_1{\eta }_1+{a}_2{\eta }_2+{a}_3{\eta }_3,\\ {a}_{{ji}}^{-{\eta }_1\mathrm{*}}=-{a}_0+{a}_1{\eta }_1-{a}_2{\eta }_2-{a}_3{\eta }_3,\end{array} $$

where $ {a}_l\in \mathbb{R}$ for all l = 0,…, 3.
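The component formulas of this remark can be verified directly with quaternion arithmetic. A small Python sketch (helper names ours), taking $ ({\eta }_1,{\eta }_2,{\eta }_3)=(\mathbf{i},\mathbf{j},\mathbf{k})$:

```python
class Q:
    """Quaternion a + b*i + c*j + d*k with the Hamilton product."""
    def __init__(self, a, b=0, c=0, d=0):
        self.a, self.b, self.c, self.d = a, b, c, d
    def __mul__(s, o):
        return Q(s.a*o.a - s.b*o.b - s.c*o.c - s.d*o.d,
                 s.a*o.b + s.b*o.a + s.c*o.d - s.d*o.c,
                 s.a*o.c - s.b*o.d + s.c*o.a + s.d*o.b,
                 s.a*o.d + s.b*o.c - s.c*o.b + s.d*o.a)
    def __neg__(s):   return Q(-s.a, -s.b, -s.c, -s.d)
    def conj(s):      return Q(s.a, -s.b, -s.c, -s.d)
    def __eq__(s, o): return (s.a, s.b, s.c, s.d) == (o.a, o.b, o.c, o.d)

i = Q(0, 1)
q = Q(1, 2, 3, 4)                     # q0 + q1*i + q2*j + q3*k

# q^{eta1} = -eta1 q eta1 = q0 + q1*eta1 - q2*eta2 - q3*eta3 (eta1 = i)
assert -(i * q * i) == Q(1, 2, -3, -4)
# q^{-eta1} = eta1 q eta1 = -q0 - q1*eta1 + q2*eta2 + q3*eta3
assert i * q * i == Q(-1, -2, 3, 4)

# diagonal entry shapes from the remark, via x -> -eta1 * conj(x) * eta1:
d = Q(5, 0, 6, 7)                     # a0 + a2*eta2 + a3*eta3
assert -(i * d.conj() * i) == d       # fixed: eta1-Hermitian diagonal
s = Q(0, 9)                           # a1*eta1
assert -(i * s.conj() * i) == -s      # negated: eta1-skew-Hermitian diagonal
```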

Determinantal representations of generalized inverses and of solutions to some quaternion matrix equations

Due to the non-commutativity of the quaternion skew field, defining a determinant with noncommutative entries (also called a noncommutative determinant) is not trivial (see, e.g. [62, 63]). There are several versions of the definition of a noncommutative determinant (see, e.g., [64–69]). However, it is proved in [70] that if all the functional properties of a determinant over a ring are satisfied, then it takes values only in a commutative subset of the ring. In particular, this means that such a determinant cannot be expanded by cofactors along an arbitrary row or column. To avoid these difficulties, for $ \mathbf{A}\in {\mathbb{H}}^{n\times n}$, we define n row determinants and n column determinants, which do not possess all the functional properties inherent to the usual determinant.

Suppose S n is the symmetric group on the set I n = { 1 , , n } $ {I}_n=\{1,\dots,n\}$.

Definition 2.2.

[42] The ith row determinant of $ \mathbf{A}=({a}_{{ij}})\in {\mathbb{H}}^{n\times n}$ is defined by setting, for all i = 1, …, n,

$$ \begin{array}{c}\mathrm{rde}{\mathrm{t}}_i\mathbf{A}=\sum_{\mathrm{\sigma }\in {S}_n} {\left(-1\right)}^{n-r}({a}_{i{i}_{{k}_1}}{a}_{{i}_{{k}_1}{i}_{{k}_1+1}}\dots {a}_{{i}_{{k}_1+{l}_1}i})\dots ({a}_{{i}_{{k}_r}{i}_{{k}_r+1}}\dots {a}_{{i}_{{k}_r+{l}_r}{i}_{{k}_r}}),\\ \sigma =\left(i{i}_{{k}_1}{i}_{{k}_1+1}\dots {i}_{{k}_1+{l}_1}\right)\left({i}_{{k}_2}{i}_{{k}_2+1}\dots {i}_{{k}_2+{l}_2}\right)\dots \left({i}_{{k}_r}{i}_{{k}_r+1}\dots {i}_{{k}_r+{l}_r}\right),\end{array} $$

where σ is a left-ordered permutation: its first cycle from the left starts with i, every other cycle starts from the left with the minimal of all the integers it contains,

$$ \begin{array}{ccc}{i}_{{k}_t} < {i}_{{k}_t+s}& \mathrm{for}\enspace \mathrm{all}\hspace{1em}t=2,\enspace \dots,r,& s=1,\dots,{l}_t,\end{array} $$

and the disjoint cycles (except the first one) are ordered strictly by increase, from left to right, of their first elements, $ {i}_{{k}_2}<{i}_{{k}_3}<\cdots < {i}_{{k}_r}$.

Definition 2.3.

[42] The jth column determinant of $ \mathbf{A}=({a}_{{ij}})\in {\mathbb{H}}^{n\times n}$ is defined by setting, for all j = 1, …, n,

$$ \begin{array}{c}\mathrm{cde}{\mathrm{t}}_j\mathbf{A}=\sum_{\tau \in {S}_n} (-1{)}^{n-r}({a}_{{j}_{{k}_r}{j}_{{k}_r+{l}_r}}\dots {a}_{{j}_{{k}_r+1}{j}_{{k}_r}})\dots ({a}_{j{j}_{{k}_1+{l}_1}}\dots {a}_{{j}_{{k}_1+1}{j}_{{k}_1}}{a}_{{j}_{{k}_1}j}),\\ \tau =\left({j}_{{k}_r+{l}_r}\dots {j}_{{k}_r+1}{j}_{{k}_r}\right)\dots \left({j}_{{k}_2+{l}_2}\dots {j}_{{k}_2+1}{j}_{{k}_2}\right)\left({j}_{{k}_1+{l}_1}\dots {j}_{{k}_1+1}{j}_{{k}_1}j\right),\end{array} $$

where τ is a right-ordered permutation: its first cycle from the right starts with j, every other cycle starts from the right with the minimal of all the integers it contains,

$$ \begin{array}{ccc}{j}_{{k}_t} < {j}_{{k}_t+s}& \mathrm{for}\enspace \mathrm{all}\hspace{1em}t=2,\dots,r,& s=1,\dots,{l}_t,\end{array} $$

and the disjoint cycles (except the first one) are ordered strictly by increase, from right to left, of their first elements, $ {j}_{{k}_2}<{j}_{{k}_3}<\cdots < {j}_{{k}_r}$.

Remark 2.4.

So, for a 2×2 matrix with quaternion entries $ \mathbf{A}=\left[\begin{array}{ll}{a}_{11}& {a}_{12}\\ {a}_{21}& {a}_{22}\end{array}\right]$, we have the four (row-column) determinants

$$ \begin{array}{c}\begin{array}{cc}\mathrm{rde}{\mathrm{t}}_1\mathbf{A}={a}_{11}{a}_{22}-{a}_{12}{a}_{21},& \mathrm{rde}{\mathrm{t}}_2\mathbf{A}={a}_{22}{a}_{11}-{a}_{21}{a}_{12},\end{array}\\ \begin{array}{cc}\mathrm{cde}{\mathrm{t}}_1\mathbf{A}={a}_{22}{a}_{11}-{a}_{12}{a}_{21},& \mathrm{cde}{\mathrm{t}}_2\mathbf{A}={a}_{11}{a}_{22}-{a}_{21}{a}_{12}.\end{array}\end{array} $$

Since $ {a}_{{ij}}\in \mathbb{H}$ for all i, j = 1, 2, these four determinants are not equal to each other, in general.
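The four determinants above are easy to compute mechanically. The Python sketch below (helper names ours) shows that they differ for a generic quaternion matrix, while for a Hermitian matrix all four coincide and are real (cf. Remark 2.5):

```python
class Q:
    """Quaternion a + b*i + c*j + d*k with the Hamilton product."""
    def __init__(self, a, b=0, c=0, d=0):
        self.a, self.b, self.c, self.d = a, b, c, d
    def __mul__(s, o):
        return Q(s.a*o.a - s.b*o.b - s.c*o.c - s.d*o.d,
                 s.a*o.b + s.b*o.a + s.c*o.d - s.d*o.c,
                 s.a*o.c - s.b*o.d + s.c*o.a + s.d*o.b,
                 s.a*o.d + s.b*o.c - s.c*o.b + s.d*o.a)
    def __sub__(s, o): return Q(s.a-o.a, s.b-o.b, s.c-o.c, s.d-o.d)
    def __eq__(s, o):  return (s.a, s.b, s.c, s.d) == (o.a, o.b, o.c, o.d)

one, i, j, k = Q(1), Q(0, 1), Q(0, 0, 1), Q(0, 0, 0, 1)

# Remark 2.4, for A = [[a11, a12], [a21, a22]]
def rdet1(A): return A[0][0]*A[1][1] - A[0][1]*A[1][0]
def rdet2(A): return A[1][1]*A[0][0] - A[1][0]*A[0][1]
def cdet1(A): return A[1][1]*A[0][0] - A[0][1]*A[1][0]
def cdet2(A): return A[0][0]*A[1][1] - A[1][0]*A[0][1]

A = [[i, j], [one, k]]
assert rdet1(A) == Q(0, 0, -2, 0)     # i*k - j*1 = -2j
assert rdet2(A) == Q(0)               # k*i - 1*j = 0
assert rdet1(A) != rdet2(A)           # not equal in general

H = [[Q(2), i], [Q(0, -1), Q(3)]]     # Hermitian: a21 = conj(a12)
vals = [f(H) for f in (rdet1, rdet2, cdet1, cdet2)]
assert all(v == Q(5) for v in vals)   # all four coincide and are real
```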

We state some properties of row-column determinants needed below.

Lemma 2.3.

[42] If the ith row of A H n × n $ \mathbf{A}\in {\mathbb{H}}^{n\times n}$ is a left linear combination of other row vectors, i.e. a i . = α 1 b 1 + + α k b k $ {a}_{i.}={\alpha }_1{\mathbf{b}}_1+\cdots +{\alpha }_k{\mathbf{b}}_k$ , where α l H $ {\alpha }_l\in \mathbb{H}$ and b l H 1 × n $ {\mathbf{b}}_l\in {\mathbb{H}}^{1\times n}$ for all l = 1, …, k and i = 1, …, n, then

$$ \mathrm{rde}{\mathrm{t}}_i\enspace {\mathbf{A}}_{i.}\left({\alpha }_1{\mathbf{b}}_1+\cdots +{\alpha }_k{\mathbf{b}}_k\right)=\sum_l {\alpha }_l\mathrm{rde}{\mathrm{t}}_i\enspace {\mathbf{A}}_{i.}\left({\mathbf{b}}_l\right). $$

Lemma 2.4.

[42] If the jth column of A H m × n $ \mathbf{A}\in {\mathbb{H}}^{m\times n}$ is a right linear combination of other column vectors, i.e. a . j = b 1 α 1 + + b k α k $ {a}_{.j}={\mathbf{b}}_1{\alpha }_1+\cdots +{\mathbf{b}}_k{\alpha }_k$ , where α l H $ {\alpha }_l\in \mathbb{H}$ and b l H n × 1 $ {\mathbf{b}}_l\in {\mathbb{H}}^{n\times 1}$ for all l = 1, …, k and j = 1, …, n, then

$$ \mathrm{cde}{\mathrm{t}}_j\enspace {\mathbf{A}}_{.j}\left({\mathbf{b}}_1{\alpha }_1+\cdots +{\mathbf{b}}_k{\alpha }_k\right)=\sum_l \mathrm{cde}{\mathrm{t}}_j\enspace {\mathbf{A}}_{.j}\left({\mathbf{b}}_l\right){\alpha }_l. $$
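Lemmas 2.3 and 2.4 can be tested on 2 × 2 examples: rdet1 is left-linear in the first row, and cdet2 is right-linear in the second column. A Python sketch (helper names ours):

```python
class Q:
    """Quaternion a + b*i + c*j + d*k with the Hamilton product."""
    def __init__(self, a, b=0, c=0, d=0):
        self.a, self.b, self.c, self.d = a, b, c, d
    def __mul__(s, o):
        return Q(s.a*o.a - s.b*o.b - s.c*o.c - s.d*o.d,
                 s.a*o.b + s.b*o.a + s.c*o.d - s.d*o.c,
                 s.a*o.c - s.b*o.d + s.c*o.a + s.d*o.b,
                 s.a*o.d + s.b*o.c - s.c*o.b + s.d*o.a)
    def __add__(s, o): return Q(s.a+o.a, s.b+o.b, s.c+o.c, s.d+o.d)
    def __sub__(s, o): return Q(s.a-o.a, s.b-o.b, s.c-o.c, s.d-o.d)
    def __eq__(s, o):  return (s.a, s.b, s.c, s.d) == (o.a, o.b, o.c, o.d)

one, i, j, k = Q(1), Q(0, 1), Q(0, 0, 1), Q(0, 0, 0, 1)

def rdet1(A): return A[0][0]*A[1][1] - A[0][1]*A[1][0]
def cdet2(A): return A[0][0]*A[1][1] - A[1][0]*A[0][1]

a1, a2 = Q(1, 1), j          # quaternion coefficients
b1, b2 = [i, j], [k, one]    # rows to combine
r2 = [Q(1, 0, 1), k]         # fixed second row

# Lemma 2.3: rdet1 is left-linear in its first row
lhs = rdet1([[a1*b1[0] + a2*b2[0], a1*b1[1] + a2*b2[1]], r2])
rhs = a1*rdet1([b1, r2]) + a2*rdet1([b2, r2])
assert lhs == rhs

# Lemma 2.4: cdet2 is right-linear in its second column
c1, c2 = [i, j], [one, k]    # columns to combine, as [top, bottom]
f1 = [Q(2), k]               # fixed first column
lhs = cdet2([[f1[0], c1[0]*a1 + c2[0]*a2], [f1[1], c1[1]*a1 + c2[1]*a2]])
rhs = (cdet2([[f1[0], c1[0]], [f1[1], c1[1]]])*a1
       + cdet2([[f1[0], c2[0]], [f1[1], c2[1]]])*a2)
assert lhs == rhs
```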

Lemma 2.5.

[43] Let A H n × n $ \mathbf{A}\in {\mathbb{H}}^{n\times n}$ . Then cde t i   A * = rde t i   A ̅ $ {cde}{t}_i\enspace {\mathbf{A}}^{*}=\overline{{rde}{t}_i\enspace \mathbf{A}}$ , rde t i   A * = cde t i   A ̅ $ {rde}{t}_i\enspace {\mathbf{A}}^{*}=\overline{{cde}{t}_i\enspace \mathbf{A}}$ for all i = 1, …, n.

Since, by Definitions 2.2 and 2.3, for $ \mathbf{A}\in {\mathbb{H}}^{n\times n}$,

$$ \begin{array}{c}\mathrm{rde}{\mathrm{t}}_i\enspace {\mathbf{A}}^{\eta }=\mathrm{rde}{\mathrm{t}}_i\enspace (-\eta \mathbf{A}\eta )=-\eta (\mathrm{rde}{\mathrm{t}}_i\enspace \mathbf{A})\eta,\\ \mathrm{cde}{\mathrm{t}}_i\enspace {\mathbf{A}}^{\eta }=\mathrm{cde}{\mathrm{t}}_i\enspace (-\eta \mathbf{A}\eta )=-\eta (\mathrm{cde}{\mathrm{t}}_i\enspace \mathbf{A})\eta,\\ \begin{array}{c}\mathrm{rde}{\mathrm{t}}_i\enspace (-{\mathbf{A}}^{\eta })=\mathrm{rde}{\mathrm{t}}_i\enspace (\eta \mathbf{A}\eta )=(-1{)}^{n-1}\eta (\mathrm{rde}{\mathrm{t}}_i\enspace \mathbf{A})\eta,\\ \mathrm{cde}{\mathrm{t}}_i\enspace (-{\mathbf{A}}^{\eta })=\mathrm{cde}{\mathrm{t}}_i\enspace (\eta \mathbf{A}\eta )=(-1{)}^{n-1}\eta \left(\mathrm{cde}{\mathrm{t}}_i\enspace \mathbf{A}\right)\eta,\end{array}\end{array} $$

for all i = 1, …, n, the next lemma follows immediately from Lemma 2.5.

Lemma 2.6.

Let A H n × n $ \mathbf{A}\in {\mathbb{H}}^{n\times n}$ . Then

$$ \begin{array}{c}\begin{array}{cc}\mathrm{rde}{\mathrm{t}}_i\enspace {\mathbf{A}}^{\eta \mathrm{*}}=-\eta (\overline{\mathrm{cde}{\mathrm{t}}_i\enspace \mathbf{A}})\eta,& \mathrm{cde}{\mathrm{t}}_i\enspace {\mathbf{A}}^{\eta \mathrm{*}}=-\eta (\overline{\mathrm{rde}{\mathrm{t}}_i\enspace \mathbf{A}})\eta,\end{array}\\ \begin{array}{cc}\mathrm{rde}{\mathrm{t}}_i\enspace (-{\mathbf{A}}^{\eta \mathrm{*}})=(-1{)}^{n-1}\eta (\overline{\mathrm{cde}{\mathrm{t}}_i\enspace \mathbf{A}})\eta,& \mathrm{cde}{\mathrm{t}}_i\enspace (-{\mathbf{A}}^{\eta \mathrm{*}})=(-1{)}^{n-1}\eta (\overline{\mathrm{rde}{\mathrm{t}}_i\enspace \mathbf{A}})\eta,\end{array}\end{array} $$

for all i = 1, …, n.
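Lemmas 2.5 and 2.6 can likewise be verified mechanically on a 2 × 2 example (a Python sketch, helper names ours; here η = j):

```python
class Q:
    """Quaternion a + b*i + c*j + d*k with the Hamilton product."""
    def __init__(self, a, b=0, c=0, d=0):
        self.a, self.b, self.c, self.d = a, b, c, d
    def __mul__(s, o):
        return Q(s.a*o.a - s.b*o.b - s.c*o.c - s.d*o.d,
                 s.a*o.b + s.b*o.a + s.c*o.d - s.d*o.c,
                 s.a*o.c - s.b*o.d + s.c*o.a + s.d*o.b,
                 s.a*o.d + s.b*o.c - s.c*o.b + s.d*o.a)
    def __sub__(s, o): return Q(s.a-o.a, s.b-o.b, s.c-o.c, s.d-o.d)
    def __neg__(s):    return Q(-s.a, -s.b, -s.c, -s.d)
    def conj(s):       return Q(s.a, -s.b, -s.c, -s.d)
    def __eq__(s, o):  return (s.a, s.b, s.c, s.d) == (o.a, o.b, o.c, o.d)

j, k = Q(0, 0, 1), Q(0, 0, 0, 1)

def rdet1(A): return A[0][0]*A[1][1] - A[0][1]*A[1][0]
def cdet1(A): return A[1][1]*A[0][0] - A[0][1]*A[1][0]
def ct(A):                  # Hermitian adjoint A*
    return [[A[c][r].conj() for c in range(2)] for r in range(2)]
def eta_star(A, eta):       # A^{eta*} = -eta A^* eta
    return [[-(eta * A[c][r].conj() * eta) for c in range(2)] for r in range(2)]

A = [[Q(1, 1), j], [k, Q(2)]]

# Lemma 2.5: cdet1(A*) = conj(rdet1(A))
assert cdet1(ct(A)) == rdet1(A).conj()

# Lemma 2.6 (first identity): rdet1(A^{eta*}) = -eta * conj(cdet1(A)) * eta
eta = j
assert rdet1(eta_star(A, eta)) == -(eta * cdet1(A).conj() * eta)
```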

Remark 2.5.

Since [42] for Hermitian A we have

$$ \mathrm{rde}{\mathrm{t}}_1\mathbf{A}=\cdots =\mathrm{rde}{\mathrm{t}}_n\mathbf{A}=\mathrm{cde}{\mathrm{t}}_1\mathbf{A}=\cdots =\mathrm{cde}{\mathrm{t}}_n\mathbf{A}\in \mathbb{R}, $$

the determinant of a Hermitian matrix is defined by setting $ \mathrm{det}\mathbf{A}:=\mathrm{rde}{\mathrm{t}}_i\enspace \mathbf{A}=\mathrm{cde}{\mathrm{t}}_i\enspace \mathbf{A}$ for any i = 1, …, n.

Its properties have been studied thoroughly in [43]. In particular, they yield the definition of the determinantal rank of a quaternion matrix A as the largest possible size of a nonzero principal minor of its corresponding Hermitian matrices, i.e. $ \mathrm{rank}\mathbf{A}=\mathrm{rank}({\mathbf{A}}^{\mathrm{*}}\mathbf{A})=\mathrm{rank}(\mathbf{A}{\mathbf{A}}^{\mathrm{*}})$.

For determinantal representations of the Moore–Penrose inverse, we use the following notation. Let $ \alpha:=\left\{{\alpha }_1,\dots,{\alpha }_k\right\}\subseteq \left\{1,\dots,m\right\}$ and $ \beta:=\left\{{\beta }_1,\dots,{\beta }_k\right\}\subseteq \left\{1,\dots,n\right\}$ be subsets with $ 1\le k\le \mathrm{min}\left\{m,n\right\}$. Let $ {\mathbf{A}}_{\beta }^{\alpha }$ denote the submatrix of $ \mathbf{A}\in {\mathbb{H}}^{m\times n}$ with rows and columns indexed by α and β, respectively. Then, $ {\mathbf{A}}_{\alpha }^{\alpha }$ is a principal submatrix of A with rows and columns indexed by α and, for Hermitian A, $ |\mathbf{A}{|}_{\alpha }^{\alpha }$ is the corresponding principal minor of det A. Suppose that

$$ {L}_{k,n}:=\left\{\begin{array}{cc}\alpha:\alpha =\left({\alpha }_1,\dots,{\alpha }_k\right),& 1\le {\alpha }_1<\cdots < {\alpha }_k\le n\end{array}\enspace \right\} $$

stands for the collection of strictly increasing sequences of 1 ≤ k ≤ n integers chosen from {1, …, n}. For fixed $ i\in \alpha $ and $ j\in \beta $, put $ {I}_{r,m}\left\{i\right\}:=\left\{\alpha:\alpha \in {L}_{r,m},i\in \alpha \right\}$, $ {J}_{r,n}\left\{j\right\}:=\left\{\beta:\beta \in {L}_{r,n},j\in \beta \right\}$.

Let $ {\mathbf{a}}_{.j}$ and $ {\mathbf{a}}_{i.}$ denote the jth column and the ith row of A, and $ {\mathbf{a}}_{.j}^{\mathrm{*}}$ and $ {\mathbf{a}}_{i.}^{\mathrm{*}}$ those of A*. Let $ {\mathbf{A}}_{i.}\left(\mathbf{b}\right)$ and $ {\mathbf{A}}_{.j}\left(\mathbf{c}\right)$ stand for the matrices obtained from A by replacing its ith row with the row b and its jth column with the column c, respectively.

Theorem 2.6.

[44] If $ \mathbf{A}\in {\mathbb{H}}_r^{m\times n}$ , then its Moore–Penrose inverse $ {\mathbf{A}}^{\dagger }=\left({a}_{{ij}}^{\dagger }\right)\in {\mathbb{H}}^{n\times m}$ is determined as follows:

$$ {a}_{{ij}}^{\dagger }=\frac{\sum_{\beta \in {J}_{r,n}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left({\left({\mathbf{A}}^{\mathrm{*}}\mathbf{A}\right)}_{.i}\left({\mathbf{a}}_{.j}^{\mathrm{*}}\right)\right)}_{\beta }^{\beta }}{\sum_{\beta \in {J}_{r,n}} {\left|{\mathbf{A}}^{\mathrm{*}}\mathbf{A}\right|}_{\beta }^{\beta }} $$(4)

$$ =\frac{\sum_{\alpha \in {I}_{r,m}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left((\mathbf{A}{\mathbf{A}}^{\mathrm{*}}{)}_{j.}({\mathbf{a}}_{i.}^{\mathrm{*}})\right)}_{\alpha }^{\alpha }}{\sum_{\alpha \in {I}_{r,m}} {\left|\mathbf{A}{\mathbf{A}}^{\mathrm{*}}\right|}_{\alpha }^{\alpha }}. $$(5)

Remark 2.7.

For a full-rank matrix $ \mathbf{A}\in {\mathbb{H}}_r^{m\times n}$, a row vector $ \mathbf{b}\in {\mathbb{H}}^{1\times m}$, and a column vector $ \mathbf{c}\in {\mathbb{H}}^{n\times 1}$, we have, for all i = 1, …, m, j = 1, …, n:

  • if rank A = n, then in (4)

$$ \begin{array}{c}\mathrm{cde}{\mathrm{t}}_j\left({\left({\mathbf{A}}^{\mathrm{*}}\mathbf{A}\right)}_{.j}\left(\mathbf{c}\right)\right)=\sum_{\beta \in {J}_{n,n}\left\{j\right\}} \mathrm{cde}{\mathrm{t}}_j{\left({\left({\mathbf{A}}^{\mathrm{*}}\mathbf{A}\right)}_{.j}\left(\mathbf{c}\right)\right)}_{\beta }^{\beta },\\ \mathrm{det}\left({\mathbf{A}}^{\mathrm{*}}\mathbf{A}\right)=\sum_{\beta \in {J}_{n,n}} {\left|{\mathbf{A}}^{\mathrm{*}}\mathbf{A}\right|}_{\beta }^{\beta };\end{array} $$

  • if rank A = m, then in (5)

$$ \begin{array}{c}\mathrm{rde}{\mathrm{t}}_i\left((\mathbf{A}{\mathbf{A}}^{\mathrm{*}}{)}_{i.}\left(\mathbf{b}\right)\right)=\sum_{\alpha \in {I}_{m,m}\left\{i\right\}} \mathrm{rde}{\mathrm{t}}_i{\left((\mathbf{A}{\mathbf{A}}^{\mathrm{*}}{)}_{i.}\left(\mathbf{b}\right)\right)}_{\alpha }^{\alpha },\\ \mathrm{det}\left(\mathbf{A}{\mathbf{A}}^{\mathrm{*}}\right)=\sum_{\alpha \in {I}_{m,m}} {\left|\mathbf{A}{\mathbf{A}}^{\mathrm{*}}\right|}_{\alpha }^{\alpha }.\end{array} $$
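To make the full-column-rank case concrete, the following Python sketch (all helper names ours) implements formula (4), in the simplified form just stated, for a 3 × 2 quaternion matrix of rank 2, using exact rational arithmetic, and checks the result against the four Penrose equations:

```python
from fractions import Fraction as F

class Q:
    """Quaternion with exact rational components (Hamilton product)."""
    def __init__(self, a, b=0, c=0, d=0):
        self.a, self.b, self.c, self.d = F(a), F(b), F(c), F(d)
    def __mul__(s, o):
        return Q(s.a*o.a - s.b*o.b - s.c*o.c - s.d*o.d,
                 s.a*o.b + s.b*o.a + s.c*o.d - s.d*o.c,
                 s.a*o.c - s.b*o.d + s.c*o.a + s.d*o.b,
                 s.a*o.d + s.b*o.c - s.c*o.b + s.d*o.a)
    def __add__(s, o): return Q(s.a+o.a, s.b+o.b, s.c+o.c, s.d+o.d)
    def __sub__(s, o): return Q(s.a-o.a, s.b-o.b, s.c-o.c, s.d-o.d)
    def conj(s):       return Q(s.a, -s.b, -s.c, -s.d)
    def over(s, r):    return Q(s.a/r, s.b/r, s.c/r, s.d/r)  # divide by real r
    def __eq__(s, o):  return (s.a, s.b, s.c, s.d) == (o.a, o.b, o.c, o.d)

one, i, j, k, zero = Q(1), Q(0, 1), Q(0, 0, 1), Q(0, 0, 0, 1), Q(0)

def matmul(X, Y):
    return [[sum((X[r][t]*Y[t][c] for t in range(len(Y))), zero)
             for c in range(len(Y[0]))] for r in range(len(X))]

def ct(X):  # Hermitian adjoint X*
    return [[X[c][r].conj() for c in range(len(X))] for r in range(len(X[0]))]

# column determinants of a 2x2 quaternion matrix (Remark 2.4)
def cdet1(M): return M[1][1]*M[0][0] - M[0][1]*M[1][0]
def cdet2(M): return M[0][0]*M[1][1] - M[1][0]*M[0][1]
cdets = (cdet1, cdet2)

A = [[one, i], [j, zero], [zero, k]]   # A in H^{3x2}, rank A = n = 2
Astar = ct(A)
G = matmul(Astar, A)                   # A*A is Hermitian; here G = [[2, i], [-i, 2]]
d = cdet1(G)                           # det(A*A), a real quaternion
assert d == Q(3)

# a†_{rc} = cdet_r( (A*A)_{.r}(a*_{.c}) ) / det(A*A)
X = [[None]*3 for _ in range(2)]
for r in range(2):
    for c in range(3):
        M = [row[:] for row in G]
        for t in range(2):
            M[t][r] = Astar[t][c]      # replace r-th column of A*A with a*_{.c}
        X[r][c] = cdets[r](M).over(d.a)

# the four Penrose equations ((3)-(4): AX and XA are Hermitian)
AX, XA = matmul(A, X), matmul(X, A)
assert matmul(AX, A) == A
assert matmul(X, AX) == X
assert ct(AX) == AX and ct(XA) == XA
```

With A of full column rank, the formula amounts to a Cramer-type solution of $ ({\mathbf{A}}^{\mathrm{*}}\mathbf{A})\mathbf{x}={\mathbf{a}}_{.j}^{\mathrm{*}}$, and the computed X coincides with $ {({\mathbf{A}}^{\mathrm{*}}\mathbf{A})}^{-1}{\mathbf{A}}^{\mathrm{*}}$.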

Corollary 2.1.

If $ \mathbf{A}\in {\mathbb{H}}_r^{m\times n}$ , then the Moore–Penrose inverse $ {\left({\mathbf{A}}^{\eta }\right)}^{\dagger }=\left({a}_{{ij}}^{\eta \dagger }\right)\in {\mathbb{H}}^{n\times m}$ has the following determinantal representations:

$$ {a}_{{ij}}^{\eta \dagger }=-\eta \frac{\sum_{\beta \in {J}_{r,n}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left({\left({\mathbf{A}}^{\mathrm{*}}\mathbf{A}\right)}_{.i}\left({\mathbf{a}}_{.j}^{\mathrm{*}}\right)\right)}_{\beta }^{\beta }}{\sum_{\beta \in {J}_{r,n}} {\left|{\mathbf{A}}^{\mathrm{*}}\mathbf{A}\right|}_{\beta }^{\beta }}\eta =-\eta \frac{\sum_{\alpha \in {I}_{r,m}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left((\mathbf{A}{\mathbf{A}}^{\mathrm{*}}{)}_{j.}({\mathbf{a}}_{i.}^{\mathrm{*}})\right)}_{\alpha }^{\alpha }}{\sum_{\alpha \in {I}_{r,m}} {\left|\mathbf{A}{\mathbf{A}}^{\mathrm{*}}\right|}_{\alpha }^{\alpha }}\eta. $$

Remark 2.8.

Since $ ({\mathbf{A}}^{*}{)}^{\dagger }=({\mathbf{A}}^{\dagger }{)}^{*}$, we can use the notation $ {\mathbf{A}}^{\dagger,*}:=({\mathbf{A}}^{*}{)}^{\dagger }$ . By Lemma 2.5, for the Hermitian adjoint matrix $ {\mathbf{A}}^{*}\in {\mathbb{H}}_r^{n\times m}$, its Moore–Penrose inverse $ ({\mathbf{A}}^{*}{)}^{\dagger }=\left(({a}_{{ij}}^{*}{)}^{\dagger }\right)\in {\mathbb{H}}^{m\times n}$ can be expressed as

$$ ({a}_{{ij}}^{\mathrm{*}}{)}^{\dagger }=\overline{({a}_{{ji}}{)}^{\dagger }}=\frac{\sum_{\alpha \in {I}_{r,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left({\left({\mathbf{A}}^{\mathrm{*}}\mathbf{A}\right)}_{j.}\left({\mathbf{a}}_{i.}\right)\right)}_{\alpha }^{\alpha }}{\sum_{\alpha \in {I}_{r,n}} {\left|{\mathbf{A}}^{\mathrm{*}}\mathbf{A}\right|}_{\alpha }^{\alpha }}=\frac{\sum_{\beta \in {J}_{r,m}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left((\mathbf{A}{\mathbf{A}}^{\mathrm{*}}{)}_{.i}({\mathbf{a}}_{.j})\right)}_{\beta }^{\beta }}{\sum_{\beta \in {J}_{r,m}} {\left|\mathbf{A}{\mathbf{A}}^{\mathrm{*}}\right|}_{\beta }^{\beta }}. $$

Remark 2.9.

Suppose $ \mathbf{A}\in {\mathbb{H}}_r^{m\times n}$ . By Lemma 2.6 and Remark 2.8, for the η-Hermitian adjoint matrix $ {\mathbf{A}}^{\eta *}=({a}_{{ij}}^{\eta *})$ and the η-skew-Hermitian adjoint matrix $ -{\mathbf{A}}^{\eta *}=({a}_{{ij}}^{-\eta *})$ , determinantal representations of their Moore–Penrose inverses $ ({\mathbf{A}}^{\eta *}{)}^{\dagger }=\left(({a}_{{ij}}^{\eta *}{)}^{\dagger }\right)\in {\mathbb{H}}^{m\times n}$ and $ (-{\mathbf{A}}^{\eta *}{)}^{\dagger }=\left(({a}_{{ij}}^{-\eta *}{)}^{\dagger }\right)$ are, respectively,

$$ ({a}_{{ij}}^{\eta \mathrm{*}}{)}^{\dagger }=-\eta \overline{({a}_{{ji}}{)}^{\dagger }}\eta =-\eta \frac{\sum_{\alpha \in {I}_{r,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left({\left({\mathbf{A}}^{\mathrm{*}}\mathbf{A}\right)}_{j.}\left({\mathbf{a}}_{i.}\right)\right)}_{\alpha }^{\alpha }}{\sum_{\alpha \in {I}_{r,n}} {\left|{\mathbf{A}}^{\mathrm{*}}\mathbf{A}\right|}_{\alpha }^{\alpha }}\eta $$(6)

$$ =-\eta \frac{\sum_{\beta \in {J}_{r,m}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left((\mathbf{A}{\mathbf{A}}^{\mathrm{*}}{)}_{.i}({\mathbf{a}}_{.j})\right)}_{\beta }^{\beta }}{\sum_{\beta \in {J}_{r,m}} {\left|\mathbf{A}{\mathbf{A}}^{\mathrm{*}}\right|}_{\beta }^{\beta }}\eta, $$(7)

$$ ({a}_{{ij}}^{-\eta \mathrm{*}}{)}^{\dagger }=\eta \overline{({a}_{{ji}}{)}^{\dagger }}\eta =\eta \frac{\sum_{\alpha \in {I}_{r,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left({\left({\mathbf{A}}^{\mathrm{*}}\mathbf{A}\right)}_{j.}\left({\mathbf{a}}_{i.}\right)\right)}_{\alpha }^{\alpha }}{\sum_{\alpha \in {I}_{r,n}} {\left|{\mathbf{A}}^{\mathrm{*}}\mathbf{A}\right|}_{\alpha }^{\alpha }}\eta =\eta \frac{\sum_{\beta \in {J}_{r,m}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left((\mathbf{A}{\mathbf{A}}^{\mathrm{*}}{)}_{.i}({\mathbf{a}}_{.j})\right)}_{\beta }^{\beta }}{\sum_{\beta \in {J}_{r,m}} {\left|\mathbf{A}{\mathbf{A}}^{\mathrm{*}}\right|}_{\beta }^{\beta }}\eta. $$

Since the projection matrices $ {\mathbf{A}}^{\dagger }\mathbf{A}=:{\mathbf{Q}}_A=\left({q}_{{ij}}\right)$ and $ \mathbf{A}{\mathbf{A}}^{\dagger }=:{\mathbf{P}}_A=\left({p}_{{ij}}\right)$ are Hermitian, we have $ {q}_{{ij}}=\overline{{q}_{{ji}}}$ and $ {p}_{{ij}}=\overline{{p}_{{ji}}}$ for all i, j. The following corollary evidently follows from Theorem 2.6 and Remark 2.8.

Corollary 2.2.

If $ \mathbf{A}\in {\mathbb{H}}_r^{m\times n},$ then its induced projection matrices $ {\mathbf{Q}}_A={\left({q}_{{ij}}\right)}_{n\times n}$ and $ {\mathbf{P}}_A={\left({p}_{{ij}}\right)}_{m\times m}$ are determined as follows:

q ij = β J r , n { i } cde t i ( ( A * A ) . i ( a ̇ . j ) ) β β β J r , n | A * A | β β = α I r , n { j } rde t j ( ( A * A ) j . ( a ̇ i . ) ) α α α I r , n | A * A | α α , $$ {q}_{{ij}}=\frac{\sum_{\beta \in {J}_{r,n}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left({\left({\mathbf{A}}^{\mathrm{*}}\mathbf{A}\right)}_{.i}\left({\boldsymbol{\dot{{\rm a}}}}_{.j}\right)\right)}_{\beta }^{\beta }}{\sum_{\beta \in {J}_{r,n}} {\left|{\mathbf{A}}^{\mathrm{*}}\mathbf{A}\right|}_{\beta }^{\beta }}=\frac{\sum_{\alpha \in {I}_{r,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left({\left({\mathbf{A}}^{\mathrm{*}}\mathbf{A}\right)}_{j.}\left({\boldsymbol{\dot{{\rm a}}}}_{i.}\right)\right)}_{\alpha }^{\alpha }}{\sum_{\alpha \in {I}_{r,n}} {\left|{\mathbf{A}}^{\mathrm{*}}\mathbf{A}\right|}_{\alpha }^{\alpha }}, $$(8)

p ij = α I r , m { j } rde t j ( ( A A * ) j . ( a ̈ i . ) ) α α α I r , m | A A * | α α = β J r , m { i } cde t i ( ( A A * ) . i ( a ̈ . j ) ) β β β J r , m | A A * | β β , $$ {p}_{{ij}}=\frac{\sum_{\alpha \in {I}_{r,m}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left((\mathbf{A}{\mathbf{A}}^{\mathrm{*}}{)}_{j.}({\boldsymbol{\ddot{{\rm a}}}}_{i.})\right)}_{\alpha }^{\alpha }}{\sum_{\alpha \in {I}_{r,m}} {\left|\mathbf{A}{\mathbf{A}}^{\mathrm{*}}\right|}_{\alpha }^{\alpha }}=\frac{\sum_{\beta \in {J}_{r,m}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left((\mathbf{A}{\mathbf{A}}^{\mathrm{*}}{)}_{.i}({\boldsymbol{\ddot{{\rm a}}}}_{.j})\right)}_{\beta }^{\beta }}{\sum_{\beta \in {J}_{r,m}} {\left|\mathbf{A}{\mathbf{A}}^{\mathrm{*}}\right|}_{\beta }^{\beta }}, $$(9) where a ̇ . j $ {\boldsymbol{\dot{a}}}_{.j}$ and a ̇ i . $ {\boldsymbol{\dot{a}}}_{i.}$ are the jth column and the ith row of A * A H n × n $ {\mathbf{A}}^{\mathrm{*}}\mathbf{A}\in {\mathbb{H}}^{n\times n}$, and a ̈ . j $ {\boldsymbol{\ddot{a}}}_{.j}$ and a ̈ i . $ {\boldsymbol{\ddot{a}}}_{i.}$ are the jth column and the ith row of A A * H m × m $ \mathbf{A}{\mathbf{A}}^{\mathrm{*}}\in {\mathbb{H}}^{m\times m}$, respectively.
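As a quick numerical illustration of the Hermitian projections discussed above (a sketch only, not part of the determinantal machinery of this section), quaternions can be modelled in pure Python as 4-tuples, and one can check that P_A = AA† is Hermitian and idempotent for a quaternion column vector A. Here we use the elementary closed form A† = A*(A*A)^{-1}, valid in this special case because A*A is a positive real scalar; the sample entries are arbitrary.

```python
# Quaternions as 4-tuples (h0, h1, h2, h3) = h0 + h1*i + h2*j + h3*k.
def qmul(a, b):
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def qconj(a):
    return (a[0], -a[1], -a[2], -a[3])

def qscale(s, a):
    return tuple(s * x for x in a)

# A = [a1; a2] is a 2x1 quaternion matrix (column vector) with arbitrary entries.
a1, a2 = (1.0, 2.0, -1.0, 0.5), (0.0, 1.0, 3.0, -2.0)
norm2 = sum(x * x for x in a1) + sum(x * x for x in a2)   # A*A = ||A||^2, a real scalar
adag = [qscale(1.0 / norm2, qconj(a1)),                   # A† as a 1x2 row
        qscale(1.0 / norm2, qconj(a2))]

# P_A = A A† is 2x2: P[s][t] = a_s * (A†)_t.
P = [[qmul(a, d) for d in adag] for a in (a1, a2)]

# Hermitian: p_st = conj(p_ts);  idempotent: P P = P.
herm = all(abs(P[s][t][c] - qconj(P[t][s])[c]) < 1e-12
           for s in range(2) for t in range(2) for c in range(4))
P2 = [[tuple(sum(qmul(P[s][z], P[z][t])[c] for z in range(2)) for c in range(4))
       for t in range(2)] for s in range(2)]
idem = all(abs(P2[s][t][c] - P[s][t][c]) < 1e-12
           for s in range(2) for t in range(2) for c in range(4))
print(herm, idem)
```

The same check works for Q_A = A†A, which for a column vector collapses to the real scalar 1.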

Cramer’s rule for the system (2)

The next lemma gives the explicit matrix form of a general solution to the system (1).

Lemma 3.1.

[21] Suppose that A 1 H m × n $ {\mathbf{A}}_1\in {\mathbb{H}}^{m\times n}$ , B 1 H r × s $ {\mathbf{B}}_1\in {\mathbb{H}}^{r\times s}$ , C 1 H m × s $ {\mathbf{C}}_1\in {\mathbb{H}}^{m\times s}$ , A 2 H k × n $ {\mathbf{A}}_2\in {\mathbb{H}}^{k\times n}$ , B 2 H r × p $ {\mathbf{B}}_2\in {\mathbb{H}}^{r\times p}$ , C 2 H k × p $ {\mathbf{C}}_2\in {\mathbb{H}}^{k\times p}$ are known and X H n × r $ X\in {\mathbb{H}}^{n\times r}$ is unknown. Put H = A 2 L A 1 $ \mathrm{H}={\mathbf{A}}_2{\mathbf{L}}_{{A}_1}$ , N = R B 1 B 2 $ \mathbf{N}={\mathbf{R}}_{{B}_1}{\mathbf{B}}_2$ , T = R H A 2 $ \mathbf{T}={\mathbf{R}}_H{\mathbf{A}}_2$ , F = B 2 L N $ \mathbf{F}={\mathbf{B}}_2{\mathbf{L}}_N$ . Then the system (1) is consistent if and only if

A i A i C i B i B i = C i , i = 1,2 ; T [ A 2 X B 2 - A 1 C 1 B 1 ] F = 0 . $$ \begin{array}{ccc}{\mathbf{A}}_i{\mathbf{A}}_i^{\dagger }{\mathbf{C}}_i{\mathbf{B}}_i^{\dagger }{\mathbf{B}}_i={\mathbf{C}}_i,& i=\mathrm{1,2};& \mathbf{T}\left[{\mathbf{A}}_2^{\dagger }\mathbf{X}{\mathbf{B}}_2^{\dagger }-{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1{\mathbf{B}}_1^{\dagger }\right]\mathbf{F}=0.\end{array} $$

In that case, the general solution of (1) can be expressed as

X = A 1 C 1 B 1 + L A 1 H A 2 L T ( A 2 C 2 B 2 - A 1 C 1 B 1 ) B 2 B 2 + T T ( A 2 C 2 B 2 - A 1 C 1 B 1 ) B 2 N R B 1 + L A 1 ( Z - H HZ B 2 B 2 ) - L A 1 H A 2 L T WN B 2 + ( W - T TWN N ) R B 1 $$ \mathbf{X}={\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1{\mathbf{B}}_1^{\dagger }+{\mathbf{L}}_{{A}_1}{\mathbf{H}}^{\dagger }{\mathbf{A}}_2{\mathbf{L}}_T\left({\mathbf{A}}_2^{\dagger }{\mathbf{C}}_2{\mathbf{B}}_2^{\dagger }-{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1{\mathbf{B}}_1^{\dagger }\right){\mathbf{B}}_2{\mathbf{B}}_2^{\dagger }+{\mathbf{T}}^{\dagger }\mathbf{T}\left({\mathbf{A}}_2^{\dagger }{\mathbf{C}}_2{\mathbf{B}}_2^{\dagger }-{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1{\mathbf{B}}_1^{\dagger }\right){\mathbf{B}}_2{\mathbf{N}}^{\dagger }{\mathbf{R}}_{{B}_1}+{\mathbf{L}}_{{A}_1}\left(\mathbf{Z}-{\mathbf{H}}^{\dagger }\mathbf{HZ}{\mathbf{B}}_2{\mathbf{B}}_2^{\dagger }\right)-{\mathbf{L}}_{{A}_1}{\mathbf{H}}^{\dagger }{\mathbf{A}}_2{\mathbf{L}}_T\mathbf{WN}{\mathbf{B}}_2^{\dagger }+\left(\mathbf{W}-{\mathbf{T}}^{\dagger }\mathbf{TWN}{\mathbf{N}}^{\dagger }\right){\mathbf{R}}_{{B}_1} $$(10) where Z and W are arbitrary matrices over H $ \mathbb{H}$ with appropriate sizes.

Some simplifications of (10) can be derived by means of Lemma 2.2. Thus, we have

L A 1 H = L A 1 ( A 2 L A 1 ) = ( A 2 L A 1 ) = H , N R B 1 = ( R B 1 B 2 ) R B 1 = ( R B 1 B 2 ) = N , T T = ( R H A 2 ) R H A 2 = ( R H A 2 ) A 2 = T A 2 , L T = I - T T = I - T A 2 . $$ \begin{array}{c}{\mathbf{L}}_{{A}_1}{\mathbf{H}}^{\dagger }={\mathbf{L}}_{{A}_1}({\mathbf{A}}_2{\mathbf{L}}_{{A}_1}{)}^{\dagger }=({\mathbf{A}}_2{\mathbf{L}}_{{A}_1}{)}^{\dagger }={\mathbf{H}}^{\dagger },\\ {\mathbf{N}}^{\dagger }{\mathbf{R}}_{{B}_1}={\left({\mathbf{R}}_{{B}_1}{\mathbf{B}}_2\right)}^{\dagger }{\mathbf{R}}_{{B}_1}={\left({\mathbf{R}}_{{B}_1}{\mathbf{B}}_2\right)}^{\dagger }={\mathbf{N}}^{\dagger },\\ \begin{array}{c}{\mathbf{T}}^{\dagger }\mathbf{T}={\left({\mathbf{R}}_H{\mathbf{A}}_2\right)}^{\dagger }{\mathbf{R}}_H{\mathbf{A}}_2={\left({\mathbf{R}}_H{\mathbf{A}}_2\right)}^{\dagger }{\mathbf{A}}_2={\mathbf{T}}^{\dagger }{\mathbf{A}}_2,\\ {\mathbf{L}}_T=\mathbf{I}-{\mathbf{T}}^{\dagger }\mathbf{T}=\mathbf{I}-{\mathbf{T}}^{\dagger }{\mathbf{A}}_2.\end{array}\end{array} $$(11)

Substituting (11) into (10), we get

X = A 1 C 1 B 1 + H A 2 ( I - T A 2 ) ( A 2 C 2 B 2 - A 1 C 1 B 1 ) B 2 B 2 + T A 2 ( A 2 C 2 B 2 - A 1 C 1 B 1 )   B 2 N + L A 1 ( Z - H HZ B 2 B 2 ) - H A 2 L T WN B 2 + ( W - T TWN N ) R B 1 = A 1 C 1 B 1 + H C 2 B 2 + H ( A 2 T - I ) A 2 A 1 C 1 B 1 P B 2 - H A 2 T C 2 B 2 + T C 2 N - T A 2 A 1 C 1 B 1 B 2 N + L A 1 ( Z - H HZ B 2 B 2 ) - H A 2 L T WN B 2 + ( W - T TWN N ) R B 1 . $$ \begin{array}{c}\mathbf{X}={\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1{\mathbf{B}}_1^{\dagger }+{\mathbf{H}}^{\dagger }{\mathbf{A}}_2\left(\mathbf{I}-{\mathbf{T}}^{\dagger }{\mathbf{A}}_2\right)\left({\mathbf{A}}_2^{\dagger }{\mathbf{C}}_2{\mathbf{B}}_2^{\dagger }-{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1{\mathbf{B}}_1^{\dagger }\right){\mathbf{B}}_2{\mathbf{B}}_2^{\dagger }+\\ {\mathbf{T}}^{\dagger }{\mathbf{A}}_2\left({\mathbf{A}}_2^{\dagger }{\mathbf{C}}_2{\mathbf{B}}_2^{\dagger }-{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1{\mathbf{B}}_1^{\dagger }\right)\enspace {\mathbf{B}}_2{\mathbf{N}}^{\dagger }+{\mathbf{L}}_{{A}_1}\left(\mathbf{Z}-{\mathbf{H}}^{\dagger }\mathbf{HZ}{\mathbf{B}}_2{\mathbf{B}}_2^{\dagger }\right)-\\ \begin{array}{c}{\mathbf{H}}^{\dagger }{\mathbf{A}}_2{\mathbf{L}}_T\mathbf{WN}{\mathbf{B}}_2^{\dagger }+\left(\mathbf{W}-{\mathbf{T}}^{\dagger }\mathbf{TWN}{\mathbf{N}}^{\dagger }\right){\mathbf{R}}_{{B}_1}=\\ {\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1{\mathbf{B}}_1^{\dagger }+{\mathbf{H}}^{\dagger }{\mathbf{C}}_2{\mathbf{B}}_2^{\dagger }+{\mathbf{H}}^{\dagger }\left({\mathbf{A}}_2{\mathbf{T}}^{\dagger }-\mathbf{I}\right){\mathbf{A}}_2{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1{\mathbf{B}}_1^{\dagger }{\mathbf{P}}_{{B}_2}-\\ \begin{array}{c}{\mathbf{H}}^{\dagger }{\mathbf{A}}_2{\mathbf{T}}^{\dagger }{\mathbf{C}}_2{\mathbf{B}}_2^{\dagger }+{\mathbf{T}}^{\dagger }{\mathbf{C}}_2{\mathbf{N}}^{\dagger }-{\mathbf{T}}^{\dagger }{\mathbf{A}}_2{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1{\mathbf{B}}_1^{\dagger }{\mathbf{B}}_2{\mathbf{N}}^{\dagger }+\\ 
{\mathbf{L}}_{{A}_1}\left(\mathbf{Z}-{\mathbf{H}}^{\dagger }\mathbf{HZ}{\mathbf{B}}_2{\mathbf{B}}_2^{\dagger }\right)-{\mathbf{H}}^{\dagger }{\mathbf{A}}_2{\mathbf{L}}_T\mathbf{WN}{\mathbf{B}}_2^{\dagger }+\left(\mathbf{W}-{\mathbf{T}}^{\dagger }\mathbf{TWN}{\mathbf{N}}^{\dagger }\right){\mathbf{R}}_{{B}_1}.\end{array}\end{array}\end{array} $$

Setting Z = W = 0, we get the following expression for the partial solution

X 0 = A 1 C 1 B 1 + H C 2 B 2 + T C 2 N + H A 2 T A 2 A 1 C 1 B 1 P B 2 - H A 2 A 1 C 1 B 1 P B 2 - H A 2 T C 2 B 2 - T A 2 A 1 C 1 B 1 B 2 N . $$ {\mathbf{X}}_0={\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1{\mathbf{B}}_1^{\dagger }+{\mathbf{H}}^{\dagger }{\mathbf{C}}_2{\mathbf{B}}_2^{\dagger }+{\mathbf{T}}^{\dagger }{\mathbf{C}}_2{\mathbf{N}}^{\dagger }+{\mathbf{H}}^{\dagger }{\mathbf{A}}_2{\mathbf{T}}^{\dagger }{\mathbf{A}}_2{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1{\mathbf{B}}_1^{\dagger }{\mathbf{P}}_{{B}_2}-{\mathbf{H}}^{\dagger }{\mathbf{A}}_2{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1{\mathbf{B}}_1^{\dagger }{\mathbf{P}}_{{B}_2}-{\mathbf{H}}^{\dagger }{\mathbf{A}}_2{\mathbf{T}}^{\dagger }{\mathbf{C}}_2{\mathbf{B}}_2^{\dagger }-{\mathbf{T}}^{\dagger }{\mathbf{A}}_2{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1{\mathbf{B}}_1^{\dagger }{\mathbf{B}}_2{\mathbf{N}}^{\dagger }. $$(12)
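To see the mechanics of (12) in the simplest setting, the following pure-Python sketch evaluates every term of (12) for 1 × 1 quaternion matrices (scalars), where h† = h̄/|h|² for h ≠ 0 and 0† = 0. With A₁ and B₁ invertible the auxiliary quantities H and N vanish, so this exercises only the degenerate scalar case of the formula, not the general matrix one; the sample quaternions are arbitrary, and the planted solution x must be recovered.

```python
from functools import reduce

# Quaternions as 4-tuples (h0, h1, h2, h3) = h0 + h1*i + h2*j + h3*k.
def qmul(a, b):
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def qconj(a): return (a[0], -a[1], -a[2], -a[3])

def qdag(h):  # Moore-Penrose inverse of a 1x1 quaternion matrix
    n = sum(x * x for x in h)
    return (0.0, 0.0, 0.0, 0.0) if n == 0 else tuple(c / n for c in qconj(h))

def mul(*qs): return reduce(qmul, qs)
def qsub(a, b): return tuple(x - y for x, y in zip(a, b))
def qneg(a): return tuple(-x for x in a)

ONE = (1.0, 0.0, 0.0, 0.0)
x = (0.5, -1.0, 2.0, 1.0)                          # the solution we plant
A1, B1 = (1.0, 1.0, 0.0, 0.0), (2.0, 0.0, -1.0, 0.0)
A2, B2 = (0.0, 2.0, 1.0, -1.0), (1.0, -1.0, 1.0, 0.0)
C1, C2 = mul(A1, x, B1), mul(A2, x, B2)            # consistent by construction

L_A1 = qsub(ONE, qmul(qdag(A1), A1))               # = 0 since A1 is invertible
R_B1 = qsub(ONE, qmul(B1, qdag(B1)))               # = 0
H, N = qmul(A2, L_A1), qmul(R_B1, B2)
T = qmul(qsub(ONE, qmul(H, qdag(H))), A2)          # T = R_H A2
P_B2 = qmul(B2, qdag(B2))

# The seven terms of (12), written out literally.
terms = [
    mul(qdag(A1), C1, qdag(B1)),
    mul(qdag(H), C2, qdag(B2)),
    mul(qdag(T), C2, qdag(N)),
    mul(qdag(H), A2, qdag(T), A2, qdag(A1), C1, qdag(B1), P_B2),
    qneg(mul(qdag(H), A2, qdag(A1), C1, qdag(B1), P_B2)),
    qneg(mul(qdag(H), A2, qdag(T), C2, qdag(B2))),
    qneg(mul(qdag(T), A2, qdag(A1), C1, qdag(B1), B2, qdag(N))),
]
X0 = tuple(sum(t[c] for t in terms) for c in range(4))

ok1 = max(abs(u - v) for u, v in zip(mul(A1, X0, B1), C1)) < 1e-9
ok2 = max(abs(u - v) for u, v in zip(mul(A2, X0, B2), C2)) < 1e-9
print(ok1, ok2)
```

In this degenerate case X₀ reduces to A₁⁻¹C₁B₁⁻¹ = x, and both equations of the system are satisfied.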

Now consider the system (2). We have

Q A i η * = ( A i η * ) A i η * = ( A i A i ) η * = P A i η , $$ {\mathbf{Q}}_{{A}_i^{\eta \mathrm{*}}}={\left({\mathbf{A}}_i^{\eta \mathrm{*}}\right)}^{\dagger }{\mathbf{A}}_i^{\eta \mathrm{*}}={\left({\mathbf{A}}_i{\mathbf{A}}_i^{\dagger }\right)}^{\eta \mathrm{*}}={\mathbf{P}}_{{A}_i}^{\eta }, $$similarly, P A i η * = Q A i η $ {\mathbf{P}}_{{A}_i^{\eta \mathrm{*}}}={\mathbf{Q}}_{{A}_i}^{\eta }$, and, by Lemma 2.1, L A i η * = R A i η $ {\mathbf{L}}_{{A}_i^{\eta \mathrm{*}}}={\mathbf{R}}_{{A}_i}^{\eta }$, and R A i η * = L A i η $ {\mathbf{R}}_{{A}_i^{\eta \mathrm{*}}}={\mathbf{L}}_{{A}_i}^{\eta }$ for i = 1, 2. Moreover, by substituting B i = A i η * $ {\mathbf{B}}_i={\mathbf{A}}_i^{\eta \mathrm{*}}$, we obtain

N = R A 1 η * A 2 η * = ( A 2 L A 1 ) η * = H η * , F = A 2 η * L H η * = ( R H A 2 ) η * = T η * . $$ \begin{array}{c}\mathbf{N}={\mathbf{R}}_{{A}_1^{\eta \mathrm{*}}}{\mathbf{A}}_2^{\eta \mathrm{*}}={\left({\mathbf{A}}_2{\mathbf{L}}_{{A}_1}\right)}^{\eta \mathrm{*}}={\mathbf{H}}^{\eta \mathrm{*}},\\ \mathbf{F}={\mathbf{A}}_2^{\eta \mathrm{*}}{\mathbf{L}}_{{H}^{\eta \mathrm{*}}}={\left({\mathbf{R}}_H{\mathbf{A}}_2\right)}^{\eta \mathrm{*}}={\mathbf{T}}^{\eta \mathrm{*}}.\end{array} $$
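The manipulations above implicitly use that the Moore–Penrose inverse commutes with η-conjugation, (A^{η*})† = (A†)^{η*}. The following scalar spot-check is an illustration only; we take h^{η*} = η h̄ η with η = j (sign conventions for the η-conjugate differ in the literature, but the identity is insensitive to that sign).

```python
# Quaternions as 4-tuples (h0, h1, h2, h3) = h0 + h1*i + h2*j + h3*k.
def qmul(a, b):
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def qconj(a): return (a[0], -a[1], -a[2], -a[3])

def qdag(h):  # Moore-Penrose inverse of a 1x1 quaternion matrix
    n = sum(x * x for x in h)
    return (0.0, 0.0, 0.0, 0.0) if n == 0 else tuple(c / n for c in qconj(h))

ETA = (0.0, 0.0, 1.0, 0.0)            # eta = j; eta = i or k works the same way
def eta_star(h):                      # h^{eta*} = eta * conj(h) * eta (convention-dependent sign)
    return qmul(qmul(ETA, qconj(h)), ETA)

h = (1.5, -2.0, 0.5, 3.0)             # arbitrary nonzero quaternion
lhs = qdag(eta_star(h))               # (h^{eta*})†
rhs = eta_star(qdag(h))               # (h†)^{eta*}
ok = max(abs(u - v) for u, v in zip(lhs, rhs)) < 1e-12
print(ok)
```

Algebraically, both sides equal η h η / |h|², since (η h̄ η)⁻¹ = η h̄⁻¹ η and h̄⁻¹ = h/|h|².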

From the above, the following analog of Lemma 3.1 follows.

Lemma 3.2.

Suppose that A 1 H m × n $ {\mathbf{A}}_1\in {\mathbb{H}}^{m\times n}$ , A 2 H k × n $ {\mathbf{A}}_2\in {\mathbb{H}}^{k\times n}$ , C 1 H m × m $ {\mathbf{C}}_1\in {\mathbb{H}}^{m\times m}$ , C 2 H k × k $ {\mathbf{C}}_2\in {\mathbb{H}}^{k\times k}$ are known and X H n × n $ \mathbf{X}\in {\mathbb{H}}^{n\times n}$ is unknown. The system (2) is consistent if and only if

P A i C i P A i η = C i , i = 1 ,   2 ; $$ \begin{array}{cc}{\mathbf{P}}_{{A}_i}{\mathbf{C}}_i{\mathbf{P}}_{{A}_i}^{\eta }={\mathbf{C}}_i,& i=1,\enspace 2;\end{array} $$(13)

T [ A 2 C 2 ( A 2 η * ) - A 1 C 1 ( A 1 η * ) ] T η * = 0 . $$ \mathbf{T}\left[{\mathbf{A}}_2^{\dagger }{\mathbf{C}}_2({\mathbf{A}}_2^{\eta \mathrm{*}}{)}^{\dagger }-{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1({\mathbf{A}}_1^{\eta \mathrm{*}}{)}^{\dagger }\right]{\mathbf{T}}^{\eta \mathrm{*}}=0. $$(14)

In that case, the general solution to (2) is expressed as

X = A 1 C 1 ( A 1 η * ) + H C 2 ( A 2 η * ) + H ( A 2 T - I ) A 2 A 1 C 1 ( A 1 η * ) Q A 2 - H A 2 T C 2 ( A 2 η * ) + T C 2 ( H η * ) - T A 2 A 1 C 1 ( A 1 η * ) A 2 η * ( H η * ) + L A 1 ( Z - H HZ A 2 η * ( A 2 η * ) ) - H A 2 L T W H η * ( A 2 η * ) + ( W - T TW H η * ( H η * ) ) L A 1 . $$ \mathbf{X}={\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1{\left({\mathbf{A}}_1^{\eta \mathrm{*}}\right)}^{\dagger }+{\mathbf{H}}^{\dagger }{\mathbf{C}}_2({\mathbf{A}}_2^{\eta \mathrm{*}}{)}^{\dagger }+{\mathbf{H}}^{\dagger }\left({\mathbf{A}}_2{\mathbf{T}}^{\dagger }-\mathbf{I}\right){\mathbf{A}}_2{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1({\mathbf{A}}_1^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{Q}}_{{A}_2}-{\mathbf{H}}^{\dagger }{\mathbf{A}}_2{\mathbf{T}}^{\dagger }{\mathbf{C}}_2({\mathbf{A}}_2^{\eta \mathrm{*}}{)}^{\dagger }+{\mathbf{T}}^{\dagger }{\mathbf{C}}_2({\mathbf{H}}^{\eta \mathrm{*}}{)}^{\dagger }-{\mathbf{T}}^{\dagger }{\mathbf{A}}_2{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1({\mathbf{A}}_1^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{A}}_2^{\eta \mathrm{*}}({\mathbf{H}}^{\eta \mathrm{*}}{)}^{\dagger }+{\mathbf{L}}_{{A}_1}\left(\mathbf{Z}-{\mathbf{H}}^{\dagger }\mathbf{HZ}{\mathbf{A}}_2^{\eta \mathrm{*}}({\mathbf{A}}_2^{\eta \mathrm{*}}{)}^{\dagger }\right)-{\mathbf{H}}^{\dagger }{\mathbf{A}}_2{\mathbf{L}}_T\mathbf{W}{\mathbf{H}}^{\eta \mathrm{*}}({\mathbf{A}}_2^{\eta \mathrm{*}}{)}^{\dagger }+\left(\mathbf{W}-{\mathbf{T}}^{\dagger }\mathbf{TW}{\mathbf{H}}^{\eta \mathrm{*}}({\mathbf{H}}^{\eta \mathrm{*}}{)}^{\dagger }\right){\mathbf{L}}_{{A}_1}. $$ where Z and W are arbitrary matrices over H $ \mathbb{H}$ with appropriate sizes.

Taking Z and W to be zero matrices, the partial solution to (2) is

X = A 1 C 1 ( A 1 η * ) + H C 2 ( A 2 η * ) + T C 2 ( H η * ) + H A 2 T A 2 A 1 C 1 ( A 1 η * ) Q A 2 - H A 2 A 1 C 1 ( A 1 η * ) Q A 2 - H A 2 T C 2 ( A 2 η * ) - T A 2 A 1 C 1 ( A 1 η * ) A 2 η * ( H η * ) . $$ \mathbf{X}={\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1{\left({\mathbf{A}}_1^{\eta \mathrm{*}}\right)}^{\dagger }+{\mathbf{H}}^{\dagger }{\mathbf{C}}_2({\mathbf{A}}_2^{\eta \mathrm{*}}{)}^{\dagger }+{\mathbf{T}}^{\dagger }{\mathbf{C}}_2({\mathbf{H}}^{\eta \mathrm{*}}{)}^{\dagger }+{\mathbf{H}}^{\dagger }{\mathbf{A}}_2{\mathbf{T}}^{\dagger }{\mathbf{A}}_2{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1({\mathbf{A}}_1^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{Q}}_{{A}_2}-{\mathbf{H}}^{\dagger }{\mathbf{A}}_2{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1({\mathbf{A}}_1^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{Q}}_{{A}_2}-{\mathbf{H}}^{\dagger }{\mathbf{A}}_2{\mathbf{T}}^{\dagger }{\mathbf{C}}_2({\mathbf{A}}_2^{\eta \mathrm{*}}{)}^{\dagger }-{\mathbf{T}}^{\dagger }{\mathbf{A}}_2{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1({\mathbf{A}}_1^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{A}}_2^{\eta \mathrm{*}}({\mathbf{H}}^{\eta \mathrm{*}}{)}^{\dagger }. $$(15)

Further, we give determinantal representations of (15).

Suppose that A 1 H r 1 m × n $ {\mathbf{A}}_1\in {\mathbb{H}}_{{r}_1}^{m\times n}$, A 2 H r 2 k × n $ {\mathbf{A}}_2\in {\mathbb{H}}_{{r}_2}^{k\times n}$, C 1 = ( c ij ( 1 ) ) H m × m $ {\mathbf{C}}_1=\left({c}_{{ij}}^{(1)}\right)\in {\mathbb{H}}^{m\times m}$, C 2 = ( c ij ( 2 ) ) H k × k $ {\mathbf{C}}_2=\left({c}_{{ij}}^{(2)}\right)\in {\mathbb{H}}^{k\times k}$, rank H = r 3 $ \mathrm{rank}\mathbf{H}={r}_3$, and rank T = r 4 $ \mathrm{rank}\mathbf{T}={r}_4$. So, A 1 = ( a ij ( 1 ) , ) H n × m $ {\mathbf{A}}_1^{\dagger }=\left({a}_{{ij}}^{(1),\dagger }\right)\in {\mathbb{H}}^{n\times m}$, ( A 1 η * ) = ( a ij ( 1 ) , η * , ) H m × n $ ({\mathbf{A}}_1^{\eta \mathrm{*}}{)}^{\dagger }=\left({a}_{{ij}}^{(1),\eta \mathrm{*},\dagger }\right)\in {\mathbb{H}}^{m\times n}$, A 2 = ( a ij ( 2 ) , ) H k × n $ {\mathbf{A}}_2^{\dagger }=\left({a}_{{ij}}^{(2),\dagger }\right)\in {\mathbb{H}}^{k\times n}$, ( A 2 η * ) = ( a ij ( 2 ) , η * , ) H n × k $ ({\mathbf{A}}_2^{\eta \mathrm{*}}{)}^{\dagger }=\left({a}_{{ij}}^{(2),\eta \mathrm{*},\dagger }\right)\in {\mathbb{H}}^{n\times k}$, H = ( h ij ) H n × k $ {\mathbf{H}}^{\dagger }=\left({h}_{{ij}}^{\dagger }\right)\in {\mathbb{H}}^{n\times k}$, and T = ( t ij ) H n × k $ {\mathbf{T}}^{\dagger }=\left({t}_{{ij}}^{\dagger }\right)\in {\mathbb{H}}^{n\times k}$.

Consider each summand of (15) separately.

  (i) Denote C 11 A 1 * C 1 A 1 η $ {\mathbf{C}}_{11}:={\mathbf{A}}_1^{\mathrm{*}}{\mathbf{C}}_1{\mathbf{A}}_1^{\eta }$. For the first term of (15) X 1 = A 1 C 1 ( A 1 η * ) = ( x ij ( 1 ) ) $ {\mathbf{X}}_1={\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1{\left({\mathbf{A}}_1^{\eta \mathrm{*}}\right)}^{\dagger }=({x}_{{ij}}^{(1)})$, we have

x ij ( 1 ) = l = 1 m t = 1 m a il ( 1 ) , c lt ( 1 ) a tj ( 1 ) , η * , . $$ {x}_{{ij}}^{(1)}=\sum_{l=1}^m \sum_{t=1}^m {a}_{{il}}^{(1),\dagger }{c}_{{lt}}^{(1)}{a}_{{tj}}^{(1),\eta \mathrm{*},\dagger }. $$

Taking into account (4) and (6) for A 1 $ {\mathbf{A}}_1^{\dagger }$ and ( A 1 η * ) $ {\left({\mathbf{A}}_1^{\eta \mathrm{*}}\right)}^{\dagger }$, respectively, we get

x ij ( 1 ) = l = 1 m t = 1 m β J r 1 , n { i } cde t i ( ( A 1 * A 1 ) .   i ( a . l ( 1 ) , * ) ) β β c lt ( 1 ) ( - η α I r 1 , n { j } rde t j ( ( A 1 * A 1 ) j . ( a t . ( 1 ) ) ) α α η ) α I r 1 , n | A 1 * A 1 | α α β J r 1 , n | A 1 * A 1 | β β . $$ {x}_{{ij}}^{(1)}=\frac{\sum_{l=1}^m \sum_{t=1}^m \sum_{\beta \in {J}_{{r}_1,n}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left({\left({\mathbf{A}}_1^{\mathrm{*}}{\mathbf{A}}_1\right)}_{.\enspace i}\left({\mathbf{a}}_{.l}^{(1),\mathrm{*}}\right)\right)}_{\beta }^{\beta }{c}_{{lt}}^{(1)}\left(-\eta \sum_{\alpha \in {I}_{{r}_1,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left(({\mathbf{A}}_1^{\mathrm{*}}{\mathbf{A}}_1{)}_{j.}({\mathbf{a}}_{t.}^{(1)})\right)}_{\alpha }^{\alpha }\eta \right)}{\sum_{\alpha \in {I}_{{r}_1,n}} {\left|{\mathbf{A}}_1^{\mathrm{*}}{\mathbf{A}}_1\right|}_{\alpha }^{\alpha }\sum_{\beta \in {J}_{{r}_1,n}} {\left|{\mathbf{A}}_1^{\mathrm{*}}{\mathbf{A}}_1\right|}_{\beta }^{\beta }}. $$

Suppose that e l . $ {e}_{l.}$ and e . l $ {e}_{.l}$ are the unit row and column vectors whose components are all 0, except for the lth component, which is 1.

Since l = 1 m t = 1 m a fl ( 1 ) , * c lt ( 1 ) a ts ( 1 ) , η = c fs ( 11 ) , $ \sum_{l=1}^m \sum_{t=1}^m {a}_{{fl}}^{(1),\mathrm{*}}{c}_{{lt}}^{(1)}{a}_{{ts}}^{(1),\eta }={c}_{{fs}}^{(11)},$ it follows that

x ij ( 1 ) = f = 1 n s = 1 n β J r 1 , n { i } cde t i ( ( A 1 * A 1 ) .   i ( e . f ) ) β β   c fs ( 11 ) ( - η α I r 1 , n { j } rde t j ( ( A 1 * A 1 ) j . ( e s . ) ) α α η ) α I r 1 , n | A 1 * A 1 | α α β J r 1 , n | A 1 * A 1 | β β . $$ {x}_{{ij}}^{(1)}=\frac{\sum_{f=1}^n \sum_{s=1}^n \sum_{\beta \in {J}_{{r}_1,n}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left({\left({\mathbf{A}}_1^{\mathrm{*}}{\mathbf{A}}_1\right)}_{.\enspace i}\left({\mathbf{e}}_{.f}\right)\right)}_{\beta }^{\beta }\enspace {c}_{{fs}}^{(11)}\left(-\eta \sum_{\alpha \in {I}_{{r}_1,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left(({\mathbf{A}}_1^{\mathrm{*}}{\mathbf{A}}_1{)}_{j.}({\mathbf{e}}_{s.})\right)}_{\alpha }^{\alpha }\eta \right)}{\sum_{\alpha \in {I}_{{r}_1,n}} {\left|{\mathbf{A}}_1^{\mathrm{*}}{\mathbf{A}}_1\right|}_{\alpha }^{\alpha }\sum_{\beta \in {J}_{{r}_1,n}} {\left|{\mathbf{A}}_1^{\mathrm{*}}{\mathbf{A}}_1\right|}_{\beta }^{\beta }}. $$(16)

By

v is ( 1 ) f = 1 n β J r 1 , n { i } cde t i ( ( A 1 * A 1 ) . i ( e . f ) ) β β c fs ( 11 ) = β J r 1 , n { i } cde t i ( ( A 1 * A 1 ) . i ( c . s ( 11 ) ) ) β β , $$ {v}_{{is}}^{(1)}:=\sum_{f=1}^n \sum_{\beta \in {J}_{{r}_1,n}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left({\left({\mathbf{A}}_1^{\mathrm{*}}{\mathbf{A}}_1\right)}_{.i}\left({\mathbf{e}}_{.f}\right)\right)}_{\beta }^{\beta }{c}_{{fs}}^{(11)}=\sum_{\beta \in {J}_{{r}_1,n}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left({\left({\mathbf{A}}_1^{\mathrm{*}}{\mathbf{A}}_1\right)}_{.i}\left({\mathbf{c}}_{.s}^{(11)}\right)\right)}_{\beta }^{\beta }, $$(17)denote the sth component of a row-vector v i . ( 1 ) = [ v i 1 ( 1 ) , , v in ( 1 ) ] $ {v}_{i.}^{(1)}=\left[{v}_{i1}^{(1)},\dots,{v}_{{in}}^{(1)}\right]$. Then

s = 1 n v is ( 1 ) ( - η α I r 1 , n { j } rde t j ( ( A 1 * A 1 ) j . ( e s . ) ) α α η ) = - η α I r 1 , n { j } rde t j ( ( A 1 * A 1 ) j . ( v i . ( 1 ) , η ) ) α α η . $$ \sum_{s=1}^n {v}_{{is}}^{(1)}\left(-\eta \sum_{\alpha \in {I}_{{r}_1,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left(({\mathbf{A}}_1^{\mathrm{*}}{\mathbf{A}}_1{)}_{j.}({\mathbf{e}}_{s.})\right)}_{\alpha }^{\alpha }\eta \right)=-\eta \sum_{\alpha \in {I}_{{r}_1,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left(({\mathbf{A}}_1^{\mathrm{*}}{\mathbf{A}}_1{)}_{j.}({\mathbf{v}}_{i.}^{(1),\eta })\right)}_{\alpha }^{\alpha }\eta. $$(18)

Further, it is evident that β J r 1 , n | A 1 * A 1 | β β = α I r 1 , n | A 1 * A 1 | α α $ \sum_{\beta \in {J}_{{r}_1,n}} {\left|{\mathbf{A}}_1^{\mathrm{*}}{\mathbf{A}}_1\right|}_{\beta }^{\beta }=\sum_{\alpha \in {I}_{{r}_1,n}} {\left|{\mathbf{A}}_1^{\mathrm{*}}{\mathbf{A}}_1\right|}_{\alpha }^{\alpha }$. Substituting (17) and (18) into (16), the determinantal representation of the first term of (15) can be expressed as

x ij ( 1 ) = - η α I r 1 , n { j } rde t j ( ( A 1 * A 1 ) j . ( v i . ( 1 ) , η ) ) α α η ( α I r 1 , n | A 1 * A 1 | α α ) 2 $$ {x}_{{ij}}^{(1)}=\frac{-\eta \sum_{\alpha \in {I}_{{r}_1,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left({\left({\mathbf{A}}_1^{{*}}{\mathbf{A}}_1\right)}_{j.}\left({\mathbf{v}}_{i.}^{(1),\eta }\right)\right)}_{\alpha }^{\alpha }\eta }{{\left(\sum_{\alpha \in {I}_{{r}_1,n}} {\left|{\mathbf{A}}_1^{\mathrm{*}}{\mathbf{A}}_1\right|}_{\alpha }^{\alpha }\right)}^2} $$(19)where

v i . ( 1 ) , η = [ - η β J r 1 , n { i } cde t i ( ( A 1 * A 1 ) . i ( c . s ( 11 ) ) ) β β η ] H 1 × n , s = 1 , , n .   $$ \begin{array}{cc}{\mathbf{v}}_{i.}^{(1),\eta }=\left[-\eta \sum_{\beta \in {J}_{{r}_1,n}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left({\left({\mathbf{A}}_1^{\mathrm{*}}{\mathbf{A}}_1\right)}_{.i}\left({\mathbf{c}}_{.s}^{(11)}\right)\right)}_{\beta }^{\beta }\eta \right]\in {\mathbb{H}}^{1\times n},& s=1,\dots,n.\end{array}\enspace $$(20)

If we denote by

v fj ( 2 ) s = 1 n c fs ( 11 ) ( - η α I r 1 , n { j } rde t j ( ( A 1 * A 1 ) j . ( e s . ) ) α α η ) = - η α I r 1 , n { j } rde t j ( ( A 1 * A 1 ) j . ( c f . ( 11 ) , η ) ) α α η , $$ {v}_{{fj}}^{(2)}:=\sum_{s=1}^n {c}_{{fs}}^{(11)}\left(-\eta \sum_{\alpha \in {I}_{{r}_1,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left(({\mathbf{A}}_1^{\mathrm{*}}{\mathbf{A}}_1{)}_{j.}({\mathbf{e}}_{s.})\right)}_{\alpha }^{\alpha }\eta \right)=-\eta \sum_{\alpha \in {I}_{{r}_1,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left(({\mathbf{A}}_1^{\mathrm{*}}{\mathbf{A}}_1{)}_{j.}({\mathbf{c}}_{f.}^{(11),\eta })\right)}_{\alpha }^{\alpha }\eta, $$(21)the fth component of a column-vector v . j ( 2 ) = [ v 1 j ( 2 ) , , v nj ( 2 ) ] $ {\mathbf{v}}_{.j}^{(2)}=\left[{v}_{1j}^{(2)},\dots,{v}_{{nj}}^{(2)}\right]$, then

f = 1 n β J r 1 , n { i } cde t i ( ( A 1 * A 1 ) .   i ( e . f ) ) β β   v fj ( 2 ) = β J r 1 , n { i } cde t i ( ( A 1 * A 1 ) .   i ( v . j ( 2 ) ) ) β β . $$ \sum_{f=1}^n \sum_{\beta \in {J}_{{r}_1,n}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left({\left({\mathbf{A}}_1^{\mathrm{*}}{\mathbf{A}}_1\right)}_{.\enspace i}\left({\mathbf{e}}_{.f}\right)\right)}_{\beta }^{\beta }\enspace {\mathbf{v}}_{{fj}}^{(2)}=\sum_{\beta \in {J}_{{r}_1,n}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left({\left({\mathbf{A}}_1^{\mathrm{*}}{\mathbf{A}}_1\right)}_{.\enspace i}\left({\mathbf{v}}_{.j}^{(2)}\right)\right)}_{\beta }^{\beta }. $$(22)

Substituting (21) and (22) into (16), we obtain another determinantal representation of the first term

x ij ( 1 ) = β J r 1 , n { i } cde t i ( ( A 1 * A 1 ) . i ( v . j ( 2 ) ) ) β β ( β J r 1 , n | A 1 * A 1 | β β ) 2 , $$ {x}_{{ij}}^{(1)}=\frac{\sum_{\beta \in {J}_{{r}_1,n}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left({\left({\mathbf{A}}_1^{\mathrm{*}}{\mathbf{A}}_1\right)}_{.i}\left({\mathbf{v}}_{.j}^{(2)}\right)\right)}_{\beta }^{\beta }}{{\left(\sum_{\beta \in {J}_{{r}_1,n}} {\left|{\mathbf{A}}_1^{\mathrm{*}}{\mathbf{A}}_1\right|}_{\beta }^{\beta }\right)}^2}, $$(23)where

v . j ( 2 ) = [ - η α I r 1 , n { j } rde t j ( ( A 1 * A 1 ) j . ( c f . ( 11 ) , η ) ) α α η ] H n × 1 ,   f = 1 , , n , $$ \begin{array}{cc}{\mathbf{v}}_{.j}^{(2)}=\left[-\eta \sum_{\alpha \in {I}_{{r}_1,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left(({\mathbf{A}}_1^{\mathrm{*}}{\mathbf{A}}_1{)}_{j.}({\mathbf{c}}_{f.}^{(11),\eta })\right)}_{\alpha }^{\alpha }\eta \right]\in {\mathbb{H}}^{n\times 1},\enspace & f=1,\dots,n,\end{array} $$is the column vector and c f . ( 11 ) , η $ {\mathbf{c}}_{f.}^{(11),\eta }$ is the fth row of C 11 η = A 1 η * C 1 η A 1 $ {\mathbf{C}}_{11}^{\eta }={\mathbf{A}}_1^{\eta \mathrm{*}}{\mathbf{C}}_1^{\eta }{\mathbf{A}}_1$.
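The collapse of the sums in (17) and (22) rests on the linearity of the column determinant in its distinguished column. Over the quaternions this is a property of cdet; over the reals, where cdet reduces to the ordinary determinant, the same mechanism can be checked directly. The sketch below is a commutative toy illustration only, with an arbitrary 3 × 3 real matrix and replacement column.

```python
from itertools import permutations

def det(M):
    # Leibniz formula for a small real matrix.
    n = len(M)
    total = 0.0
    for p in permutations(range(n)):
        sgn = 1
        for a in range(n):
            for b in range(a + 1, n):
                if p[a] > p[b]:
                    sgn = -sgn
        prod = 1.0
        for r in range(n):
            prod *= M[r][p[r]]
        total += sgn * prod
    return total

def replace_col(M, i, col):
    # Matrix M with its ith column replaced by the vector col.
    return [row[:i] + [col[r]] + row[i + 1:] for r, row in enumerate(M)]

A = [[2.0, -1.0, 0.0], [1.0, 3.0, 2.0], [0.0, 1.0, -2.0]]
c = [4.0, -1.0, 5.0]
i = 1

# Sum over unit columns e_f, weighted by c_f, equals a single determinant
# with the full column c in position i -- the commutative analog of (17)/(22).
lhs = sum(c[f] * det(replace_col(A, i, [1.0 if r == f else 0.0 for r in range(3)]))
          for f in range(3))
rhs = det(replace_col(A, i, c))
ok = abs(lhs - rhs) < 1e-12
print(ok)
```

In the noncommutative case the weights multiply on the side prescribed by cdet/rdet, which is exactly why the row and column representations (19) and (23) are stated separately.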

  (ii) Similarly to the above, for the second term X 2 = H C 2 ( A 2 η * ) = ( x ij ( 2 ) ) $ {\mathbf{X}}_2={\mathbf{H}}^{\dagger }{\mathbf{C}}_2({\mathbf{A}}_2^{\eta \mathrm{*}}{)}^{\dagger }=\left({x}_{{ij}}^{(2)}\right)$ of (15), we have

x ij ( 2 ) = β J r 3 , n { i } cde t i ( ( H * H ) . i ( d . j A 2 ) ) β β β J r 3 , n | H * H | β β α I r 2 , n | A 2 * A 2 | α α , $$ {x}_{{ij}}^{(2)}=\frac{\sum_{\beta \in {J}_{{r}_3,n}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left({\left({\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right)}_{.i}\left({\mathbf{d}}_{.j}^{{A}_2}\right)\right)}_{\beta }^{\beta }}{\sum_{\beta \in {J}_{{r}_3,n}} {\left|{\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right|}_{\beta }^{\beta }\sum_{\alpha \in {I}_{{r}_2,n}} {\left|{\mathbf{A}}_2^{\mathrm{*}}{\mathbf{A}}_2\right|}_{\alpha }^{\alpha }}, $$(24)or

x ij ( 2 ) = - η α I r 2 , n { j } rde t j ( ( A 2 * A 2 ) j . ( d i . H ) ) α α η β J r 3 , n | H * H | β β α I r 2 , n | A 2 * A 2 | α α , $$ {x}_{{ij}}^{(2)}=\frac{-\eta \sum_{\alpha \in {I}_{{r}_2,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left({\left({\mathbf{A}}_2^{\mathrm{*}}{\mathbf{A}}_2\right)}_{j.}\left({\mathbf{d}}_{i.}^H\right)\right)}_{\alpha }^{\alpha }\eta }{\sum_{\beta \in {J}_{{r}_3,n}} {\left|{\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right|}_{\beta }^{\beta }\sum_{\alpha \in {I}_{{r}_2,n}} {\left|{\mathbf{A}}_2^{\mathrm{*}}{\mathbf{A}}_2\right|}_{\alpha }^{\alpha }}, $$(25)where

d . j A 2 = [ - η α I r 2 , n { j } rde t j ( ( A 2 * A 2 ) j . ( c q . ( 21 ) , η ) ) α α η ] H n × 1 ,   q = 1 , , n , d i . H = [ - η β J r 3 , n { i } cde t i ( ( H * H ) . i ( c . l ( 21 ) ) ) β β η ] H 1 × n , l = 1 , , n . $$ \begin{array}{c}\begin{array}{cc}{\mathbf{d}}_{.j}^{{A}_2}=\left[-\eta \sum_{\alpha \in {I}_{{r}_2,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left({\left({\mathbf{A}}_2^{\mathrm{*}}{\mathbf{A}}_2\right)}_{j.}\left({\mathbf{c}}_{q.}^{(21),\eta }\right)\right)}_{\alpha }^{\alpha }\eta \right]\in {\mathbb{H}}^{n\times 1},\enspace & q=1,\dots,n,\end{array}\\ \begin{array}{cc}{\mathbf{d}}_{i.}^H=\left[-\eta \sum_{\beta \in {J}_{{r}_3,n}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left({\left({\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right)}_{.i}\left({\mathbf{c}}_{.l}^{(21)}\right)\right)}_{\beta }^{\beta }\eta \right]\in {\mathbb{H}}^{1\times n},& l=1,\dots,n\end{array}.\end{array} $$Here c q . ( 21 ) $ {\mathbf{c}}_{q.}^{(21)}$ and c . l ( 21 ) $ {\mathbf{c}}_{.l}^{(21)}$ are the qth row and the lth column of C 21 = H * C 2 A 2 η $ {\mathbf{C}}_{21}={\mathbf{H}}^{\mathrm{*}}{\mathbf{C}}_2{\mathbf{A}}_2^{\eta }$.

  (iii) The third term X 3 = T C 2 ( H η * ) = ( x ij ( 3 ) ) $ {\mathbf{X}}_3={\mathbf{T}}^{\dagger }{\mathbf{C}}_2({\mathbf{H}}^{\eta \mathrm{*}}{)}^{\dagger }=\left({x}_{{ij}}^{(3)}\right)$ of (15) can be obtained similarly. So,

x ij ( 3 ) = β J r 4 , n { i } cde t i ( ( T * T ) . i ( d . j H ) ) β β β J r 4 , n | T * T | β β α I r 3 , n | H * H | α α , $$ {x}_{{ij}}^{(3)}=\frac{\sum_{\beta \in {J}_{{r}_4,n}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left({\left({\mathbf{T}}^{\mathrm{*}}\mathbf{T}\right)}_{.i}\left({\mathbf{d}}_{.j}^H\right)\right)}_{\beta }^{\beta }}{\sum_{\beta \in {J}_{{r}_4,n}} {\left|{\mathbf{T}}^{\mathrm{*}}\mathbf{T}\right|}_{\beta }^{\beta }\sum_{\alpha \in {I}_{{r}_3,n}} {\left|{\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right|}_{\alpha }^{\alpha }}, $$(26)or

x ij ( 3 ) = - η α I r 3 , n { j } rde t j ( ( H * H ) j . ( d i . T ) ) α α η β J r 4 , n | T * T | β β α I r 3 , n | H * H | α α , $$ {x}_{{ij}}^{(3)}=\frac{-\eta \sum_{\alpha \in {I}_{{r}_3,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left({\left({\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right)}_{j.}\left({\mathbf{d}}_{i.}^T\right)\right)}_{\alpha }^{\alpha }\eta }{\sum_{\beta \in {J}_{{r}_4,n}} {\left|{\mathbf{T}}^{\mathrm{*}}\mathbf{T}\right|}_{\beta }^{\beta }\sum_{\alpha \in {I}_{{r}_3,n}} {\left|{\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right|}_{\alpha }^{\alpha }}, $$(27)where

d . j H = [ - η α I r 3 , n { j } rde t j ( ( H * H ) j . ( c q . ( 22 ) , η ) ) α α η ] H n × 1 ,   q = 1 , , n , d i . T = [ - η β J r 4 , n { i } cde t i ( ( T * T ) . i ( c . l ( 22 ) ) ) β β η ] H 1 × n ,   l = 1 , , n . $$ \begin{array}{c}\begin{array}{cc}{\mathbf{d}}_{.j}^H=\left[-\eta \sum_{\alpha \in {I}_{{r}_3,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left({\left({\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right)}_{j.}\left({\mathbf{c}}_{q.}^{(22),\eta }\right)\right)}_{\alpha }^{\alpha }\eta \right]\in {\mathbb{H}}^{n\times 1},\enspace & q=1,\dots,n,\end{array}\\ \begin{array}{cc}{\mathbf{d}}_{i.}^T=\left[-\eta \sum_{\beta \in {J}_{{r}_4,n}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left({\left({\mathbf{T}}^{\mathrm{*}}\mathbf{T}\right)}_{.i}\left({\mathbf{c}}_{.l}^{(22)}\right)\right)}_{\beta }^{\beta }\eta \right]\in {\mathbb{H}}^{1\times n},\enspace & l=1,\dots,n.\end{array}\end{array} $$Here c q . ( 22 ) $ {c}_{q.}^{(22)}$, c . l ( 22 ) $ {c}_{.l}^{(22)}$ are the qth row and the lth column of C 22 = T * C 2 H η $ {\mathbf{C}}_{22}={\mathbf{T}}^{\mathrm{*}}{\mathbf{C}}_2{\mathbf{H}}^{\eta }$.

  (iv) Now consider the fourth term X 4 = H A 2 T A 2 A 1 C 1 ( A 1 η * ) Q A 2 = ( x ij ( 4 ) ) $ {\mathbf{X}}_4={\mathbf{H}}^{\dagger }{\mathbf{A}}_2{\mathbf{T}}^{\dagger }{\mathbf{A}}_2{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1({\mathbf{A}}_1^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{Q}}_{{A}_2}=\left({x}_{{ij}}^{(4)}\right)$ of (15). Taking into account (4) for the determinantal representations of H † $ {\mathbf{H}}^{\dagger }$ and T † $ {\mathbf{T}}^{\dagger }$, we get

x ij ( 4 ) = s = 1 n z = 1 n f = 1 n β J r 3 , n { i } cde t i ( ( H * H ) . i ( a . s ( 2 , H ) ) ) β β β J r 4 , n { s } cde t s ( ( T * T ) . s ( a . z ( 2 , T ) ) ) β β   x zf ( 1 )   q fj β J r 3 , n | H * H | β β β J r 4 , n | T * T | β β , $$ {x}_{{ij}}^{(4)}=\frac{\sum_{s=1}^n \sum_{z=1}^n \sum_{f=1}^n \sum_{\beta \in {J}_{{r}_3,n}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left({\left({\mathbf{H}}^{\mathbf{*}}\mathbf{H}\right)}_{.i}\left({\mathbf{a}}_{.s}^{(2,H)}\right)\right)}_{\beta }^{\beta }\sum_{\beta \in {J}_{{r}_4,n}\left\{s\right\}} \mathrm{cde}{\mathrm{t}}_s{\left({\left({\mathbf{T}}^{\mathrm{*}}\mathbf{T}\right)}_{.s}\left({\mathbf{a}}_{.z}^{(2,T)}\right)\right)}_{\beta }^{\beta }\enspace {x}_{{zf}}^{(1)}\enspace {q}_{{fj}}}{\sum_{\beta \in {J}_{{r}_3,n}} {\left|{\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right|}_{\beta }^{\beta }\sum_{\beta \in {J}_{{r}_4,n}} {\left|{\mathbf{T}}^{\mathrm{*}}\mathbf{T}\right|}_{\beta }^{\beta }}, $$(28)Here a . i ( 2 , H ) $ {{a}}_{.i}^{(2,H)}$, a . i ( 2 , T ) $ {{a}}_{.i}^{(2,T)}$ denote the ith columns of H * A 2 $ {\mathbf{H}}^{\mathbf{*}}{\mathbf{A}}_2$ and T * A 2 $ {\mathbf{T}}^{\mathrm{*}}{\mathbf{A}}_2$, respectively. x zf ( 1 ) $ {x}_{{zf}}^{(1)}$ is the (zf)th element of the first term that is obtained in point (i). q fj is the (fj)th element of Q A 2 $ {\mathbf{Q}}_{{A}_2}$ that, by (8), can be expressed as

q fj = α I r 2 , n { j } rde t j ( ( A 2 * A 2 ) j . ( a ̇ f . ( 2 ) ) ) α α α I r 2 , n | A 2 * A 2 | α α , $$ {q}_{{fj}}=\frac{\sum_{\alpha \in {I}_{{r}_2,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left({\left({\mathbf{A}}_2^{\mathrm{*}}{\mathbf{A}}_2\right)}_{j.}\left({\boldsymbol{\dot{{\rm a}}}}_{f.}^{(2)}\right)\right)}_{\alpha }^{\alpha }}{\sum_{\alpha \in {I}_{{r}_2,n}} {\left|{\mathbf{A}}_2^{\mathrm{*}}{\mathbf{A}}_2\right|}_{\alpha }^{\alpha }}, $$where a ̇ f . ( 2 ) $ {\boldsymbol{\dot{{\rm a}}}}_{f.}^{(2)}$ is the fth row of A 2 * A 2 $ {\mathbf{A}}_2^{\mathrm{*}}{\mathbf{A}}_2$. Denote

q zj ( 1 ) f = 1 n x zf ( 1 )   α I r 2 , n { j } rde t j ( ( A 2 * A 2 ) j . ( a ̇ f . ( 2 ) ) ) α α = α I r 2 , n { j } rde t j ( ( A 2 * A 2 ) j . ( x ̃ z . ( 1 ) ) ) α α $$ {q}_{{zj}}^{(1)}:=\sum_{f=1}^n {x}_{{zf}}^{(1)}\enspace \sum_{\alpha \in {I}_{{r}_2,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left({\left({\mathbf{A}}_2^{\mathrm{*}}{\mathbf{A}}_2\right)}_{j.}\left({\boldsymbol{\dot{{\rm a}}}}_{f.}^{(2)}\right)\right)}_{\alpha }^{\alpha }=\sum_{\alpha \in {I}_{{r}_2,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left({\left({\mathbf{A}}_2^{\mathrm{*}}{\mathbf{A}}_2\right)}_{j.}\left({\boldsymbol{\tilde{{\rm x}}}}_{z.}^{(1)}\right)\right)}_{\alpha }^{\alpha } $$(29)where x ̃ z . ( 1 ) $ {\boldsymbol{\tilde{{\rm x}}}}_{z.}^{(1)}$ is the zth row of X ̃ 1 = X 1 A 2 * A 2 $ {\boldsymbol{\tilde{{\rm X}}} }_1={\mathbf{X}}_1{\mathbf{A}}_2^{\mathrm{*}}{\mathbf{A}}_2$ for all z, j = 1,…, n and X 1 is found in the point (i). Construct the matrix Q 1 = ( q zj ( 1 ) ) H n × n $ {\mathbf{Q}}_1=({q}_{{zj}}^{(1)})\in {\mathbb{H}}^{n\times n}$. Further, denote

t sj ( 1 ) z = 1 n β J r 4 , n { s } cde t s ( ( T * T ) . s ( a . z ( 2 , T ) ) ) β β q zj ( 1 ) = β J r 4 , n { s } cde t s ( ( T * T ) . s ( t ̃ . j ) ) β β , $$ {t}_{{sj}}^{(1)}:=\sum_{z=1}^n \sum_{\beta \in {J}_{{r}_4,n}\left\{s\right\}} \mathrm{cde}{\mathrm{t}}_s{\left({\left({\mathbf{T}}^{\mathrm{*}}\mathbf{T}\right)}_{.s}\left({\mathbf{a}}_{.z}^{(2,T)}\right)\right)}_{\beta }^{\beta }{q}_{{zj}}^{(1)}=\sum_{\beta \in {J}_{{r}_4,n}\left\{s\right\}} \mathrm{cde}{\mathrm{t}}_s{\left({\left({\mathbf{T}}^{\mathrm{*}}\mathbf{T}\right)}_{.s}\left({\boldsymbol{\tilde{{\rm t}}}}_{.j}\right)\right)}_{\beta }^{\beta }, $$where t ̃ . j $ {\boldsymbol{\tilde{{\rm t}}}}_{.j}$ is the jth column of T ̃ = T * A 2 Q 1 $ \boldsymbol{\tilde{{\rm T}}}={\mathbf{T}}^{\mathrm{*}}{\mathbf{A}}_2{\mathbf{Q}}_1$ and construct the matrix T 1 = ( t sj ( 1 ) ) H n × n $ {\mathbf{T}}_1=({t}_{{sj}}^{(1)})\in {\mathbb{H}}^{n\times n}$. Finally, denote H ̃ H * A 2 T 1 $ \boldsymbol{\tilde{{\rm H}}}:={\mathbf{H}}^{\mathrm{*}}{\mathbf{A}}_2{\mathbf{T}}_1$. From these notations and equation (28), it follows

x ij ( 4 ) = β J r 3 , n { i } cde t i ( ( H * H ) . i ( h ̃ . j ) ) β β β J r 3 , n | H * H | β β β J r 4 , n | T * T | β β α I r 2 , n | A 2 * A 2 | α α , $$ {x}_{{ij}}^{(4)}=\frac{\sum_{\beta \in {J}_{{r}_3,n}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left({\left({\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right)}_{.i}\left({\boldsymbol{\tilde{{\rm h}}}}_{.j}\right)\right)}_{\beta }^{\beta }}{\sum_{\beta \in {J}_{{r}_3,n}} {\left|{\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right|}_{\beta }^{\beta }\sum_{\beta \in {J}_{{r}_4,n}} {\left|{\mathbf{T}}^{\mathrm{*}}\mathbf{T}\right|}_{\beta }^{\beta }\sum_{\alpha \in {I}_{{r}_2,n}} {\left|{\mathbf{A}}_2^{\mathrm{*}}{\mathbf{A}}_2\right|}_{\alpha }^{\alpha }}, $$(30)where h ̃ . j $ {\boldsymbol{\tilde{{\rm h}}}}_{.j}$ is the jth column of H ̃ $ \boldsymbol{\tilde{{\rm H}}}$.

  5. For X 5 = H A 2 A 1 C 1 ( A 1 η * ) Q A 2 = ( x ij ( 5 ) ) $ {\mathbf{X}}_5={\mathbf{H}}^{\dagger }{\mathbf{A}}_2{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1({\mathbf{A}}_1^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{Q}}_{{A}_2}=\left({x}_{{ij}}^{(5)}\right)$, we have

x ij ( 5 ) = s = 1 n f = 1 n β J r 3 , n { i } cde t i ( ( H * H ) . i ( a . s ( 2 , H ) ) ) β β   x sf ( 1 )   q fj β J r 3 , n | H * H | β β . $$ {x}_{{ij}}^{(5)}=\frac{\sum_{s=1}^n \sum_{f=1}^n \sum_{\beta \in {J}_{{r}_3,n}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left({\left({\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right)}_{.i}\left({\mathbf{a}}_{.s}^{(2,H)}\right)\right)}_{\beta }^{\beta }\enspace {x}_{{sf}}^{(1)}\enspace {q}_{{fj}}}{\sum_{\beta \in {J}_{{r}_3,n}} {\left|{\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right|}_{\beta }^{\beta }}. $$

Denote H ̂ H * A 2 Q 1 $ \boldsymbol{\hat{{\rm H}}}:={\mathbf{H}}^{\mathrm{*}}{\mathbf{A}}_2{\mathbf{Q}}_1$, where Q 1 = ( q sj ( 1 ) ) $ {\mathbf{Q}}_1=({q}_{{sj}}^{(1)})$ is determined in (29). So, similarly to the previous case, we obtain

x ij ( 5 ) = β J r 3 , n { i } cde t i ( ( H * H ) . i ( h ̂ . j ) ) β β β J r 3 , n | H * H | β β α I r 2 , n | A 2 * A 2 | α α , $$ {x}_{{ij}}^{(5)}=\frac{\sum_{\beta \in {J}_{{r}_3,n}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left({\left({\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right)}_{.i}\left({\boldsymbol{\hat{{\rm h}}}}_{.j}\right)\right)}_{\beta }^{\beta }}{\sum_{\beta \in {J}_{{r}_3,n}} {\left|{\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right|}_{\beta }^{\beta }\sum_{\alpha \in {I}_{{r}_2,n}} {\left|{\mathbf{A}}_2^{\mathrm{*}}{\mathbf{A}}_2\right|}_{\alpha }^{\alpha }}, $$(31)where h ̂ . j $ {\boldsymbol{\hat{{\rm h}}}}_{.j}$ is the jth column of H ̂ $ \boldsymbol{\hat{{\rm H}}}$.

  6. Consider the sixth term X 6 = H A 2 T C 2 ( A 2 η * ) = ( x ij ( 6 ) ) $ {\mathbf{X}}_6={\mathbf{H}}^{\dagger }{\mathbf{A}}_2{\mathbf{T}}^{\dagger }{\mathbf{C}}_2({\mathbf{A}}_2^{\eta \mathrm{*}}{)}^{\dagger }=\left({x}_{{ij}}^{(6)}\right)$. So,

x ij ( 6 ) = q = 1 n β J r 3 , n { i } cde t i ( ( H * H ) . i ( a . q ( 2 , H ) ) ) β β ϕ qj β J r 3 , n | H * H | β β β J r 4 , n | T * T | β β β J r 2 , n | A 2 * A 2 | β β , $$ {x}_{{ij}}^{(6)}=\frac{\sum_{q=1}^n \sum_{\beta \in {J}_{{r}_3,n}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left({\left({\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right)}_{.i}\left({\mathbf{a}}_{.q}^{(2,H)}\right)\right)}_{\beta }^{\beta }{\phi }_{{qj}}}{\sum_{\beta \in {J}_{{r}_3,n}} {\left|{\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right|}_{\beta }^{\beta }\sum_{\beta \in {J}_{{r}_4,n}} {\left|{\mathbf{T}}^{\mathrm{*}}\mathbf{T}\right|}_{\beta }^{\beta }\sum_{\beta \in {J}_{{r}_2,n}} {\left|{\mathbf{A}}_2^{\mathrm{*}}{\mathbf{A}}_2\right|}_{\beta }^{\beta }}, $$(32)where

ϕ qj = β J r 4 , n { q } cde t q ( ( T * T ) .   q ( φ . j A 2 ) ) β β = - η α I r 2 , n { j } rde t j ( ( A 2 * A 2 ) j . ( φ q   . T ) ) α α η , $$ {\phi }_{{qj}}=\sum_{\beta \in {J}_{{r}_4,n}\left\{q\right\}} \mathrm{cde}{\mathrm{t}}_q{\left({\left({\mathbf{T}}^{\mathrm{*}}\mathbf{T}\right)}_{.\enspace q}\left({\phi }_{.j}^{{A}_2}\right)\right)}_{\beta }^{\beta }=-\eta \sum_{\alpha \in {I}_{{r}_2,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left({\left({\mathbf{A}}_2^{\mathrm{*}}{\mathbf{A}}_2\right)}_{j.}\left({\phi }_{q\enspace.}^T\right)\right)}_{\alpha }^{\alpha }\eta, $$(33)and

φ . j A 2 = [ - η α I r 2 , n { j } rde t j ( ( A 2 * A 2 ) j . ( c q . ( 23 ) , η ) ) α α η ] H n × 1 , q = 1 , , n ,   $$ \begin{array}{cc}{\varphi }_{.j}^{{A}_2}=\left[-\eta \sum_{\alpha \in {I}_{{r}_2,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left({\left({\mathbf{A}}_2^{\mathrm{*}}{\mathbf{A}}_2\right)}_{j.}\left({\mathbf{c}}_{q.}^{(23),\eta }\right)\right)}_{\alpha }^{\alpha }\eta \right]\in {\mathbb{H}}^{n\times 1},& q=1,\dots,n,\end{array}\enspace $$

φ q . T = [ - η β J r 4 , n { q } cde t q ( ( T * T ) . q ( c . l ( 23 ) ) ) β β η ] H 1 × n ,   l = 1 , , n . $$ \begin{array}{cc}{\varphi }_{q.}^T=\left[-\eta \sum_{\beta \in {J}_{{r}_4,n}\left\{q\right\}} \mathrm{cde}{\mathrm{t}}_q{\left({\left({\mathbf{T}}^{\mathrm{*}}\mathbf{T}\right)}_{.q}\left({\mathbf{c}}_{.l}^{(23)}\right)\right)}_{\beta }^{\beta }\eta \right]\in {\mathbb{H}}^{1\times n},\enspace & l=1,\dots,n.\end{array} $$Here c . l ( 23 ) $ {\mathbf{c}}_{.l}^{(23)}$ is the lth column of C 23 = T * C 2 A 2 η $ {\mathbf{C}}_{23}={\mathbf{T}}^{\mathrm{*}}{\mathbf{C}}_2{\mathbf{A}}_2^{\eta }$ and c q . ( 23 ) , η $ {\mathbf{c}}_{q.}^{(23),\eta }$ is the qth row of C 23 η $ {\mathbf{C}}_{23}^{\eta }$. Construct the matrix Φ = ( ϕ qj ) $ \mathbf{\Phi }=\left({\phi }_{{qj}}\right)$ such that ϕ qj is determined in (33) and denote Φ ̃ H * A 2 Φ $ \tilde{\Phi}:={\mathbf{H}}^{\mathrm{*}}{\mathbf{A}}_2\mathrm{\Phi }$. From this notation and equation (32), it follows

x ij ( 6 ) = β J r 3 , n { i } cde t i ( ( H * H ) . i ( ϕ ̃ . j ) ) β β β J r 3 , n | H * H | β β β J r 4 , n | T * T | β β β J r 2 , n | A 2 * A 2 | β β , $$ {x}_{{ij}}^{(6)}=\frac{\sum_{\beta \in {J}_{{r}_3,n}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left({\left({\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right)}_{.i}\left({\tilde{\phi}}_{.j}\right)\right)}_{\beta }^{\beta }}{\sum_{\beta \in {J}_{{r}_3,n}} {\left|{\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right|}_{\beta }^{\beta }\sum_{\beta \in {J}_{{r}_4,n}} {\left|{\mathbf{T}}^{\mathrm{*}}\mathbf{T}\right|}_{\beta }^{\beta }\sum_{\beta \in {J}_{{r}_2,n}} {\left|{\mathbf{A}}_2^{\mathrm{*}}{\mathbf{A}}_2\right|}_{\beta }^{\beta }}, $$(34)where ϕ ̃ . j $ {\tilde{\phi}}_{.j}$ is the jth column of Φ ̃ $ \boldsymbol{\tilde{\Phi}}$.

  7. Finally, consider the seventh term X 7 = T A 2 A 1 C 1 ( A 1 η * ) A 2 η * ( H η * ) = ( x ij ( 7 ) ) $ {\mathbf{X}}_7={\mathbf{T}}^{\dagger }{\mathbf{A}}_2{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1({\mathbf{A}}_1^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{A}}_2^{\eta \mathrm{*}}({\mathbf{H}}^{\eta \mathrm{*}}{)}^{\dagger }=\left({x}_{{ij}}^{(7)}\right)$ of (15). Taking into account (4) for T and (6) for ( H η * ) $ ({H}^{\eta \mathrm{*}}{)}^{\dagger }$, we get

x ij ( 7 ) = q = 1 n f = 1 n β J r 4 , n { i } cde t i ( ( T * T ) . i ( a . q ( 2 , T ) ) ) β β   x qf ( 1 ) ( - η α I r 3 , n { j } rde t j ( ( H * H ) j . ( a f . ( 2 , H , η * ) ) ) α α η ) β J r 4 , n | T * T | β β α I r 3 , n | H * H | α α , $$ {x}_{{ij}}^{(7)}=\frac{\sum_{q=1}^n \sum_{f=1}^n \sum_{\beta \in {J}_{{r}_4,n}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left({\left({\mathbf{T}}^{\mathrm{*}}\mathbf{T}\right)}_{.i}\left({\mathbf{a}}_{.q}^{(2,T)}\right)\right)}_{\beta }^{\beta }\enspace {x}_{{qf}}^{(1)}\left(-\eta \sum_{\alpha \in {I}_{{r}_3,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left({\left({\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right)}_{j.}\left({\mathbf{a}}_{f.}^{(2,H,\eta \mathrm{*})}\right)\right)}_{\alpha }^{\alpha }\eta \right)}{\sum_{\beta \in {J}_{{r}_4,n}} {\left|{\mathbf{T}}^{\mathrm{*}}\mathbf{T}\right|}_{\beta }^{\beta }\sum_{\alpha \in {I}_{{r}_3,n}} {\left|{\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right|}_{\alpha }^{\alpha }}, $$(35)where a . q ( 2 , T ) $ {\mathbf{a}}_{.q}^{(2,T)}$, a f . ( 2 , H , η * ) $ {\mathbf{a}}_{f.}^{(2,H,\eta \mathrm{*})}$ are the qth column of T * A 2 $ {\mathbf{T}}^{\mathrm{*}}{\mathbf{A}}_2$ and the fth row of A 2 η * H $ {\mathbf{A}}_2^{\eta \mathrm{*}}\mathbf{H}$, respectively. Denote

ω qj f = 1 n x qf ( 1 ) ( - η α I r 3 , n { j } rde t j ( ( H * H ) j . ( a f . ( 2 , H , η * ) ) ) α α η ) = - η α I r 3 , n { j } rde t j ( ( H * H ) j . ( x ̂ q . ( 1 , η ) ) ) α α η , $$ {\omega }_{{qj}}:=\sum_{f=1}^n {x}_{{qf}}^{(1)}\left(-\eta \sum_{\alpha \in {I}_{{r}_3,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left({\left({\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right)}_{j.}\left({\mathbf{a}}_{f.}^{(2,H,\eta \mathrm{*})}\right)\right)}_{\alpha }^{\alpha }\eta \right)=-\eta \sum_{\alpha \in {I}_{{r}_3,n}\left\{j\right\}} \mathrm{rde}{\mathrm{t}}_j{\left({\left({\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right)}_{j.}\left({\boldsymbol{\hat{{\rm x}}}}_{q.}^{(1,\eta )}\right)\right)}_{\alpha }^{\alpha }\eta, $$(36)where x ̂ q . ( 1 , η ) $ {\boldsymbol{\hat{{\rm x}}}}_{q.}^{(1,\eta )}$ is the qth row of X ̂ 1 η X 1 η A 2 η * H $ {\boldsymbol{\hat{{\rm X}}}}_1^{\eta }:={\mathbf{X}}_1^{\eta }{\mathbf{A}}_2^{\eta \mathrm{*}}\mathbf{H}$. Construct the matrix Ω = ( ω qj ) $ \mathrm{\Omega }=\left({\omega }_{{qj}}\right)$ such that ω qj is determined in (36) and denote Ω ̂ T * A 2 Ω $ \widehat{\mathbf{\Omega }}:={\mathbf{T}}^{\mathrm{*}}{\mathbf{A}}_2\mathbf{\Omega }$. From these notations and equation (35), it follows

x ij ( 7 ) = β J r 4 , n { i } cde t i ( ( T * T ) . i ( ω ̂ . j ) ) β β β J r 4 , n | T * T | β β α I r 3 , n | H * H | α α , $$ {x}_{{ij}}^{(7)}=\frac{\sum_{\beta \in {J}_{{r}_4,n}\left\{i\right\}} \mathrm{cde}{\mathrm{t}}_i{\left({\left({\mathbf{T}}^{\mathbf{*}}\mathbf{T}\right)}_{.i}\left({\widehat{\omega }}_{.j}\right)\right)}_{\beta }^{\beta }}{\sum_{\beta \in {J}_{{r}_4,n}} {\left|{\mathbf{T}}^{\mathrm{*}}\mathbf{T}\right|}_{\beta }^{\beta }\sum_{\alpha \in {I}_{{r}_3,n}} {\left|{\mathbf{H}}^{\mathrm{*}}\mathbf{H}\right|}_{\alpha }^{\alpha }}, $$(37)where ω ̂ . j $ {\widehat{\omega }}_{.j}$ is the jth column of Ω ̂ $ \widehat{\mathbf{\Omega }}$.

Therefore, we have proved the following theorem.

Theorem 3.1.

Suppose that A 1 H r 1 m × n $ {\mathbf{A}}_1\in {\mathbb{H}}_{{r}_1}^{m\times n}$, A 2 H r 2 k × n $ {\mathbf{A}}_2\in {\mathbb{H}}_{{r}_2}^{k\times n}$, and rank H = rank ( A 2 L A 1 ) = r 3 $ \mathrm{rank}\mathbf{H}=\mathrm{rank}\left({\mathbf{A}}_2{\mathbf{L}}_{{A}_1}\right)={r}_3$, rank T = rank ( R H A 2 ) = r 4 $ \mathrm{rank}\mathbf{T}=\mathrm{rank}\left({{R}}_H{\mathbf{A}}_2\right)={r}_4$. Then the components of the partial solution (15) to the system (2) are

x ij = δ = 1 4 x ij ( δ ) - δ = 5 7 x ij ( δ ) , i , j = 1 , , n , $$ \begin{array}{cc}{x}_{{ij}}=\sum_{\delta =1}^4 {x}_{{ij}}^{(\delta )}-\sum_{\delta =5}^7 {x}_{{ij}}^{(\delta )},& i,j=1,\dots,n,\end{array} $$ where the summand x ij ( 1 ) $ {x}_{{ij}}^{(1)}$ has the determinantal representations (19) and (23), x ij ( 2 ) $ {x}_{{ij}}^{(2)}$ has (24) and (25), x ij ( 3 ) $ {x}_{{ij}}^{(3)}$ has (26) and (27), x ij ( 4 ) $ {x}_{{ij}}^{(4)}$ has (30), x ij ( 5 ) $ {x}_{{ij}}^{(5)}$ has (31), x ij ( 6 ) $ {x}_{{ij}}^{(6)}$ has (34), and x ij ( 7 ) $ {x}_{{ij}}^{(7)}$ has (37).
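The building blocks of Theorem 3.1 are quaternion Moore–Penrose inverses. As a numerical side check (a sketch, not the paper's determinantal method), a quaternion matrix Q = A + Bj with complex A, B can be embedded into its complex adjoint matrix; `numpy.linalg.pinv` of the embedded matrix then satisfies the four Penrose equations (1)–(4) from the Introduction and stays inside the image of the embedding, so it represents the quaternion Moore–Penrose inverse. The helper names `chi`, `Jm`, `Jn` are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def chi(A, B):
    """Complex adjoint matrix of the quaternion matrix Q = A + B j,
    where A and B are complex arrays of the same shape."""
    return np.block([[A, B], [-B.conj(), A.conj()]])

m, n = 4, 3
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
B = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
Q = chi(A, B)           # represents an m x n quaternion matrix

P = np.linalg.pinv(Q)   # Moore-Penrose inverse of the embedding

# The four Penrose equations (1)-(4) hold numerically:
assert np.allclose(Q @ P @ Q, Q)
assert np.allclose(P @ Q @ P, P)
assert np.allclose((Q @ P).conj().T, Q @ P)
assert np.allclose((P @ Q).conj().T, P @ Q)

# P lies in the image of chi (a matrix M equals chi(...) iff
# J_n conj(M) J_m^{-1} = M, where J_k = chi(0, I_k) and J_k^{-1} = -J_k),
# so P represents a quaternion matrix: the quaternion MP inverse of Q.
Jm = chi(np.zeros((m, m)), np.eye(m))
Jn = chi(np.zeros((n, n)), np.eye(n))
assert np.allclose(Jn @ P.conj() @ (-Jm), P)
```

By the uniqueness of the Moore–Penrose inverse, the last assertion is what guarantees that `pinv` of the embedding agrees with the embedding of the quaternion Moore–Penrose inverse.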

Remark 3.2.

Theorem 3.1 gives a direct method, an analog of Cramer's rule, for finding the general solution to the system (2). It requires only the ranks of the given matrices and that they satisfy the consistency conditions (13) and (14). Now let

C 1 = C 1 η * , C 2 = C 2 η * , $$ \begin{array}{cc}{\mathbf{C}}_1={\mathbf{C}}_1^{\eta \mathrm{*}},& {\mathbf{C}}_2={\mathbf{C}}_2^{\eta \mathrm{*}},\end{array} $$(38)

C 1 = - C 1 η * , C 2 = - C 2 η * . $$ \begin{array}{cc}{\mathbf{C}}_1=-{\mathbf{C}}_1^{\eta \mathrm{*}},& {\mathbf{C}}_2=-{\mathbf{C}}_2^{\eta \mathrm{*}}.\end{array} $$(39)
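To make the restrictions concrete, the following is an entrywise sketch of (38) for η = i, under the standard involution $h^{\eta *}=-\eta \bar{h}\eta$ assumed in the η-Hermicity literature (a side remark, not part of the derivation):

```latex
% For h = h_0 + h_1\mathbf{i} + h_2\mathbf{j} + h_3\mathbf{k} and \eta = \mathbf{i}:
h^{\mathbf{i}\ast} \;=\; -\mathbf{i}\,\bar{h}\,\mathbf{i}
                   \;=\; h_0 - h_1\mathbf{i} + h_2\mathbf{j} + h_3\mathbf{k}.
% Hence C_1 = C_1^{\mathbf{i}\ast} in (38) means
% (c_1)_{st} = -\mathbf{i}\,\overline{(c_1)_{ts}}\,\mathbf{i} for all s, t;
% in particular, every diagonal entry of C_1 has zero i-component.
```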

The partial η-Hermitian solution Y 1 and η-skew-Hermitian solution Y 2 to the system (2) with the restrictions (38) and (39), respectively, can be expressed as

Y 1 = 1 2 ( X + X η * ) , Y 2 = 1 2 ( X - X η * ) , $$ \begin{array}{cc}{\mathbf{Y}}_1=\frac{1}{2}\left(\mathbf{X}+{\mathbf{X}}^{\eta \mathrm{*}}\right),& {\mathbf{Y}}_2=\frac{1}{2}\left(\mathbf{X}-{\mathbf{X}}^{\eta \mathrm{*}}\right),\end{array} $$where X is an arbitrary solution to the system (2). Due to the expression of the general solution (15) and taking into account (38), we have

X η * = ( x ij η * ) = ( x ̅ ji η ) = A 1 C 1 ( A 1 η * ) + A 2 C 2 ( H η * ) + H C 2 ( T η * ) + + P A 2 ( A 1 η * ) C 1 A 1 A 2 ( T η * ) A 2 η * ( H η * ) - P A 2 ( A 1 η * ) C 1 A 1 A 2 η * ( H η * ) - - A 2 C 2 ( T η * ) A 2 η * ( H η * ) - H A 2 A 1 C 1 ( A 1 η * ) A 2 η * ( T η * ) , $$ \begin{array}{cc}& {\mathbf{X}}^{\eta \mathrm{*}}=\left({x}_{{ij}}^{\eta \mathrm{*}}\right)=\left({{\bar{x}}_{{ji}}}^{\eta }\right)={\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1{\left({\mathbf{A}}_1^{\eta \mathrm{*}}\right)}^{\dagger }+{\mathbf{A}}_2^{\dagger }{\mathbf{C}}_2({\mathbf{H}}^{\eta \mathrm{*}}{)}^{\dagger }+{\mathbf{H}}^{\dagger }{\mathbf{C}}_2({\mathbf{T}}^{\eta \mathrm{*}}{)}^{\dagger }+\\ & +{\mathbf{P}}_{{A}_2}({\mathbf{A}}_1^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{C}}_1{\mathbf{A}}_1^{\dagger }{\mathbf{A}}_2({\mathbf{T}}^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{A}}_2^{\eta \mathrm{*}}({\mathbf{H}}^{\eta \mathrm{*}}{)}^{\dagger }-{\mathbf{P}}_{{A}_2}({\mathbf{A}}_1^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{C}}_1{\mathbf{A}}_1^{\dagger }{\mathbf{A}}_2^{\eta \mathrm{*}}({\mathbf{H}}^{\eta \mathrm{*}}{)}^{\dagger }-\\ & -{\mathbf{A}}_2^{\dagger }{\mathbf{C}}_2({\mathbf{T}}^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{A}}_2^{\eta \mathrm{*}}({\mathbf{H}}^{\eta \mathrm{*}}{)}^{\dagger }-{\mathbf{H}}^{\dagger }{\mathbf{A}}_2{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1({\mathbf{A}}_1^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{A}}_2^{\eta \mathrm{*}}({\mathbf{T}}^{\eta \mathrm{*}}{)}^{\dagger },\end{array} $$

On the other hand, taking into account (39), we obtain

X η * = - ( x ̅ ji η ) = - A 1 C 1 ( A 1 η * ) - A 2 C 2 ( H η * ) - H C 2 ( T η * ) - - P A 2 ( A 1 η * ) C 1 A 1 A 2 ( T η * ) A 2 η * ( H η * ) + P A 2 ( A 1 η * ) C 1 A 1 A 2 η * ( H η * ) + A 2 C 2 ( T η * ) A 2 η * ( H η * ) + H A 2 A 1 C 1 ( A 1 η * ) A 2 η * ( T η * ) , $$ \begin{array}{cc}& {\mathbf{X}}^{\eta \mathrm{*}}=-\left({{\bar{x}}_{{ji}}}^{\eta }\right)=-{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1{\left({\mathbf{A}}_1^{\eta \mathrm{*}}\right)}^{\dagger }-{\mathbf{A}}_2^{\dagger }{\mathbf{C}}_2({\mathbf{H}}^{\eta \mathrm{*}}{)}^{\dagger }-{\mathbf{H}}^{\dagger }{\mathbf{C}}_2({\mathbf{T}}^{\eta \mathrm{*}}{)}^{\dagger }-\\ & -{\mathbf{P}}_{{A}_2}({\mathbf{A}}_1^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{C}}_1{\mathbf{A}}_1^{\dagger }{\mathbf{A}}_2({\mathbf{T}}^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{A}}_2^{\eta \mathrm{*}}({\mathbf{H}}^{\eta \mathrm{*}}{)}^{\dagger }+{\mathbf{P}}_{{A}_2}({\mathbf{A}}_1^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{C}}_1{\mathbf{A}}_1^{\dagger }{\mathbf{A}}_2^{\eta \mathrm{*}}({\mathbf{H}}^{\eta \mathrm{*}}{)}^{\dagger }+\\ & {\mathbf{A}}_2^{\dagger }{\mathbf{C}}_2({\mathbf{T}}^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{A}}_2^{\eta \mathrm{*}}({\mathbf{H}}^{\eta \mathrm{*}}{)}^{\dagger }+{\mathbf{H}}^{\dagger }{\mathbf{A}}_2{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1({\mathbf{A}}_1^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{A}}_2^{\eta \mathrm{*}}({\mathbf{T}}^{\eta \mathrm{*}}{)}^{\dagger },\end{array} $$So, the partial η-Hermitian and η-skew-Hermitian solutions to the system (2) with the restrictions (38) and (39), respectively, have the following expression:

Y 1 = Y 2 = A 1 C 1 ( A 1 η * ) + 1 2 [ H C 2 ( A 2 η * ) + A 2 C 2 ( H η * ) ] + 1 2 [ T C 2 ( H η * ) + H C 2 ( T η * ) ] + 1 2 [ H A 2 T A 2 A 1 C 1 ( A 1 η * ) P A 2 + P A 2 ( A 1 η * ) C 1 A 1 A 2 ( T η * ) A 2 η * ( H η * ) ] - 1 2 [ H A 2 A 1 C 1 ( A 1 η * ) P A 2 + P A 2 ( A 1 η * ) C 1 A 1 A 2 η * ( H η * ) ] - 1 2 [ H A 2 T C 2 ( A 2 η * ) + A 2 C 2 ( T η * ) A 2 η * ( H η * ) ] - 1 2 [ T A 2 A 1 C 1 ( A 1 η * ) A 2 η * ( H η * ) + H A 2 A 1 C 1 ( A 1 η * ) A 2 η * ( T η * ) ] . $$ \begin{array}{cc}{\mathbf{Y}}_1={\mathbf{Y}}_2=& \\ & {\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1{\left({\mathbf{A}}_1^{\eta \mathrm{*}}\right)}^{\dagger }+\frac{1}{2}\left[{\mathbf{H}}^{\dagger }{\mathbf{C}}_2({\mathbf{A}}_2^{\eta \mathrm{*}}{)}^{\dagger }+{\mathbf{A}}_2^{\dagger }{\mathbf{C}}_2({\mathbf{H}}^{\eta \mathrm{*}}{)}^{\dagger }\right]+\frac{1}{2}\left[{\mathbf{T}}^{\dagger }{\mathbf{C}}_2({\mathbf{H}}^{\eta \mathrm{*}}{)}^{\dagger }+{\mathbf{H}}^{\dagger }{\mathbf{C}}_2({\mathbf{T}}^{\eta \mathrm{*}}{)}^{\dagger }\right]\\ & \begin{array}{c}+\frac{1}{2}\left[{\mathbf{H}}^{\dagger }{\mathbf{A}}_2{\mathbf{T}}^{\dagger }{\mathbf{A}}_2{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1({\mathbf{A}}_1^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{P}}_{{A}_2}+{\mathbf{P}}_{{A}_2}({\mathbf{A}}_1^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{C}}_1{\mathbf{A}}_1^{\dagger }{\mathbf{A}}_2({\mathbf{T}}^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{A}}_2^{\eta \mathrm{*}}({\mathbf{H}}^{\eta \mathrm{*}}{)}^{\dagger }\right]\\ -\frac{1}{2}\left[{\mathbf{H}}^{\dagger }{\mathbf{A}}_2{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1({\mathbf{A}}_1^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{P}}_{{A}_2}+{\mathbf{P}}_{{A}_2}({\mathbf{A}}_1^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{C}}_1{\mathbf{A}}_1^{\dagger }{\mathbf{A}}_2^{\eta \mathrm{*}}({\mathbf{H}}^{\eta \mathrm{*}}{)}^{\dagger }\right]\\ \begin{array}{c}-\frac{1}{2}\left[{\mathbf{H}}^{\dagger }{\mathbf{A}}_2{\mathbf{T}}^{\dagger }{\mathbf{C}}_2({\mathbf{A}}_2^{\eta \mathrm{*}}{)}^{\dagger }+{\mathbf{A}}_2^{\dagger }{\mathbf{C}}_2({\mathbf{T}}^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{A}}_2^{\eta \mathrm{*}}({\mathbf{H}}^{\eta \mathrm{*}}{)}^{\dagger }\right]\\ -\frac{1}{2}\left[{\mathbf{T}}^{\dagger }{\mathbf{A}}_2{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1({\mathbf{A}}_1^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{A}}_2^{\eta \mathrm{*}}({\mathbf{H}}^{\eta \mathrm{*}}{)}^{\dagger }+{\mathbf{H}}^{\dagger }{\mathbf{A}}_2{\mathbf{A}}_1^{\dagger }{\mathbf{C}}_1({\mathbf{A}}_1^{\eta \mathrm{*}}{)}^{\dagger }{\mathbf{A}}_2^{\eta \mathrm{*}}({\mathbf{T}}^{\eta \mathrm{*}}{)}^{\dagger }\right].\end{array}\end{array}\end{array} $$

The determinantal representations of Y 1 = ( y ij ( 1 ) ) $ {\mathbf{Y}}_1=({y}_{{ij}}^{(1)})$ and Y 2 = ( y ij ( 2 ) ) $ {\mathbf{Y}}_2=({y}_{{ij}}^{(2)})$ can be obtained componentwise as

y ij ( 1 ) = 1 2 ( x ij + x ̅ ji η ) , y ij ( 2 ) = 1 2 ( x ij - x ̅ ji η ) , $$ \begin{array}{cc}{y}_{{ij}}^{(1)}=\frac{1}{2}\left({x}_{{ij}}+{{\bar{x}}_{{ji}}}^{\eta }\right),& {y}_{{ij}}^{(2)}=\frac{1}{2}\left({x}_{{ij}}-{{\bar{x}}_{{ji}}}^{\eta }\right),\end{array} $$for all i, j = 1, …, n, where x ij is determined by Theorem 3.1.
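This symmetrization is easy to check numerically. The sketch below is our own illustration (not the paper's construction): it takes η = j, represents quaternion matrices by their complex adjoint matrices, and uses the standard convention X^{η*} = −ηX*η; the helper names `chi` and `eta_star` are assumptions. It verifies that Y1 is η-Hermitian and Y2 is η-skew-Hermitian for an arbitrary X:

```python
import numpy as np

rng = np.random.default_rng(1)

def chi(A, B):
    """Complex adjoint matrix of the quaternion matrix A + B j."""
    return np.block([[A, B], [-B.conj(), A.conj()]])

n = 3
X = chi(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)),
        rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

E = chi(np.zeros((n, n)), np.eye(n))   # the scalar quaternion eta = j

def eta_star(M):
    """eta-conjugate transpose M^{eta*} = -eta M^* eta (here eta = j)."""
    return -E @ M.conj().T @ E

Y1 = (X + eta_star(X)) / 2             # eta-Hermitian part
Y2 = (X - eta_star(X)) / 2             # eta-skew-Hermitian part

assert np.allclose(eta_star(Y1), Y1)   # Y1^{eta*} =  Y1
assert np.allclose(eta_star(Y2), -Y2)  # Y2^{eta*} = -Y2
assert np.allclose(Y1 + Y2, X)         # the two parts reassemble X
```

The assertions rely only on eta_star being an involution that reverses products under the embedding, which is exactly what makes the averaging formulas for Y1 and Y2 work.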

Conclusion

Using the noncommutative row and column determinants previously introduced by the author, the determinantal representations (analogs of Cramer's rule) of the general, η-Hermitian, and η-skew-Hermitian solutions to the system of quaternion matrix equations A 1 X A 1 η * = C 1 $ {\mathbf{A}}_1\mathbf{X}{\mathbf{A}}_1^{\eta \mathrm{*}}={\mathbf{C}}_1$, A 2 X A 2 η * = C 2 $ {\mathbf{A}}_2\mathbf{X}{\mathbf{A}}_2^{\eta \mathrm{*}}={\mathbf{C}}_2$ have been derived. For this purpose, determinantal representations of the Moore–Penrose inverse of a quaternion matrix and of its Hermitian adjoint and η-Hermitian adjoint matrices have been obtained and used.


Cite this article as: Kyrchei II 2019. Cramer’s rules for the system of quaternion matrix equations with η-Hermicity. 4open, 2, 24.
