Preprint
Article

The BiCG Algorithm for Solving the Minimal Frobenius Norm Solution of Generalized Sylvester Tensor Equation over the Quaternions

Submitted: 05 August 2024; Posted: 14 August 2024

A peer-reviewed article of this preprint also exists. This version is not peer-reviewed.
Abstract
In this paper, we develop an effective iterative algorithm to solve a generalized Sylvester tensor equation over the quaternions which includes several well-studied matrix/tensor equations as special cases. We show that, for any initial tensor, the algorithm converges within a finite number of iterations in the absence of round-off errors. Moreover, we show that the unique minimal Frobenius norm solution can be obtained by selecting special kinds of initial tensors. Numerical examples, including applications to a three-dimensional microscopic heat transport problem and to color video restoration, are presented to illustrate the practicality and validity of the proposed algorithm.
Subject: Physical Sciences - Mathematical Physics

1. Introduction

An order $N$ tensor $\mathcal{A} = (a_{i_1 \cdots i_N})$, $1 \le i_j \le I_j$ $(j = 1, \ldots, N)$, over a field $\mathbb{F}$ is a multidimensional array with $I_1 I_2 \cdots I_N$ entries in $\mathbb{F}$, where $N$ is a positive integer ([27,44]). The set of all such order $N$ tensors is denoted by $\mathbb{F}^{I_1 \times \cdots \times I_N}$. In the past decades, tensors have been actively studied due to their applications in many areas such as physics, computer vision, and data mining (see, e.g., [9,11,15,16,17,21,25,26,27,29,30,31,32,33,35,36,37,41,42,43,54,56]).
In this paper, we investigate the solvability of some tensor equations over the quaternions. This is motivated by recent research on tensor equations as well as a long history of research on matrix equations, briefly outlined as follows. It is well known that the Sylvester matrix equation
$$A X + Y B = C \tag{1}$$
and its generalized forms have been widely investigated and have found numerous applications in many areas.
During the past decades, many methods have been developed for solving Sylvester-type matrix equations over the quaternion algebra. For example, Kyrchei [28] gave explicit determinantal representation formulas for solutions of Equation (1). Heyouni et al. [20] presented the SGl-CMRH method, and a preconditioned framework for it, to solve matrix Equation (1) when $X = Y$. Zhang [53] investigated a general system of generalized Sylvester quaternion matrix equations. Ahmadi-Asl and Beik [1,2,7] presented effective iterative algorithms for solving different quaternion matrix equations. Song [47] investigated the general solution to a system of quaternion matrix equations by Cramer's rule. Wang et al. [48] considered the solvability of a system of constrained two-sided coupled generalized Sylvester quaternion matrix equations. Zhang et al. [52] obtained some special least squares solutions of the quaternion matrix equation $AXB + CXD = E$. Huang et al. [22] considered a modified conjugate gradient method to solve the generalized coupled Sylvester conjugate matrix equations.
Quaternions offer greater versatility and flexibility compared to real and complex numbers, particularly in addressing multidimensional problems. This unique property has attracted growing interest among scholars, leading to numerous valuable achievements in quaternion-related research (see, e.g., [13,19,23,34,38,50,51]). The tensor equation is a natural extension of the matrix equation.
In this paper, we consider the following generalized Sylvester tensor equation over $\mathbb{H}$:
$$\mathcal{X} \times_1 A^{(1)} + \mathcal{X} \times_2 A^{(2)} + \cdots + \mathcal{X} \times_N A^{(N)} + \mathcal{Y} \times_1 B^{(1)} + \mathcal{Y} \times_2 B^{(2)} + \cdots + \mathcal{Y} \times_N B^{(N)} = \mathcal{C}, \tag{2}$$
where the matrices $A^{(n)}, B^{(n)} \in \mathbb{H}^{I_n \times I_n}$ $(n = 1, 2, \ldots, N)$ and the tensor $\mathcal{C} \in \mathbb{H}^{I_1 \times \cdots \times I_N}$ are given, and the tensors $\mathcal{X}, \mathcal{Y} \in \mathbb{H}^{I_1 \times \cdots \times I_N}$ are unknown. The $n$-mode product of a tensor $\mathcal{X} \in \mathbb{H}^{I_1 \times \cdots \times I_N}$ with a matrix $A \in \mathbb{H}^{I_n \times I_n}$ is defined as
$$(\mathcal{X} \times_n A)_{i_1 \cdots i_{n-1} \, j \, i_{n+1} \cdots i_N} = \sum_{i_n = 1}^{I_n} a_{j i_n} x_{i_1 \cdots i_{n-1} i_n i_{n+1} \cdots i_N}.$$
Note that the n-mode product can also be expressed in terms of unfolded quaternion tensors:
$$\mathcal{Y} = \mathcal{X} \times_n A \iff Y_{[n]} = A X_{[n]},$$
where $X_{[n]}$ is the mode-$n$ unfolding of $\mathcal{X}$ ([27]).
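For readers who wish to experiment with the $n$-mode product, the identity above translates directly into a few lines of MATLAB. The following sketch (the helper name nmode_product is ours, not from the paper) computes $\mathcal{X} \times_n A$ for a real array via the mode-$n$ unfolding; for a quaternion tensor one applies it to the four real parts, as in Section 2.

```matlab
% Sketch: n-mode product Y = X x_n A via the unfolding identity Y_[n] = A*X_[n].
% Works on real arrays; quaternion tensors are handled componentwise later.
function Y = nmode_product(X, A, n)
    dims  = size(X);
    order = [n, 1:n-1, n+1:numel(dims)];           % bring mode n to the front
    Xn = reshape(permute(X, order), dims(n), []);  % mode-n unfolding X_[n]
    Yn = A * Xn;                                   % multiply on the left
    newdims = dims;  newdims(n) = size(A, 1);
    Y = ipermute(reshape(Yn, newdims(order)), order);
end
```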
We consider the following two problems associated with Equation (2).

Problem 1.1: Given the tensor $\mathcal{C} \in \mathbb{H}^{I_1 \times I_2 \times \cdots \times I_N}$ and the matrices $A^{(n)}, B^{(n)} \in \mathbb{H}^{I_n \times I_n}$ $(n = 1, 2, \ldots, N)$, find tensors $\tilde{\mathcal{X}}, \tilde{\mathcal{Y}} \in \mathbb{H}^{I_1 \times I_2 \times \cdots \times I_N}$ such that
$$\left\| \sum_{k=1}^{N} \tilde{\mathcal{X}} \times_k A^{(k)} + \tilde{\mathcal{Y}} \times_k B^{(k)} - \mathcal{C} \right\| = \min_{\mathcal{X}, \mathcal{Y}} \left\| \sum_{k=1}^{N} \mathcal{X} \times_k A^{(k)} + \mathcal{Y} \times_k B^{(k)} - \mathcal{C} \right\|.$$
Problem 1.2: Let the solution set of Problem 1.1 be denoted by $S_{XY}$. For given tensors $\mathcal{X}_0, \mathcal{Y}_0 \in \mathbb{H}^{I_1 \times I_2 \times \cdots \times I_N}$, find tensors $\check{\mathcal{X}}, \check{\mathcal{Y}} \in \mathbb{H}^{I_1 \times I_2 \times \cdots \times I_N}$ such that
$$\|\check{\mathcal{X}} - \mathcal{X}_0\| + \|\check{\mathcal{Y}} - \mathcal{Y}_0\| = \min_{(\mathcal{X}, \mathcal{Y}) \in S_{XY}} \left( \|\mathcal{X} - \mathcal{X}_0\| + \|\mathcal{Y} - \mathcal{Y}_0\| \right).$$
It is worth emphasizing that the tensor Equation (2) includes several well-studied matrix/tensor equations as special cases. For example, if $\mathcal{X}$ and $\mathcal{Y}$ in (2) are order 2 tensors, i.e., matrices, then Equation (2) reduces to the following extended Sylvester matrix equation
$$A^{(1)} X + X (A^{(2)})^T + B^{(1)} Y + Y (B^{(2)})^T = C.$$
In the case $B^{(n)} = 0$ $(n = 1, 2, \ldots, N)$, Equation (2) becomes
$$\mathcal{X} \times_1 A^{(1)} + \mathcal{X} \times_2 A^{(2)} + \cdots + \mathcal{X} \times_N A^{(N)} = \mathcal{C}, \tag{3}$$
which has been studied extensively in recent years. For instance, Saberi-Movahed et al. [45,46] investigated the SGMRES-BTF and SGCRO-BTF methods to solve Equation (3) over $\mathbb{R}$. Wang et al. [49] gave the conjugate gradient least squares method to solve Equation (3) over $\mathbb{H}$. Zhang and Wang [55] presented tensor forms of the bi-conjugate gradient (BiCG-BTF) and bi-conjugate residual (BiCR-BTF) methods for solving Equation (3) over the real field $\mathbb{R}$. Chen and Lu [10] considered a projection method with a Kronecker product preconditioner to solve Equation (3) over $\mathbb{R}$. Karimi and Dehghan [24] presented a tensor form of the global least squares method for finding an approximate solution of (3). Furthermore, Najafi-Kalyani et al. [39] derived some iterative algorithms based on the global Hessenberg process in tensor form to solve Equation (3). Consider Equation (3) over $\mathbb{R}$ with $N = 3$, that is,
$$\mathcal{X} \times_1 A^{(1)} + \mathcal{X} \times_2 A^{(2)} + \mathcal{X} \times_3 A^{(3)} = \mathcal{C}. \tag{4}$$
It has been shown that Equation (4) plays an important role in finite difference methods [3], thermal radiation [29], information retrieval [33], finite element methods [14], and microscopic heat transport problems [37]. Therefore, our study of Equation (2) provides a unified treatment of these matrix/tensor equations.
The rest of this paper is organized as follows. In Section 2, we recall some definitions and notation, and prove several lemmas for transforming Equation (2). In Section 3, we develop the BiCG iterative algorithm for solving the quaternion tensor Equation (2) and prove its correctness. We also show that the minimal Frobenius norm solution can be obtained by choosing special kinds of initial tensors. Some numerical examples are presented in Section 4 to illustrate the efficiency and applications of the proposed algorithm. Finally, we summarize our contributions in Section 5.

2. Preliminaries

We first recall some notation and definitions. For two complex matrices $U = (u_{ij}) \in \mathbb{C}^{m \times n}$ and $V = (v_{ij}) \in \mathbb{C}^{p \times q}$, the symbol $U \otimes V = (u_{ij} V) \in \mathbb{C}^{mp \times nq}$ denotes the Kronecker product of $U$ and $V$.
The operator $\operatorname{vec}(\cdot)$ is defined as follows: for a matrix $A$ and a tensor $\mathcal{X}$,
$$\operatorname{vec}(A) = \left( a_1^T, a_2^T, \ldots, a_n^T \right)^T, \quad \text{and} \quad \operatorname{vec}(\mathcal{X}) = \operatorname{vec}(X_{[1]}),$$
respectively, where $a_k$ is the $k$th column of $A$ and $X_{[1]}$ is the mode-1 unfolding of the tensor $\mathcal{X}$. The inner product of two tensors $\mathcal{X}, \mathcal{Y} \in \mathbb{H}^{I_1 \times \cdots \times I_N}$ is defined by
$$\langle \mathcal{X}, \mathcal{Y} \rangle = \sum_{i_1 = 1}^{I_1} \sum_{i_2 = 1}^{I_2} \cdots \sum_{i_N = 1}^{I_N} x_{i_1 i_2 \cdots i_N} \, \bar{y}_{i_1 i_2 \cdots i_N},$$
where $\bar{y}_{i_1 i_2 \cdots i_N}$ denotes the quaternion conjugate of $y_{i_1 i_2 \cdots i_N}$. If $\langle \mathcal{X}, \mathcal{Y} \rangle = 0$, we say that the tensors $\mathcal{X}$ and $\mathcal{Y}$ are orthogonal. The Frobenius norm of a tensor $\mathcal{X}$ is defined as $\|\mathcal{X}\| = \sqrt{\langle \mathcal{X}, \mathcal{X} \rangle}$.
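In implementations it is convenient to note a consequence of these definitions: writing a quaternion tensor in terms of its four real parts (as in the next paragraph), the squared Frobenius norm is the sum of the squared norms of the parts, since the cross terms cancel in the real component of $x \bar{x}$. A minimal MATLAB sketch (the helper names are ours):

```matlab
% Sketch: real inner product and the Frobenius norm of a quaternion tensor
% X = X1 + X2*i + X3*j + X4*k stored as four real arrays X1,...,X4;
% <X,X> = <X1,X1> + <X2,X2> + <X3,X3> + <X4,X4>.
ip    = @(U, V) sum(U(:) .* V(:));                 % real <U, V>
qnorm = @(X1, X2, X3, X4) sqrt(ip(X1,X1) + ip(X2,X2) + ip(X3,X3) + ip(X4,X4));
```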
For any $\mathcal{X} \in \mathbb{H}^{I_1 \times \cdots \times I_N}$, it is well known that $\mathcal{X}$ can be uniquely expressed as $\mathcal{X} = \mathcal{X}_1 + \mathcal{X}_2 \mathbf{i} + \mathcal{X}_3 \mathbf{j} + \mathcal{X}_4 \mathbf{k}$, where $\mathcal{X}_i \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, $i = 1, 2, 3, 4$. Next we define $n$-mode operators for the $\mathcal{X}_i$.
Let $A^{(n)} = A_1^{(n)} + A_2^{(n)} \mathbf{i} + A_3^{(n)} \mathbf{j} + A_4^{(n)} \mathbf{k}$, $B^{(n)} = B_1^{(n)} + B_2^{(n)} \mathbf{i} + B_3^{(n)} \mathbf{j} + B_4^{(n)} \mathbf{k} \in \mathbb{H}^{I_n \times I_n}$, where $A_i^{(n)}, B_i^{(n)} \in \mathbb{R}^{I_n \times I_n}$, $i = 1, 2, 3, 4$. For $\mathcal{W} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, we define
$$\begin{aligned} L_{A_i^{(n)}}(\mathcal{W}) &= \mathcal{W} \times_1 A_i^{(1)} + \mathcal{W} \times_2 A_i^{(2)} + \cdots + \mathcal{W} \times_N A_i^{(N)}, \quad i = 1, 2, 3, 4,\\ L_{B_i^{(n)}}(\mathcal{W}) &= \mathcal{W} \times_1 B_i^{(1)} + \mathcal{W} \times_2 B_i^{(2)} + \cdots + \mathcal{W} \times_N B_i^{(N)}, \quad i = 1, 2, 3, 4. \end{aligned}$$
Next, replacing $\mathcal{W}$ in the above equations by the $\mathcal{X}_i$, we define the following notation:
$$\begin{aligned} \Gamma_1[\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3, \mathcal{X}_4] &= L_{A_1^{(n)}}(\mathcal{X}_1) - L_{A_2^{(n)}}(\mathcal{X}_2) - L_{A_3^{(n)}}(\mathcal{X}_3) - L_{A_4^{(n)}}(\mathcal{X}_4),\\ \Gamma_2[\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3, \mathcal{X}_4] &= L_{A_2^{(n)}}(\mathcal{X}_1) + L_{A_1^{(n)}}(\mathcal{X}_2) + L_{A_4^{(n)}}(\mathcal{X}_3) - L_{A_3^{(n)}}(\mathcal{X}_4),\\ \Gamma_3[\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3, \mathcal{X}_4] &= L_{A_3^{(n)}}(\mathcal{X}_1) - L_{A_4^{(n)}}(\mathcal{X}_2) + L_{A_1^{(n)}}(\mathcal{X}_3) + L_{A_2^{(n)}}(\mathcal{X}_4),\\ \Gamma_4[\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3, \mathcal{X}_4] &= L_{A_4^{(n)}}(\mathcal{X}_1) + L_{A_3^{(n)}}(\mathcal{X}_2) - L_{A_2^{(n)}}(\mathcal{X}_3) + L_{A_1^{(n)}}(\mathcal{X}_4), \end{aligned} \tag{5}$$
$$\begin{aligned} \Phi_1[\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3, \mathcal{X}_4] &= L_{B_1^{(n)}}(\mathcal{X}_1) - L_{B_2^{(n)}}(\mathcal{X}_2) - L_{B_3^{(n)}}(\mathcal{X}_3) - L_{B_4^{(n)}}(\mathcal{X}_4),\\ \Phi_2[\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3, \mathcal{X}_4] &= L_{B_2^{(n)}}(\mathcal{X}_1) + L_{B_1^{(n)}}(\mathcal{X}_2) + L_{B_4^{(n)}}(\mathcal{X}_3) - L_{B_3^{(n)}}(\mathcal{X}_4),\\ \Phi_3[\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3, \mathcal{X}_4] &= L_{B_3^{(n)}}(\mathcal{X}_1) - L_{B_4^{(n)}}(\mathcal{X}_2) + L_{B_1^{(n)}}(\mathcal{X}_3) + L_{B_2^{(n)}}(\mathcal{X}_4),\\ \Phi_4[\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3, \mathcal{X}_4] &= L_{B_4^{(n)}}(\mathcal{X}_1) + L_{B_3^{(n)}}(\mathcal{X}_2) - L_{B_2^{(n)}}(\mathcal{X}_3) + L_{B_1^{(n)}}(\mathcal{X}_4). \end{aligned} \tag{6}$$
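As a concrete illustration of (5) and (6), the operator $L_{A_i^{(n)}}$ and, say, $\Gamma_1$ can be realized in MATLAB using the nmode_product sketch from Section 1 (again, the helper names are ours, not from the paper):

```matlab
% Sketch: L_A(W) = W x_1 A{1} + ... + W x_N A{N} for a cell array A of
% real coefficient matrices, as in the definitions above.
function Y = L_op(W, A)
    Y = zeros(size(W));
    for n = 1:numel(A)
        Y = Y + nmode_product(W, A{n}, n);
    end
end
% Usage for Gamma_1 of (5), with A1,...,A4 cell arrays of the parts A_i^{(n)}:
%   G1 = L_op(X1,A1) - L_op(X2,A2) - L_op(X3,A3) - L_op(X4,A4);
```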
The following lemma shows that the quaternion tensor Equation (2) is equivalent to a system of four real tensor equations.
Lemma 1.
In Equation (2), we assume that $A^{(n)} = A_1^{(n)} + A_2^{(n)} \mathbf{i} + A_3^{(n)} \mathbf{j} + A_4^{(n)} \mathbf{k}$, $B^{(n)} = B_1^{(n)} + B_2^{(n)} \mathbf{i} + B_3^{(n)} \mathbf{j} + B_4^{(n)} \mathbf{k} \in \mathbb{H}^{I_n \times I_n}$, $n = 1, 2, \ldots, N$, and $\mathcal{C} = \mathcal{C}_1 + \mathcal{C}_2 \mathbf{i} + \mathcal{C}_3 \mathbf{j} + \mathcal{C}_4 \mathbf{k}$, $\mathcal{X} = \mathcal{X}_1 + \mathcal{X}_2 \mathbf{i} + \mathcal{X}_3 \mathbf{j} + \mathcal{X}_4 \mathbf{k}$, $\mathcal{Y} = \mathcal{Y}_1 + \mathcal{Y}_2 \mathbf{i} + \mathcal{Y}_3 \mathbf{j} + \mathcal{Y}_4 \mathbf{k} \in \mathbb{H}^{I_1 \times \cdots \times I_N}$. Then the quaternion Sylvester tensor Equation (2) is equivalent to the following system of real tensor equations
$$\begin{cases} \Gamma_1[\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3, \mathcal{X}_4] + \Phi_1[\mathcal{Y}_1, \mathcal{Y}_2, \mathcal{Y}_3, \mathcal{Y}_4] = \mathcal{C}_1,\\ \Gamma_2[\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3, \mathcal{X}_4] + \Phi_2[\mathcal{Y}_1, \mathcal{Y}_2, \mathcal{Y}_3, \mathcal{Y}_4] = \mathcal{C}_2,\\ \Gamma_3[\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3, \mathcal{X}_4] + \Phi_3[\mathcal{Y}_1, \mathcal{Y}_2, \mathcal{Y}_3, \mathcal{Y}_4] = \mathcal{C}_3,\\ \Gamma_4[\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3, \mathcal{X}_4] + \Phi_4[\mathcal{Y}_1, \mathcal{Y}_2, \mathcal{Y}_3, \mathcal{Y}_4] = \mathcal{C}_4, \end{cases} \tag{7}$$
where $\Gamma_i[\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3, \mathcal{X}_4]$ and $\Phi_i[\mathcal{Y}_1, \mathcal{Y}_2, \mathcal{Y}_3, \mathcal{Y}_4]$ $(i = 1, 2, 3, 4)$ are defined by (5) and (6). Furthermore, the system of real tensor Equations (7) is equivalent to the following linear system
$$[M_A, M_B]\, z = c, \tag{8}$$
where
$$M_A = \begin{pmatrix} \operatorname{Kro}(L_{A_1^{(n)}}) & -\operatorname{Kro}(L_{A_2^{(n)}}) & -\operatorname{Kro}(L_{A_3^{(n)}}) & -\operatorname{Kro}(L_{A_4^{(n)}})\\ \operatorname{Kro}(L_{A_2^{(n)}}) & \operatorname{Kro}(L_{A_1^{(n)}}) & \operatorname{Kro}(L_{A_4^{(n)}}) & -\operatorname{Kro}(L_{A_3^{(n)}})\\ \operatorname{Kro}(L_{A_3^{(n)}}) & -\operatorname{Kro}(L_{A_4^{(n)}}) & \operatorname{Kro}(L_{A_1^{(n)}}) & \operatorname{Kro}(L_{A_2^{(n)}})\\ \operatorname{Kro}(L_{A_4^{(n)}}) & \operatorname{Kro}(L_{A_3^{(n)}}) & -\operatorname{Kro}(L_{A_2^{(n)}}) & \operatorname{Kro}(L_{A_1^{(n)}}) \end{pmatrix},$$
$$M_B = \begin{pmatrix} \operatorname{Kro}(L_{B_1^{(n)}}) & -\operatorname{Kro}(L_{B_2^{(n)}}) & -\operatorname{Kro}(L_{B_3^{(n)}}) & -\operatorname{Kro}(L_{B_4^{(n)}})\\ \operatorname{Kro}(L_{B_2^{(n)}}) & \operatorname{Kro}(L_{B_1^{(n)}}) & \operatorname{Kro}(L_{B_4^{(n)}}) & -\operatorname{Kro}(L_{B_3^{(n)}})\\ \operatorname{Kro}(L_{B_3^{(n)}}) & -\operatorname{Kro}(L_{B_4^{(n)}}) & \operatorname{Kro}(L_{B_1^{(n)}}) & \operatorname{Kro}(L_{B_2^{(n)}})\\ \operatorname{Kro}(L_{B_4^{(n)}}) & \operatorname{Kro}(L_{B_3^{(n)}}) & -\operatorname{Kro}(L_{B_2^{(n)}}) & \operatorname{Kro}(L_{B_1^{(n)}}) \end{pmatrix},$$
$$z = \begin{pmatrix} \operatorname{vec}(\mathcal{X}_1)\\ \operatorname{vec}(\mathcal{X}_2)\\ \operatorname{vec}(\mathcal{X}_3)\\ \operatorname{vec}(\mathcal{X}_4)\\ \operatorname{vec}(\mathcal{Y}_1)\\ \operatorname{vec}(\mathcal{Y}_2)\\ \operatorname{vec}(\mathcal{Y}_3)\\ \operatorname{vec}(\mathcal{Y}_4) \end{pmatrix}, \qquad c = \begin{pmatrix} \operatorname{vec}(\mathcal{C}_1)\\ \operatorname{vec}(\mathcal{C}_2)\\ \operatorname{vec}(\mathcal{C}_3)\\ \operatorname{vec}(\mathcal{C}_4) \end{pmatrix},$$
$$\begin{aligned} \operatorname{Kro}(L_{A_i^{(n)}}) &= \sum_{n=1}^{N} I_{I_N} \otimes \cdots \otimes I_{I_{n+1}} \otimes A_i^{(n)} \otimes I_{I_{n-1}} \otimes \cdots \otimes I_{I_1}, \quad i = 1, 2, 3, 4,\\ \operatorname{Kro}(L_{B_i^{(n)}}) &= \sum_{n=1}^{N} I_{I_N} \otimes \cdots \otimes I_{I_{n+1}} \otimes B_i^{(n)} \otimes I_{I_{n-1}} \otimes \cdots \otimes I_{I_1}, \quad i = 1, 2, 3, 4, \end{aligned}$$
and $I_n$ stands for the identity matrix of order $n$.
Proof of Lemma 1.
We apply the definition of the $n$-mode product of a quaternion tensor to (2):
$$\begin{aligned} &\sum_{n=1}^{N} \mathcal{X} \times_n A^{(n)} + \mathcal{Y} \times_n B^{(n)}\\ &= \sum_{n=1}^{N} (\mathcal{X}_1 + \mathcal{X}_2 \mathbf{i} + \mathcal{X}_3 \mathbf{j} + \mathcal{X}_4 \mathbf{k}) \times_n (A_1^{(n)} + A_2^{(n)} \mathbf{i} + A_3^{(n)} \mathbf{j} + A_4^{(n)} \mathbf{k}) + (\mathcal{Y}_1 + \mathcal{Y}_2 \mathbf{i} + \mathcal{Y}_3 \mathbf{j} + \mathcal{Y}_4 \mathbf{k}) \times_n (B_1^{(n)} + B_2^{(n)} \mathbf{i} + B_3^{(n)} \mathbf{j} + B_4^{(n)} \mathbf{k})\\ &= \sum_{n=1}^{N} \left( \mathcal{X}_1 \times_n A_1^{(n)} - \mathcal{X}_2 \times_n A_2^{(n)} - \mathcal{X}_3 \times_n A_3^{(n)} - \mathcal{X}_4 \times_n A_4^{(n)} + \mathcal{Y}_1 \times_n B_1^{(n)} - \mathcal{Y}_2 \times_n B_2^{(n)} - \mathcal{Y}_3 \times_n B_3^{(n)} - \mathcal{Y}_4 \times_n B_4^{(n)} \right)\\ &\quad + \sum_{n=1}^{N} \left( \mathcal{X}_1 \times_n A_2^{(n)} + \mathcal{X}_2 \times_n A_1^{(n)} + \mathcal{X}_3 \times_n A_4^{(n)} - \mathcal{X}_4 \times_n A_3^{(n)} + \mathcal{Y}_1 \times_n B_2^{(n)} + \mathcal{Y}_2 \times_n B_1^{(n)} + \mathcal{Y}_3 \times_n B_4^{(n)} - \mathcal{Y}_4 \times_n B_3^{(n)} \right) \mathbf{i}\\ &\quad + \sum_{n=1}^{N} \left( \mathcal{X}_1 \times_n A_3^{(n)} - \mathcal{X}_2 \times_n A_4^{(n)} + \mathcal{X}_3 \times_n A_1^{(n)} + \mathcal{X}_4 \times_n A_2^{(n)} + \mathcal{Y}_1 \times_n B_3^{(n)} - \mathcal{Y}_2 \times_n B_4^{(n)} + \mathcal{Y}_3 \times_n B_1^{(n)} + \mathcal{Y}_4 \times_n B_2^{(n)} \right) \mathbf{j}\\ &\quad + \sum_{n=1}^{N} \left( \mathcal{X}_1 \times_n A_4^{(n)} + \mathcal{X}_2 \times_n A_3^{(n)} - \mathcal{X}_3 \times_n A_2^{(n)} + \mathcal{X}_4 \times_n A_1^{(n)} + \mathcal{Y}_1 \times_n B_4^{(n)} + \mathcal{Y}_2 \times_n B_3^{(n)} - \mathcal{Y}_3 \times_n B_2^{(n)} + \mathcal{Y}_4 \times_n B_1^{(n)} \right) \mathbf{k}\\ &= \mathcal{C}_1 + \mathcal{C}_2 \mathbf{i} + \mathcal{C}_3 \mathbf{j} + \mathcal{C}_4 \mathbf{k}. \end{aligned}$$
By the definitions of $\Gamma_i$ and $\Phi_i$, the Equations (7) hold. To show (8), we apply the operator $\operatorname{vec}$ to $\Gamma_1[\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3, \mathcal{X}_4]$ and $\Phi_1[\mathcal{Y}_1, \mathcal{Y}_2, \mathcal{Y}_3, \mathcal{Y}_4]$, that is,
$$\begin{aligned} \operatorname{vec}\left( \Gamma_1[\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3, \mathcal{X}_4] \right) &= \operatorname{vec}\left( L_{A_1^{(n)}}(\mathcal{X}_1) - L_{A_2^{(n)}}(\mathcal{X}_2) - L_{A_3^{(n)}}(\mathcal{X}_3) - L_{A_4^{(n)}}(\mathcal{X}_4) \right)\\ &= \operatorname{vec}\left( L_{A_1^{(n)}}(\mathcal{X}_1) \right) - \operatorname{vec}\left( L_{A_2^{(n)}}(\mathcal{X}_2) \right) - \operatorname{vec}\left( L_{A_3^{(n)}}(\mathcal{X}_3) \right) - \operatorname{vec}\left( L_{A_4^{(n)}}(\mathcal{X}_4) \right)\\ &= \operatorname{Kro}(L_{A_1^{(n)}}) \operatorname{vec}(\mathcal{X}_1) - \operatorname{Kro}(L_{A_2^{(n)}}) \operatorname{vec}(\mathcal{X}_2) - \operatorname{Kro}(L_{A_3^{(n)}}) \operatorname{vec}(\mathcal{X}_3) - \operatorname{Kro}(L_{A_4^{(n)}}) \operatorname{vec}(\mathcal{X}_4), \end{aligned}$$
$$\begin{aligned} \operatorname{vec}\left( \Phi_1[\mathcal{Y}_1, \mathcal{Y}_2, \mathcal{Y}_3, \mathcal{Y}_4] \right) &= \operatorname{vec}\left( L_{B_1^{(n)}}(\mathcal{Y}_1) - L_{B_2^{(n)}}(\mathcal{Y}_2) - L_{B_3^{(n)}}(\mathcal{Y}_3) - L_{B_4^{(n)}}(\mathcal{Y}_4) \right)\\ &= \operatorname{vec}\left( L_{B_1^{(n)}}(\mathcal{Y}_1) \right) - \operatorname{vec}\left( L_{B_2^{(n)}}(\mathcal{Y}_2) \right) - \operatorname{vec}\left( L_{B_3^{(n)}}(\mathcal{Y}_3) \right) - \operatorname{vec}\left( L_{B_4^{(n)}}(\mathcal{Y}_4) \right)\\ &= \operatorname{Kro}(L_{B_1^{(n)}}) \operatorname{vec}(\mathcal{Y}_1) - \operatorname{Kro}(L_{B_2^{(n)}}) \operatorname{vec}(\mathcal{Y}_2) - \operatorname{Kro}(L_{B_3^{(n)}}) \operatorname{vec}(\mathcal{Y}_3) - \operatorname{Kro}(L_{B_4^{(n)}}) \operatorname{vec}(\mathcal{Y}_4). \end{aligned}$$
Similarly, we have the following results for the rest of Γ i ’s and Φ i ’s:
$$\begin{aligned} \operatorname{vec}\left( \Gamma_2[\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3, \mathcal{X}_4] \right) &= \operatorname{Kro}(L_{A_2^{(n)}}) \operatorname{vec}(\mathcal{X}_1) + \operatorname{Kro}(L_{A_1^{(n)}}) \operatorname{vec}(\mathcal{X}_2) + \operatorname{Kro}(L_{A_4^{(n)}}) \operatorname{vec}(\mathcal{X}_3) - \operatorname{Kro}(L_{A_3^{(n)}}) \operatorname{vec}(\mathcal{X}_4),\\ \operatorname{vec}\left( \Phi_2[\mathcal{Y}_1, \mathcal{Y}_2, \mathcal{Y}_3, \mathcal{Y}_4] \right) &= \operatorname{Kro}(L_{B_2^{(n)}}) \operatorname{vec}(\mathcal{Y}_1) + \operatorname{Kro}(L_{B_1^{(n)}}) \operatorname{vec}(\mathcal{Y}_2) + \operatorname{Kro}(L_{B_4^{(n)}}) \operatorname{vec}(\mathcal{Y}_3) - \operatorname{Kro}(L_{B_3^{(n)}}) \operatorname{vec}(\mathcal{Y}_4),\\ \operatorname{vec}\left( \Gamma_3[\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3, \mathcal{X}_4] \right) &= \operatorname{Kro}(L_{A_3^{(n)}}) \operatorname{vec}(\mathcal{X}_1) - \operatorname{Kro}(L_{A_4^{(n)}}) \operatorname{vec}(\mathcal{X}_2) + \operatorname{Kro}(L_{A_1^{(n)}}) \operatorname{vec}(\mathcal{X}_3) + \operatorname{Kro}(L_{A_2^{(n)}}) \operatorname{vec}(\mathcal{X}_4),\\ \operatorname{vec}\left( \Phi_3[\mathcal{Y}_1, \mathcal{Y}_2, \mathcal{Y}_3, \mathcal{Y}_4] \right) &= \operatorname{Kro}(L_{B_3^{(n)}}) \operatorname{vec}(\mathcal{Y}_1) - \operatorname{Kro}(L_{B_4^{(n)}}) \operatorname{vec}(\mathcal{Y}_2) + \operatorname{Kro}(L_{B_1^{(n)}}) \operatorname{vec}(\mathcal{Y}_3) + \operatorname{Kro}(L_{B_2^{(n)}}) \operatorname{vec}(\mathcal{Y}_4),\\ \operatorname{vec}\left( \Gamma_4[\mathcal{X}_1, \mathcal{X}_2, \mathcal{X}_3, \mathcal{X}_4] \right) &= \operatorname{Kro}(L_{A_4^{(n)}}) \operatorname{vec}(\mathcal{X}_1) + \operatorname{Kro}(L_{A_3^{(n)}}) \operatorname{vec}(\mathcal{X}_2) - \operatorname{Kro}(L_{A_2^{(n)}}) \operatorname{vec}(\mathcal{X}_3) + \operatorname{Kro}(L_{A_1^{(n)}}) \operatorname{vec}(\mathcal{X}_4),\\ \operatorname{vec}\left( \Phi_4[\mathcal{Y}_1, \mathcal{Y}_2, \mathcal{Y}_3, \mathcal{Y}_4] \right) &= \operatorname{Kro}(L_{B_4^{(n)}}) \operatorname{vec}(\mathcal{Y}_1) + \operatorname{Kro}(L_{B_3^{(n)}}) \operatorname{vec}(\mathcal{Y}_2) - \operatorname{Kro}(L_{B_2^{(n)}}) \operatorname{vec}(\mathcal{Y}_3) + \operatorname{Kro}(L_{B_1^{(n)}}) \operatorname{vec}(\mathcal{Y}_4). \end{aligned}$$
Stacking the above equations, we obtain the system (8). □
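A quick numerical sanity check of the Kronecker representation in Lemma 1 is easy to set up. The sketch below (hypothetical sizes, using the L_op and nmode_product helpers introduced earlier) verifies that $\operatorname{vec}(L_{A}(\mathcal{X})) = \operatorname{Kro}(L_{A}) \operatorname{vec}(\mathcal{X})$; note that in MATLAB's column-major layout $\operatorname{vec}(\mathcal{X}) = \operatorname{vec}(X_{[1]})$ is simply X(:).

```matlab
% Sketch: build Kro(L_A) = sum_n I x ... x A{n} x ... x I and compare with L_op.
I = [3, 4, 5];  N = 3;                   % hypothetical sizes
A = arrayfun(@(k) randn(I(k)), 1:N, 'UniformOutput', false);
X = randn(I);
KroL = zeros(prod(I));
for n = 1:N
    term = 1;
    for m = 1:N                          % mode-1 factor ends up rightmost
        if m == n, F = A{m}; else, F = eye(I(m)); end
        term = kron(F, term);
    end
    KroL = KroL + term;
end
lhs = L_op(X, A);
disp(norm(lhs(:) - KroL * X(:)))         % should be of the order of 1e-13
```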
Lemma 2.
([21,40]) Suppose that $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, and the linear matrix equation $Ax = b$ has a solution $\tilde{x} \in \mathcal{R}(A^T)$. Then $\tilde{x}$ is the unique minimal norm solution of $Ax = b$.
From Lemmas 1 and 2, it is easy to see that the unique minimal Frobenius norm solution of Equation (2) can be described as follows:
Theorem 1.
If the matrix Equation (8) has a solution $\tilde{z} \in \mathcal{R}([M_A, M_B]^T)$, then $\tilde{z}$ is the unique minimal norm solution of the matrix Equation (8), and hence the tensor Equation (2) has a unique minimal Frobenius norm solution.
Given fixed matrices $A^{(n)} \in \mathbb{R}^{I_n \times I_n}$, $n = 1, 2, \ldots, N$, we define the following linear operator:
$$L_{A^{(n)}}(\mathcal{X}) = \mathcal{X} \times_1 A^{(1)} + \mathcal{X} \times_2 A^{(2)} + \cdots + \mathcal{X} \times_N A^{(N)}, \quad \text{for any } \mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}.$$
Using the property $\langle \mathcal{X}, \mathcal{Y} \times_n A^{(n)} \rangle = \langle \mathcal{X} \times_n (A^{(n)})^T, \mathcal{Y} \rangle$ from [26], it is easy to prove the following lemma.
Lemma 3.
Let $A^{(n)} \in \mathbb{R}^{I_n \times I_n}$, $n = 1, 2, \ldots, N$, and $\mathcal{X}, \mathcal{Y} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$. Then
$$\langle L_{A^{(n)}}(\mathcal{X}), \mathcal{Y} \rangle = \langle \mathcal{X}, L_{A^{(n)}}^*(\mathcal{Y}) \rangle,$$
where
$$L_{A^{(n)}}^*(\mathcal{Y}) = \mathcal{Y} \times_1 (A^{(1)})^T + \mathcal{Y} \times_2 (A^{(2)})^T + \cdots + \mathcal{Y} \times_N (A^{(N)})^T.$$
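Lemma 3 can likewise be checked numerically; the following sketch (hypothetical sizes, helpers as above) confirms that transposing the coefficient matrices implements the adjoint:

```matlab
% Sketch: verify <L_A(X), Y> = <X, L_A^*(Y)>, with L_A^* built from transposes.
I = [3, 4, 5];  N = 3;                   % hypothetical sizes
A  = arrayfun(@(k) randn(I(k)), 1:N, 'UniformOutput', false);
At = cellfun(@transpose, A, 'UniformOutput', false);
X = randn(I);  Y = randn(I);
ip = @(U, V) sum(U(:) .* V(:));
disp(ip(L_op(X, A), Y) - ip(X, L_op(Y, At)))   % ~0 up to round-off
```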
Clearly, L A ( n ) defined above is a linear mapping. The following lemma provides the uniqueness of the dual mapping for these kinds of linear mappings.
Lemma 4.
([49]) Let $\mathcal{N}$ be a linear mapping from the tensor space $\mathbb{R}^{I_1 \times \cdots \times I_N}$ to the tensor space $\mathbb{R}^{J_1 \times \cdots \times J_N}$. Then there exists a unique linear mapping $\mathcal{M}$ from $\mathbb{R}^{J_1 \times \cdots \times J_N}$ to $\mathbb{R}^{I_1 \times \cdots \times I_N}$ such that, for any tensors $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ and $\mathcal{Y} \in \mathbb{R}^{J_1 \times \cdots \times J_N}$,
$$\langle \mathcal{N}(\mathcal{X}), \mathcal{Y} \rangle = \langle \mathcal{X}, \mathcal{M}(\mathcal{Y}) \rangle.$$
Finally, we use the linear operators $L$ and $L^*$ to describe the inner products involving $\Gamma_i$ and $\Phi_i$, which we will use in the next section.
Lemma 5.
Let $\Gamma_i[\mathcal{Z}_1, \mathcal{Z}_2, \mathcal{Z}_3, \mathcal{Z}_4]$ and $\Phi_i[\mathcal{Z}_1, \mathcal{Z}_2, \mathcal{Z}_3, \mathcal{Z}_4]$ $(i = 1, 2, 3, 4)$ be defined by (5) and (6), and let $\mathcal{W}_i \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ $(i = 1, 2, 3, 4)$. Then
$$\sum_{i=1}^{4} \langle \Gamma_i[\mathcal{Z}_1, \mathcal{Z}_2, \mathcal{Z}_3, \mathcal{Z}_4], \mathcal{W}_i \rangle = \sum_{i=1}^{4} \langle \mathcal{Z}_i, \Gamma_i^*[\mathcal{W}_1, \mathcal{W}_2, \mathcal{W}_3, \mathcal{W}_4] \rangle, \qquad \sum_{i=1}^{4} \langle \Phi_i[\mathcal{Z}_1, \mathcal{Z}_2, \mathcal{Z}_3, \mathcal{Z}_4], \mathcal{W}_i \rangle = \sum_{i=1}^{4} \langle \mathcal{Z}_i, \Phi_i^*[\mathcal{W}_1, \mathcal{W}_2, \mathcal{W}_3, \mathcal{W}_4] \rangle,$$
where
$$\begin{aligned} \Gamma_1^*[\mathcal{Z}_1, \mathcal{Z}_2, \mathcal{Z}_3, \mathcal{Z}_4] &= L_{A_1^{(n)}}^*(\mathcal{Z}_1) + L_{A_2^{(n)}}^*(\mathcal{Z}_2) + L_{A_3^{(n)}}^*(\mathcal{Z}_3) + L_{A_4^{(n)}}^*(\mathcal{Z}_4),\\ \Gamma_2^*[\mathcal{Z}_1, \mathcal{Z}_2, \mathcal{Z}_3, \mathcal{Z}_4] &= -L_{A_2^{(n)}}^*(\mathcal{Z}_1) + L_{A_1^{(n)}}^*(\mathcal{Z}_2) - L_{A_4^{(n)}}^*(\mathcal{Z}_3) + L_{A_3^{(n)}}^*(\mathcal{Z}_4),\\ \Gamma_3^*[\mathcal{Z}_1, \mathcal{Z}_2, \mathcal{Z}_3, \mathcal{Z}_4] &= -L_{A_3^{(n)}}^*(\mathcal{Z}_1) + L_{A_4^{(n)}}^*(\mathcal{Z}_2) + L_{A_1^{(n)}}^*(\mathcal{Z}_3) - L_{A_2^{(n)}}^*(\mathcal{Z}_4),\\ \Gamma_4^*[\mathcal{Z}_1, \mathcal{Z}_2, \mathcal{Z}_3, \mathcal{Z}_4] &= -L_{A_4^{(n)}}^*(\mathcal{Z}_1) - L_{A_3^{(n)}}^*(\mathcal{Z}_2) + L_{A_2^{(n)}}^*(\mathcal{Z}_3) + L_{A_1^{(n)}}^*(\mathcal{Z}_4), \end{aligned} \tag{10}$$
$$\begin{aligned} \Phi_1^*[\mathcal{Z}_1, \mathcal{Z}_2, \mathcal{Z}_3, \mathcal{Z}_4] &= L_{B_1^{(n)}}^*(\mathcal{Z}_1) + L_{B_2^{(n)}}^*(\mathcal{Z}_2) + L_{B_3^{(n)}}^*(\mathcal{Z}_3) + L_{B_4^{(n)}}^*(\mathcal{Z}_4),\\ \Phi_2^*[\mathcal{Z}_1, \mathcal{Z}_2, \mathcal{Z}_3, \mathcal{Z}_4] &= -L_{B_2^{(n)}}^*(\mathcal{Z}_1) + L_{B_1^{(n)}}^*(\mathcal{Z}_2) - L_{B_4^{(n)}}^*(\mathcal{Z}_3) + L_{B_3^{(n)}}^*(\mathcal{Z}_4),\\ \Phi_3^*[\mathcal{Z}_1, \mathcal{Z}_2, \mathcal{Z}_3, \mathcal{Z}_4] &= -L_{B_3^{(n)}}^*(\mathcal{Z}_1) + L_{B_4^{(n)}}^*(\mathcal{Z}_2) + L_{B_1^{(n)}}^*(\mathcal{Z}_3) - L_{B_2^{(n)}}^*(\mathcal{Z}_4),\\ \Phi_4^*[\mathcal{Z}_1, \mathcal{Z}_2, \mathcal{Z}_3, \mathcal{Z}_4] &= -L_{B_4^{(n)}}^*(\mathcal{Z}_1) - L_{B_3^{(n)}}^*(\mathcal{Z}_2) + L_{B_2^{(n)}}^*(\mathcal{Z}_3) + L_{B_1^{(n)}}^*(\mathcal{Z}_4), \end{aligned} \tag{11}$$
$$\begin{aligned} L_{A_i^{(n)}}^*(\mathcal{X}) &= \mathcal{X} \times_1 (A_i^{(1)})^T + \mathcal{X} \times_2 (A_i^{(2)})^T + \cdots + \mathcal{X} \times_N (A_i^{(N)})^T, \quad i = 1, 2, 3, 4,\\ L_{B_i^{(n)}}^*(\mathcal{X}) &= \mathcal{X} \times_1 (B_i^{(1)})^T + \mathcal{X} \times_2 (B_i^{(2)})^T + \cdots + \mathcal{X} \times_N (B_i^{(N)})^T, \quad i = 1, 2, 3, 4. \end{aligned}$$
Proof of Lemma 5.
For the first equality, we split $\sum_{i=1}^{4} \langle \Gamma_i[\mathcal{Z}_1, \mathcal{Z}_2, \mathcal{Z}_3, \mathcal{Z}_4], \mathcal{W}_i \rangle$ into four parts by $i$, and then apply Lemma 3 to each part, that is,
$$\begin{aligned} \langle \Gamma_1[\mathcal{Z}_1, \mathcal{Z}_2, \mathcal{Z}_3, \mathcal{Z}_4], \mathcal{W}_1 \rangle &= \langle L_{A_1^{(n)}}(\mathcal{Z}_1), \mathcal{W}_1 \rangle - \langle L_{A_2^{(n)}}(\mathcal{Z}_2), \mathcal{W}_1 \rangle - \langle L_{A_3^{(n)}}(\mathcal{Z}_3), \mathcal{W}_1 \rangle - \langle L_{A_4^{(n)}}(\mathcal{Z}_4), \mathcal{W}_1 \rangle\\ &= \langle \mathcal{Z}_1, L_{A_1^{(n)}}^*(\mathcal{W}_1) \rangle - \langle \mathcal{Z}_2, L_{A_2^{(n)}}^*(\mathcal{W}_1) \rangle - \langle \mathcal{Z}_3, L_{A_3^{(n)}}^*(\mathcal{W}_1) \rangle - \langle \mathcal{Z}_4, L_{A_4^{(n)}}^*(\mathcal{W}_1) \rangle,\\ \langle \Gamma_2[\mathcal{Z}_1, \mathcal{Z}_2, \mathcal{Z}_3, \mathcal{Z}_4], \mathcal{W}_2 \rangle &= \langle L_{A_2^{(n)}}(\mathcal{Z}_1), \mathcal{W}_2 \rangle + \langle L_{A_1^{(n)}}(\mathcal{Z}_2), \mathcal{W}_2 \rangle + \langle L_{A_4^{(n)}}(\mathcal{Z}_3), \mathcal{W}_2 \rangle - \langle L_{A_3^{(n)}}(\mathcal{Z}_4), \mathcal{W}_2 \rangle\\ &= \langle \mathcal{Z}_1, L_{A_2^{(n)}}^*(\mathcal{W}_2) \rangle + \langle \mathcal{Z}_2, L_{A_1^{(n)}}^*(\mathcal{W}_2) \rangle + \langle \mathcal{Z}_3, L_{A_4^{(n)}}^*(\mathcal{W}_2) \rangle - \langle \mathcal{Z}_4, L_{A_3^{(n)}}^*(\mathcal{W}_2) \rangle,\\ \langle \Gamma_3[\mathcal{Z}_1, \mathcal{Z}_2, \mathcal{Z}_3, \mathcal{Z}_4], \mathcal{W}_3 \rangle &= \langle L_{A_3^{(n)}}(\mathcal{Z}_1), \mathcal{W}_3 \rangle - \langle L_{A_4^{(n)}}(\mathcal{Z}_2), \mathcal{W}_3 \rangle + \langle L_{A_1^{(n)}}(\mathcal{Z}_3), \mathcal{W}_3 \rangle + \langle L_{A_2^{(n)}}(\mathcal{Z}_4), \mathcal{W}_3 \rangle\\ &= \langle \mathcal{Z}_1, L_{A_3^{(n)}}^*(\mathcal{W}_3) \rangle - \langle \mathcal{Z}_2, L_{A_4^{(n)}}^*(\mathcal{W}_3) \rangle + \langle \mathcal{Z}_3, L_{A_1^{(n)}}^*(\mathcal{W}_3) \rangle + \langle \mathcal{Z}_4, L_{A_2^{(n)}}^*(\mathcal{W}_3) \rangle,\\ \langle \Gamma_4[\mathcal{Z}_1, \mathcal{Z}_2, \mathcal{Z}_3, \mathcal{Z}_4], \mathcal{W}_4 \rangle &= \langle L_{A_4^{(n)}}(\mathcal{Z}_1), \mathcal{W}_4 \rangle + \langle L_{A_3^{(n)}}(\mathcal{Z}_2), \mathcal{W}_4 \rangle - \langle L_{A_2^{(n)}}(\mathcal{Z}_3), \mathcal{W}_4 \rangle + \langle L_{A_1^{(n)}}(\mathcal{Z}_4), \mathcal{W}_4 \rangle\\ &= \langle \mathcal{Z}_1, L_{A_4^{(n)}}^*(\mathcal{W}_4) \rangle + \langle \mathcal{Z}_2, L_{A_3^{(n)}}^*(\mathcal{W}_4) \rangle - \langle \mathcal{Z}_3, L_{A_2^{(n)}}^*(\mathcal{W}_4) \rangle + \langle \mathcal{Z}_4, L_{A_1^{(n)}}^*(\mathcal{W}_4) \rangle. \end{aligned}$$
Adding up the above four parts, we obtain
$$\sum_{i=1}^{4} \langle \Gamma_i[\mathcal{Z}_1, \mathcal{Z}_2, \mathcal{Z}_3, \mathcal{Z}_4], \mathcal{W}_i \rangle = \sum_{i=1}^{4} \langle \mathcal{Z}_i, \Gamma_i^*[\mathcal{W}_1, \mathcal{W}_2, \mathcal{W}_3, \mathcal{W}_4] \rangle,$$
where the $\Gamma_i^*[\mathcal{W}_1, \mathcal{W}_2, \mathcal{W}_3, \mathcal{W}_4]$ are defined by (10). By a similar process, we get the second equality
$$\sum_{i=1}^{4} \langle \Phi_i[\mathcal{Z}_1, \mathcal{Z}_2, \mathcal{Z}_3, \mathcal{Z}_4], \mathcal{W}_i \rangle = \sum_{i=1}^{4} \langle \mathcal{Z}_i, \Phi_i^*[\mathcal{W}_1, \mathcal{W}_2, \mathcal{W}_3, \mathcal{W}_4] \rangle,$$
where the $\Phi_i^*[\mathcal{W}_1, \mathcal{W}_2, \mathcal{W}_3, \mathcal{W}_4]$ are defined by (11). □

3. An Iterative Algorithm for Solving Problems 1.1 and 1.2

The purpose of this section is to propose an iterative algorithm for solving the Sylvester tensor Equation (2). It is well known that the classical bi-conjugate gradient (BiCG) method for solving nonsymmetric linear systems is feasible and efficient; see, e.g., [5,6,12,18,55]. We extend the BiCG method based on tensor format (BTF) to Equation (2) and discuss its convergence. By Lemma 1, the tensor Equation (2) and the linear system (8) have the same solution. However, the size of $[M_A, M_B]$ in Equation (8) is usually very large, which costs substantial computation time and memory. Beik et al. [8] showed that algorithms based on tensor format are in general more efficient than their classical forms. Motivated by these observations, we develop the following least squares algorithm based on tensor format for solving the tensor Equation (2):
Algorithm 1: The BiCG method based on tensor format for solving the quaternion tensor Equation (2). [The pseudocode appears only as an image in the original preprint.]
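Since the pseudocode of Algorithm 1 is reproduced only as an image above, the following MATLAB skeleton is offered as a hedged reconstruction of the iteration's shape: it implements the classical BiCG recurrences that are visible in the proof of Theorem 2 below ($\alpha(k)$ from $\langle \mathcal{R}, \mathcal{R}^* \rangle / \langle \mathcal{Q}, \mathcal{P}^* \rangle$, $\mathcal{R} \leftarrow \mathcal{R} - \alpha \mathcal{Q}$, $\mathcal{P} \leftarrow \mathcal{R} + \beta \mathcal{P}$) for a generic linear operator Aop with adjoint Atop. It is a sketch under these assumptions, not the authors' Algorithm 1 verbatim.

```matlab
% Hedged BiCG skeleton mirroring the recurrences in the proof of Theorem 2.
% Aop/Atop are function handles acting on (multi)arrays; x0 is the initial guess.
function [x, res] = bicg_skeleton(Aop, Atop, c, x0, maxit, tol)
    ip = @(u, v) sum(u(:) .* v(:));   % real inner product
    x  = x0;
    r  = c - Aop(x);                  % residual
    rs = r;                           % shadow (dual) residual
    p  = r;  ps = rs;                 % primal and dual search directions
    for k = 1:maxit
        q  = Aop(p);                  % Q(k)
        qs = Atop(ps);                % Q*(k)
        alpha = ip(r, rs) / ip(q, ps);
        x  = x + alpha * p;
        rn  = r  - alpha * q;
        rsn = rs - alpha * qs;
        beta = ip(rn, rsn) / ip(r, rs);
        p  = rn  + beta * p;
        ps = rsn + beta * ps;
        r = rn;  rs = rsn;
        res = norm(r(:));
        if res <= tol, break; end
    end
end
```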
Note that $\Gamma_i, \Phi_i$ and $\Gamma_i^*, \Phi_i^*$ are defined by (5), (6) and (10), (11), respectively. Next, we discuss some bi-orthogonality properties of Algorithm 1.
Theorem 2.
Assume that the iterative sequences $\{\mathcal{R}_i(k)\}$, $\{\mathcal{R}_i^*(k)\}$, $\{\mathcal{P}_i(k)\}$, $\{\mathcal{P}_i^*(k)\}$, $\{\mathcal{Q}_i^x(k)\}$, and $\{\mathcal{Q}_i^y(k)\}$ $(i = 1, 2, 3, 4)$ are generated by Algorithm 1. Then we have
$$\sum_{i=1}^{4} \langle \mathcal{R}_i(l), \mathcal{R}_i^*(m) \rangle = 0, \quad l \ne m, \tag{12}$$
$$\sum_{i=1}^{4} \langle \mathcal{Q}_i^x(l) + \mathcal{Q}_i^y(l), \mathcal{P}_i^*(m) \rangle = 0, \quad l \ne m, \tag{13}$$
$$\sum_{i=1}^{4} \langle \mathcal{R}_i(l), \mathcal{P}_i^*(m) \rangle = 0, \quad l > m. \tag{14}$$
Proof of Theorem 2.
We apply mathematical induction on $k$. Consider $1 \le m < l \le k$ first.

When $k = 2$, the conclusion holds, as the following calculations show:
$$\begin{aligned} \sum_{i=1}^{4} \langle \mathcal{R}_i(2), \mathcal{R}_i^*(1) \rangle &= \sum_{i=1}^{4} \left\langle \mathcal{R}_i(1) - \alpha(1) \left( \mathcal{Q}_i^x(1) + \mathcal{Q}_i^y(1) \right), \mathcal{R}_i^*(1) \right\rangle\\ &= \sum_{i=1}^{4} \langle \mathcal{R}_i(1), \mathcal{R}_i^*(1) \rangle - \frac{\sum_{i=1}^{4} \langle \mathcal{R}_i(1), \mathcal{R}_i^*(1) \rangle}{\sum_{i=1}^{4} \langle \mathcal{P}_i^*(1), \mathcal{Q}_i^x(1) + \mathcal{Q}_i^y(1) \rangle} \sum_{i=1}^{4} \langle \mathcal{P}_i^*(1), \mathcal{Q}_i^x(1) + \mathcal{Q}_i^y(1) \rangle = 0, \end{aligned}$$
and
$$\begin{aligned} \sum_{i=1}^{4} \langle \mathcal{Q}_i^x(2) + \mathcal{Q}_i^y(2), \mathcal{P}_i^*(1) \rangle &= \sum_{i=1}^{4} \langle \Gamma_i[\mathcal{P}_1(2), \mathcal{P}_2(2), \mathcal{P}_3(2), \mathcal{P}_4(2)], \mathcal{P}_i^*(1) \rangle + \sum_{i=1}^{4} \langle \Phi_i[\mathcal{P}_1(2), \mathcal{P}_2(2), \mathcal{P}_3(2), \mathcal{P}_4(2)], \mathcal{P}_i^*(1) \rangle\\ &= \sum_{i=1}^{4} \langle \mathcal{P}_i(2), \Gamma_i^*[\mathcal{P}_1^*(1), \mathcal{P}_2^*(1), \mathcal{P}_3^*(1), \mathcal{P}_4^*(1)] \rangle + \sum_{i=1}^{4} \langle \mathcal{P}_i(2), \Phi_i^*[\mathcal{P}_1^*(1), \mathcal{P}_2^*(1), \mathcal{P}_3^*(1), \mathcal{P}_4^*(1)] \rangle\\ &= \sum_{i=1}^{4} \left( \langle \mathcal{R}_i(2), \mathcal{Q}_i^{x*}(1) \rangle + \beta(1) \langle \mathcal{P}_i(1), \mathcal{Q}_i^{x*}(1) \rangle \right) + \sum_{i=1}^{4} \left( \langle \mathcal{R}_i(2), \mathcal{Q}_i^{y*}(1) \rangle + \beta(1) \langle \mathcal{P}_i(1), \mathcal{Q}_i^{y*}(1) \rangle \right)\\ &= \sum_{i=1}^{4} \langle \mathcal{R}_i(2), \mathcal{Q}_i^{x*}(1) \rangle + \frac{\sum_{i=1}^{4} \langle \mathcal{R}_i(2), \mathcal{R}_i^*(1) - \alpha(1)(\mathcal{Q}_i^{x*}(1) + \mathcal{Q}_i^{y*}(1)) \rangle}{\sum_{i=1}^{4} \langle \mathcal{R}_i(1), \mathcal{R}_i^*(1) \rangle} \sum_{i=1}^{4} \langle \mathcal{P}_i(1), \mathcal{Q}_i^{x*}(1) \rangle\\ &\quad + \sum_{i=1}^{4} \langle \mathcal{R}_i(2), \mathcal{Q}_i^{y*}(1) \rangle + \frac{\sum_{i=1}^{4} \langle \mathcal{R}_i(2), \mathcal{R}_i^*(1) - \alpha(1)(\mathcal{Q}_i^{x*}(1) + \mathcal{Q}_i^{y*}(1)) \rangle}{\sum_{i=1}^{4} \langle \mathcal{R}_i(1), \mathcal{R}_i^*(1) \rangle} \sum_{i=1}^{4} \langle \mathcal{P}_i(1), \mathcal{Q}_i^{y*}(1) \rangle\\ &= \sum_{i=1}^{4} \langle \mathcal{R}_i(2), \mathcal{Q}_i^{x*}(1) \rangle - \frac{\sum_{i=1}^{4} \langle \mathcal{R}_i(2), \mathcal{Q}_i^{x*}(1) + \mathcal{Q}_i^{y*}(1) \rangle}{\sum_{i=1}^{4} \langle \mathcal{P}_i^*(1), \mathcal{Q}_i^x(1) + \mathcal{Q}_i^y(1) \rangle} \sum_{i=1}^{4} \langle \mathcal{P}_i(1), \mathcal{Q}_i^{x*}(1) \rangle\\ &\quad + \sum_{i=1}^{4} \langle \mathcal{R}_i(2), \mathcal{Q}_i^{y*}(1) \rangle - \frac{\sum_{i=1}^{4} \langle \mathcal{R}_i(2), \mathcal{Q}_i^{x*}(1) + \mathcal{Q}_i^{y*}(1) \rangle}{\sum_{i=1}^{4} \langle \mathcal{P}_i^*(1), \mathcal{Q}_i^x(1) + \mathcal{Q}_i^y(1) \rangle} \sum_{i=1}^{4} \langle \mathcal{P}_i(1), \mathcal{Q}_i^{y*}(1) \rangle\\ &= \sum_{i=1}^{4} \langle \mathcal{R}_i(2), \mathcal{Q}_i^{x*}(1) + \mathcal{Q}_i^{y*}(1) \rangle - \frac{\sum_{i=1}^{4} \langle \mathcal{R}_i(2), \mathcal{Q}_i^{x*}(1) + \mathcal{Q}_i^{y*}(1) \rangle}{\sum_{i=1}^{4} \langle \mathcal{P}_i^*(1), \mathcal{Q}_i^x(1) + \mathcal{Q}_i^y(1) \rangle} \sum_{i=1}^{4} \langle \mathcal{P}_i^*(1), \mathcal{Q}_i^x(1) + \mathcal{Q}_i^y(1) \rangle = 0. \end{aligned}$$
It is also clear that $\sum_{i=1}^{4} \langle \mathcal{R}_i(2), \mathcal{P}_i^*(1) \rangle = 0$. Now assume that (12) and (13) hold for $1 \le m < l \le k$ $(k > 2)$. Then
$$\sum_{i=1}^{4} \langle \mathcal{R}_i(k+1), \mathcal{P}_i^*(m) \rangle = \sum_{i=1}^{4} \left\langle \mathcal{R}_i(k) - \alpha(k)\left( \mathcal{Q}_i^x(k) + \mathcal{Q}_i^y(k) \right), \mathcal{P}_i^*(m) \right\rangle = \sum_{i=1}^{4} \langle \mathcal{R}_i(k), \mathcal{P}_i^*(m) \rangle - \alpha(k) \sum_{i=1}^{4} \langle \mathcal{Q}_i^x(k) + \mathcal{Q}_i^y(k), \mathcal{P}_i^*(m) \rangle = 0,$$
and
$$\begin{aligned} \sum_{i=1}^{4} \langle \mathcal{R}_i(k+1), \mathcal{P}_i^*(k) \rangle &= \sum_{i=1}^{4} \left\langle \mathcal{R}_i(k) - \alpha(k) \left( \mathcal{Q}_i^x(k) + \mathcal{Q}_i^y(k) \right), \mathcal{P}_i^*(k) \right\rangle = \sum_{i=1}^{4} \langle \mathcal{R}_i(k), \mathcal{P}_i^*(k) \rangle - \alpha(k) \sum_{i=1}^{4} \langle \mathcal{Q}_i^x(k) + \mathcal{Q}_i^y(k), \mathcal{P}_i^*(k) \rangle\\ &= \sum_{i=1}^{4} \left( \langle \mathcal{R}_i(k), \mathcal{R}_i^*(k) \rangle + \beta(k-1) \langle \mathcal{R}_i(k), \mathcal{P}_i^*(k-1) \rangle \right) - \frac{\sum_{i=1}^{4} \langle \mathcal{R}_i(k), \mathcal{R}_i^*(k) \rangle}{\sum_{i=1}^{4} \langle \mathcal{P}_i^*(k), \mathcal{Q}_i^x(k) + \mathcal{Q}_i^y(k) \rangle} \sum_{i=1}^{4} \langle \mathcal{P}_i^*(k), \mathcal{Q}_i^x(k) + \mathcal{Q}_i^y(k) \rangle = 0. \end{aligned}$$
Hence, the equality (14) holds for all $l > m$. Next, we prove that the equalities (12) and (13) hold for all $l > m$:
$$\sum_{i=1}^{4} \langle \mathcal{R}_i(k+1), \mathcal{R}_i^*(m) \rangle = \sum_{i=1}^{4} \left\langle \mathcal{R}_i(k+1), \mathcal{P}_i^*(m) - \beta(m-1) \mathcal{P}_i^*(m-1) \right\rangle = \sum_{i=1}^{4} \langle \mathcal{R}_i(k+1), \mathcal{P}_i^*(m) \rangle - \beta(m-1) \sum_{i=1}^{4} \langle \mathcal{R}_i(k+1), \mathcal{P}_i^*(m-1) \rangle = 0,$$
$$\sum_{i=1}^{4} \langle \mathcal{R}_i(k+1), \mathcal{R}_i^*(k) \rangle = \sum_{i=1}^{4} \left\langle \mathcal{R}_i(k+1), \mathcal{P}_i^*(k) - \beta(k-1) \mathcal{P}_i^*(k-1) \right\rangle = \sum_{i=1}^{4} \langle \mathcal{R}_i(k+1), \mathcal{P}_i^*(k) \rangle - \beta(k-1) \sum_{i=1}^{4} \langle \mathcal{R}_i(k+1), \mathcal{P}_i^*(k-1) \rangle = 0,$$
and
$$\begin{aligned} \sum_{i=1}^{4} \langle \mathcal{Q}_i^x(k+1) + \mathcal{Q}_i^y(k+1), \mathcal{P}_i^*(m) \rangle &= \sum_{i=1}^{4} \langle \mathcal{Q}_i^x(k+1), \mathcal{P}_i^*(m) \rangle + \sum_{i=1}^{4} \langle \mathcal{Q}_i^y(k+1), \mathcal{P}_i^*(m) \rangle\\ &= \sum_{i=1}^{4} \left( \langle \mathcal{R}_i(k+1), \Gamma_i^*[\mathcal{P}_1^*(m), \mathcal{P}_2^*(m), \mathcal{P}_3^*(m), \mathcal{P}_4^*(m)] \rangle + \beta(k) \langle \mathcal{P}_i(k), \Gamma_i^*[\mathcal{P}_1^*(m), \mathcal{P}_2^*(m), \mathcal{P}_3^*(m), \mathcal{P}_4^*(m)] \rangle \right)\\ &\quad + \sum_{i=1}^{4} \left( \langle \mathcal{R}_i(k+1), \Phi_i^*[\mathcal{P}_1^*(m), \mathcal{P}_2^*(m), \mathcal{P}_3^*(m), \mathcal{P}_4^*(m)] \rangle + \beta(k) \langle \mathcal{P}_i(k), \Phi_i^*[\mathcal{P}_1^*(m), \mathcal{P}_2^*(m), \mathcal{P}_3^*(m), \mathcal{P}_4^*(m)] \rangle \right)\\ &= \sum_{i=1}^{4} \left( \langle \mathcal{R}_i(k+1), \mathcal{Q}_i^{x*}(m) \rangle + \beta(k) \langle \mathcal{P}_i(k), \mathcal{Q}_i^{x*}(m) \rangle + \langle \mathcal{R}_i(k+1), \mathcal{Q}_i^{y*}(m) \rangle + \beta(k) \langle \mathcal{P}_i(k), \mathcal{Q}_i^{y*}(m) \rangle \right)\\ &= \sum_{i=1}^{4} \left( \left\langle \mathcal{R}_i(k+1), \tfrac{1}{\alpha(m)} \left( \mathcal{R}_i^*(m) - \mathcal{R}_i^*(m+1) \right) \right\rangle + \beta(k) \left\langle \mathcal{P}_i(k), \tfrac{1}{\alpha(m)} \left( \mathcal{R}_i^*(m) - \mathcal{R}_i^*(m+1) \right) \right\rangle \right) = 0, \end{aligned}$$
$$\begin{aligned} \sum_{i=1}^{4} \langle \mathcal{Q}_i^x(k+1) + \mathcal{Q}_i^y(k+1), \mathcal{P}_i^*(k) \rangle &= \sum_{i=1}^{4} \langle \mathcal{Q}_i^x(k+1), \mathcal{P}_i^*(k) \rangle + \sum_{i=1}^{4} \langle \mathcal{Q}_i^y(k+1), \mathcal{P}_i^*(k) \rangle\\ &= \sum_{i=1}^{4} \left( \langle \mathcal{R}_i(k+1), \Gamma_i^*[\mathcal{P}_1^*(k), \mathcal{P}_2^*(k), \mathcal{P}_3^*(k), \mathcal{P}_4^*(k)] \rangle + \beta(k) \langle \mathcal{P}_i(k), \Gamma_i^*[\mathcal{P}_1^*(k), \mathcal{P}_2^*(k), \mathcal{P}_3^*(k), \mathcal{P}_4^*(k)] \rangle \right)\\ &\quad + \sum_{i=1}^{4} \left( \langle \mathcal{R}_i(k+1), \Phi_i^*[\mathcal{P}_1^*(k), \mathcal{P}_2^*(k), \mathcal{P}_3^*(k), \mathcal{P}_4^*(k)] \rangle + \beta(k) \langle \mathcal{P}_i(k), \Phi_i^*[\mathcal{P}_1^*(k), \mathcal{P}_2^*(k), \mathcal{P}_3^*(k), \mathcal{P}_4^*(k)] \rangle \right)\\ &= \sum_{i=1}^{4} \left( \langle \mathcal{R}_i(k+1), \mathcal{Q}_i^{x*}(k) \rangle + \beta(k) \langle \mathcal{P}_i(k), \mathcal{Q}_i^{x*}(k) \rangle + \langle \mathcal{R}_i(k+1), \mathcal{Q}_i^{y*}(k) \rangle + \beta(k) \langle \mathcal{P}_i(k), \mathcal{Q}_i^{y*}(k) \rangle \right)\\ &= \sum_{i=1}^{4} \langle \mathcal{R}_i(k+1), \mathcal{Q}_i^{x*}(k) + \mathcal{Q}_i^{y*}(k) \rangle - \frac{\sum_{i=1}^{4} \langle \mathcal{R}_i(k+1), \mathcal{Q}_i^{x*}(k) + \mathcal{Q}_i^{y*}(k) \rangle}{\sum_{i=1}^{4} \langle \mathcal{P}_i(k), \mathcal{Q}_i^{x*}(k) + \mathcal{Q}_i^{y*}(k) \rangle} \sum_{i=1}^{4} \langle \mathcal{P}_i(k), \mathcal{Q}_i^{x*}(k) + \mathcal{Q}_i^{y*}(k) \rangle = 0. \end{aligned}$$
Similarly, the equalities (12) and (13) also hold for the case $1 \le l < m \le k$. Therefore, (12) and (13) are satisfied for all $l \ne m$. □
Corollary 1.
Assume the conditions in Theorem 2 are satisfied. Then
$$\sum_{i=1}^{4} \langle \mathcal{R}_i(k), \mathcal{P}_i^*(k) \rangle = \sum_{i=1}^{4} \langle \mathcal{R}_i(k), \mathcal{R}_i^*(k) \rangle, \tag{15}$$
$$\sum_{i=1}^{4} \langle \mathcal{Q}_i^x(k) + \mathcal{Q}_i^y(k), \mathcal{R}_i^*(k) \rangle = \sum_{i=1}^{4} \langle \mathcal{Q}_i^x(k) + \mathcal{Q}_i^y(k), \mathcal{P}_i^*(k) \rangle. \tag{16}$$
Proof of Corollary 1.
From Algorithm 1 and Theorem 2, we have
$$\sum_{i=1}^{4} \langle \mathcal{R}_i(k), \mathcal{P}_i^*(k) \rangle = \sum_{i=1}^{4} \left\langle \mathcal{R}_i(k), \mathcal{R}_i^*(k) + \beta(k-1) \mathcal{P}_i^*(k-1) \right\rangle = \sum_{i=1}^{4} \langle \mathcal{R}_i(k), \mathcal{R}_i^*(k) \rangle,$$
and
$$\sum_{i=1}^{4} \langle \mathcal{Q}_i^x(k) + \mathcal{Q}_i^y(k), \mathcal{R}_i^*(k) \rangle = \sum_{i=1}^{4} \left\langle \mathcal{Q}_i^x(k) + \mathcal{Q}_i^y(k), \mathcal{P}_i^*(k) - \beta(k-1) \mathcal{P}_i^*(k-1) \right\rangle = \sum_{i=1}^{4} \langle \mathcal{Q}_i^x(k) + \mathcal{Q}_i^y(k), \mathcal{P}_i^*(k) \rangle. \; \square$$
Theorem 3.
Let the tensor sequences $\{\mathcal{X}_i(k)\}$, $\{\mathcal{Y}_i(k)\}$ $(i = 1, 2, 3, 4)$ be generated by Algorithm 1. If Algorithm 1 does not break down, then the tensor sequences
$$\left\{ [\mathcal{X}(k), \mathcal{Y}(k)] \mid \mathcal{X}(k) = \mathcal{X}_1(k) + \mathcal{X}_2(k) \mathbf{i} + \mathcal{X}_3(k) \mathbf{j} + \mathcal{X}_4(k) \mathbf{k}, \; \mathcal{Y}(k) = \mathcal{Y}_1(k) + \mathcal{Y}_2(k) \mathbf{i} + \mathcal{Y}_3(k) \mathbf{j} + \mathcal{Y}_4(k) \mathbf{k}, \; k = 1, 2, \ldots \right\}$$
converge to the solution of Equation (2) within finitely many iteration steps in the absence of round-off errors.
Proof of Theorem 3.
We will prove that there exists a $k \le 4 S^N$ such that $\mathcal{R}_i(k) = 0$. By contradiction, assume that $\mathcal{R}_i(k) \ne 0$, $i = 1, 2, 3, 4$, for all $k \le 4 S^N$, so that we can compute $\mathcal{R}_i(4 S^N + 1)$. Suppose that $\mathcal{R}_i(1), \mathcal{R}_i(2), \ldots, \mathcal{R}_i(4 S^N)$ is a linearly dependent sequence; then there exist real numbers $\lambda_{i,1}, \ldots, \lambda_{i,4 S^N}$, not all zero, such that
$$\lambda_{i,1} \mathcal{R}_i(1) + \cdots + \lambda_{i,4 S^N} \mathcal{R}_i(4 S^N) = 0,$$
for $i = 1, 2, 3, 4$. Then
$$\sum_{i=1}^{4} \langle \mathcal{R}_i(l), 0 \rangle = \sum_{i=1}^{4} \left\langle \mathcal{R}_i(l), \lambda_{i,1} \mathcal{R}_i(1) + \cdots + \lambda_{i,4 S^N} \mathcal{R}_i(4 S^N) \right\rangle = \sum_{i=1}^{4} \lambda_{i,l} \langle \mathcal{R}_i(l), \mathcal{R}_i(l) \rangle = 0,$$

which implies $\sum_{i=1}^{4} \langle \mathcal{R}_i(l), \mathcal{R}_i(l) \rangle = 0$. This is a contradiction, since we cannot compute $\mathcal{R}_i(4 S^N + 1)$ in this case. Therefore, there must exist a $k \le 4 S^N$ such that $\mathcal{R}_i(k) = 0$; that is, the exact solution of the tensor Equation (2) can be computed by Algorithm 1 within finitely many iteration steps in the absence of round-off errors. □
In the following theorem, we show that if we choose special kinds of initial tensors, then Algorithm 1 yields the unique minimal Frobenius norm solution of the tensor Equation (2).
Theorem 4.
If we choose the initial tensors as
$$\mathcal{X}_j(1) = \Gamma_j^*[\mathcal{H}_1(1), \mathcal{H}_2(1), \mathcal{H}_3(1), \mathcal{H}_4(1)], \qquad \mathcal{Y}_j(1) = \Phi_j^*[\mathcal{H}_1(1), \mathcal{H}_2(1), \mathcal{H}_3(1), \mathcal{H}_4(1)], \tag{17}$$
where $\Gamma_j^*, \Phi_j^*$ $(j = 1, 2, 3, 4)$ are defined by (10) and (11) and $\mathcal{H}_i(1) \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ $(i = 1, 2, 3, 4)$ are arbitrary tensors (in particular, we may take $\mathcal{X}_j(1) = \mathcal{O}$, $\mathcal{Y}_j(1) = \mathcal{O}$, $j = 1, 2, 3, 4$), then the solution group $\tilde{\mathcal{X}}_j, \tilde{\mathcal{Y}}_j$ $(j = 1, 2, 3, 4)$ given by Algorithm 1 is the unique minimal Frobenius norm solution of the tensor Equation (2).
Proof of Theorem 4.
If we choose the initial tensors as in (17), it is not difficult to verify that the $\tilde{\mathcal{X}}_j, \tilde{\mathcal{Y}}_j$ $(j = 1, 2, 3, 4)$ obtained by Algorithm 1 have the form
$$\tilde{\mathcal{X}}_j = \Gamma_j^*[\mathcal{H}_1, \mathcal{H}_2, \mathcal{H}_3, \mathcal{H}_4], \qquad \tilde{\mathcal{Y}}_j = \Phi_j^*[\mathcal{H}_1, \mathcal{H}_2, \mathcal{H}_3, \mathcal{H}_4],$$
where $\mathcal{H}_i \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ $(i = 1, 2, 3, 4)$. Now we show that $\tilde{\mathcal{X}} = \tilde{\mathcal{X}}_1 + \tilde{\mathcal{X}}_2 \mathbf{i} + \tilde{\mathcal{X}}_3 \mathbf{j} + \tilde{\mathcal{X}}_4 \mathbf{k}$, $\tilde{\mathcal{Y}} = \tilde{\mathcal{Y}}_1 + \tilde{\mathcal{Y}}_2 \mathbf{i} + \tilde{\mathcal{Y}}_3 \mathbf{j} + \tilde{\mathcal{Y}}_4 \mathbf{k}$ is the unique minimal Frobenius norm solution of the tensor Equation (2). Let
$$\tilde{z} = \begin{pmatrix} \operatorname{vec}(\tilde{\mathcal{X}}_1)\\ \vdots\\ \operatorname{vec}(\tilde{\mathcal{X}}_4)\\ \operatorname{vec}(\tilde{\mathcal{Y}}_1)\\ \vdots\\ \operatorname{vec}(\tilde{\mathcal{Y}}_4) \end{pmatrix} = \begin{pmatrix} M_A^T\\ M_B^T \end{pmatrix} \begin{pmatrix} \operatorname{vec}(\mathcal{H}_1)\\ \operatorname{vec}(\mathcal{H}_2)\\ \operatorname{vec}(\mathcal{H}_3)\\ \operatorname{vec}(\mathcal{H}_4) \end{pmatrix} = [M_A, M_B]^T \begin{pmatrix} \operatorname{vec}(\mathcal{H}_1)\\ \operatorname{vec}(\mathcal{H}_2)\\ \operatorname{vec}(\mathcal{H}_3)\\ \operatorname{vec}(\mathcal{H}_4) \end{pmatrix} \in \mathcal{R}\left( [M_A, M_B]^T \right),$$

where the second equality follows by applying the operator $\operatorname{vec}$ to (10) and (11) and comparing with the blocks of $M_A$ and $M_B$ in Lemma 1.
According to Theorem 1, we conclude that $\tilde{\mathcal{X}} = \tilde{\mathcal{X}}_1 + \tilde{\mathcal{X}}_2 \mathbf{i} + \tilde{\mathcal{X}}_3 \mathbf{j} + \tilde{\mathcal{X}}_4 \mathbf{k}$, $\tilde{\mathcal{Y}} = \tilde{\mathcal{Y}}_1 + \tilde{\mathcal{Y}}_2 \mathbf{i} + \tilde{\mathcal{Y}}_3 \mathbf{j} + \tilde{\mathcal{Y}}_4 \mathbf{k}$ generated by Algorithm 1 is the unique minimal Frobenius norm solution of the tensor Equation (2). □
Now we solve Problem 1.2. If the tensor Equation (2) is consistent, the solution set $S_{XY}$ of Problem 1.1 is nonempty. For given tensors $\bar{\mathcal{X}}, \bar{\mathcal{Y}} \in \mathbb{H}^{I_1 \times I_2 \times \cdots \times I_N}$, we have
$$\min_{\mathcal{X}, \mathcal{Y} \in \mathbb{H}^{I_1 \times I_2 \times \cdots \times I_N}} \left\| \sum_{k=1}^{N} \mathcal{X} \times_k A^{(k)} + \mathcal{Y} \times_k B^{(k)} - \mathcal{C} \right\| = \min_{\mathcal{X}, \mathcal{Y} \in \mathbb{H}^{I_1 \times I_2 \times \cdots \times I_N}} \left\| \sum_{k=1}^{N} \left( (\mathcal{X} - \bar{\mathcal{X}}) \times_k A^{(k)} + (\mathcal{Y} - \bar{\mathcal{Y}}) \times_k B^{(k)} \right) - \left( \mathcal{C} - \sum_{k=1}^{N} \bar{\mathcal{X}} \times_k A^{(k)} + \bar{\mathcal{Y}} \times_k B^{(k)} \right) \right\|.$$
Let $\tilde{\mathcal{X}} = \mathcal{X} - \bar{\mathcal{X}}$, $\tilde{\mathcal{Y}} = \mathcal{Y} - \bar{\mathcal{Y}}$, and $\tilde{\mathcal{C}} = \mathcal{C} - \sum_{k=1}^{N} \left( \bar{\mathcal{X}} \times_k A^{(k)} + \bar{\mathcal{Y}} \times_k B^{(k)} \right)$. Then the tensor nearness Problem 1.2 is equivalent to first finding the minimal Frobenius norm solution of the tensor equation
$$\sum_{k=1}^{N} \tilde{\mathcal{X}} \times_k A^{(k)} + \tilde{\mathcal{Y}} \times_k B^{(k)} = \tilde{\mathcal{C}}. \tag{18}$$
Applying Algorithm 1 with initial tensors $\tilde{\mathcal{X}} = \tilde{\mathcal{X}}_1 + \tilde{\mathcal{X}}_2 \mathbf{i} + \tilde{\mathcal{X}}_3 \mathbf{j} + \tilde{\mathcal{X}}_4 \mathbf{k}$, $\tilde{\mathcal{Y}} = \tilde{\mathcal{Y}}_1 + \tilde{\mathcal{Y}}_2 \mathbf{i} + \tilde{\mathcal{Y}}_3 \mathbf{j} + \tilde{\mathcal{Y}}_4 \mathbf{k}$, where $\tilde{\mathcal{X}}_j(1) = \Gamma_j^*[\mathcal{H}_1(1), \mathcal{H}_2(1), \mathcal{H}_3(1), \mathcal{H}_4(1)]$ and $\tilde{\mathcal{Y}}_j(1) = \Phi_j^*[\mathcal{H}_1(1), \mathcal{H}_2(1), \mathcal{H}_3(1), \mathcal{H}_4(1)]$ $(j = 1, 2, 3, 4)$ with arbitrary tensors $\mathcal{H}_i(1) \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ $(i = 1, 2, 3, 4)$ (in particular, we may take $\tilde{\mathcal{X}}_j(1) = \mathcal{O}$, $\tilde{\mathcal{Y}}_j(1) = \mathcal{O}$, $j = 1, 2, 3, 4$), we can obtain the unique minimal Frobenius norm solution $\tilde{\mathcal{X}}^*, \tilde{\mathcal{Y}}^*$ of Equation (18). Once $\tilde{\mathcal{X}}^*, \tilde{\mathcal{Y}}^*$ are obtained, the unique solution $\check{\mathcal{X}}, \check{\mathcal{Y}}$ of Problem 1.2 can be computed as $\check{\mathcal{X}} = \tilde{\mathcal{X}}^* + \bar{\mathcal{X}}$, $\check{\mathcal{Y}} = \tilde{\mathcal{Y}}^* + \bar{\mathcal{Y}}$.

4. Numerical Examples

In this section, we give some numerical examples to demonstrate the efficiency and applications of Algorithm 1. All codes were written in Matlab R2018a and run on a machine with a 2.3 GHz Intel(R) Core(TM) i5 CPU and 8 GB of memory. All tensor operations were implemented with the Tensor Toolbox (version 3.2.1) of Bader and Kolda [27]. For all examples, the iterations begin with the initial values $\mathcal{X}_i = \mathcal{Y}_i = \mathcal{O}$, $i = 1, 2, 3, 4$, in Algorithm 1, and the stopping criterion is $\mathrm{Res} \le 10^{-5}$ or the number of iteration steps exceeding 2000. The notation used in the examples is described in Table 1.
Example 1.
We consider the tensor Equation (2) with:
$$A^{(1)} = \begin{pmatrix} 2+4\mathbf{i}+4\mathbf{j}+5\mathbf{k} & 1-\mathbf{i}+\mathbf{j}+2\mathbf{k} & 2\mathbf{i}-2\mathbf{j}+\mathbf{k}\\ 2+\mathbf{j} & 3+2\mathbf{i}+5\mathbf{j}+\mathbf{k} & 1-2\mathbf{j}+\mathbf{k}\\ 2+2\mathbf{i}+\mathbf{j} & 1-2\mathbf{i}+2\mathbf{j}-\mathbf{k} & 3+4\mathbf{i}+4\mathbf{j}+\mathbf{k} \end{pmatrix}, \qquad A^{(2)} = \begin{pmatrix} 3+4\mathbf{j} & 1+\mathbf{j} & 1-2\mathbf{j}\\ 2+\mathbf{j} & 4+3\mathbf{j} & \mathbf{j}\\ \mathbf{j} & 1 & 2+\mathbf{j} \end{pmatrix},$$

$$A^{(3)} = \begin{pmatrix} 2\mathbf{i}+3\mathbf{j} & \mathbf{i} & 1+\mathbf{j}\\ 2\mathbf{i}-\mathbf{j} & 5\mathbf{i}+3\mathbf{j} & \mathbf{i}\\ 2\mathbf{i}-\mathbf{j} & \mathbf{i}-\mathbf{j} & 5\mathbf{i}+4\mathbf{j} \end{pmatrix}, \qquad B^{(1)} = \begin{pmatrix} 3+3\mathbf{i} & 2+\mathbf{i} & 2\mathbf{i}\\ 2+\mathbf{i} & 4+4\mathbf{i} & 2-2\mathbf{i}\\ 2+2\mathbf{i} & 2 & 6+4\mathbf{i} \end{pmatrix},$$

$$B^{(2)} = \begin{pmatrix} 3\mathbf{i}+5\mathbf{j} & 2\mathbf{i} & 2\mathbf{i}+2\mathbf{j}\\ 0 & 2\mathbf{i}+3\mathbf{j} & \mathbf{j}\\ 2\mathbf{i}+2\mathbf{j} & 2\mathbf{i}-2\mathbf{j} & 6\mathbf{i}+6\mathbf{j} \end{pmatrix}, \qquad B^{(3)} = \begin{pmatrix} 1+3\mathbf{i}+3\mathbf{k} & 1-2\mathbf{i} & \mathbf{i}+2\mathbf{k}\\ \mathbf{i} & 2+4\mathbf{i}+2\mathbf{k} & 2+2\mathbf{i}\\ 2+\mathbf{i}+2\mathbf{k} & 2+\mathbf{i}-2\mathbf{k} & 5+4\mathbf{i}+6\mathbf{k} \end{pmatrix},$$

$$\mathcal{C}(:,:,1) = \begin{pmatrix} 3-11\mathbf{i}+31\mathbf{j}+19\mathbf{k} & 6-8\mathbf{i}+\mathbf{j}+28\mathbf{k} & 2-6\mathbf{i}+20\mathbf{j}+24\mathbf{k}\\ 4+36\mathbf{j}+8\mathbf{k} & 7+3\mathbf{i}+27\mathbf{j}+17\mathbf{k} & 3+5\mathbf{i}+25\mathbf{j}+13\mathbf{k}\\ 24-2\mathbf{i}+52\mathbf{j}+22\mathbf{k} & 27+\mathbf{i}+43\mathbf{j}+31\mathbf{k} & 23+3\mathbf{i}+41\mathbf{j}+27\mathbf{k} \end{pmatrix},$$

$$\mathcal{C}(:,:,2) = \begin{pmatrix} 11-13\mathbf{i}+31\mathbf{j}+13\mathbf{k} & 14-10\mathbf{i}+22\mathbf{j}+22\mathbf{k} & 10-8\mathbf{i}+20\mathbf{j}+18\mathbf{k}\\ 12-2\mathbf{i}+36\mathbf{j}+2\mathbf{k} & 15+\mathbf{i}+27\mathbf{j}+11\mathbf{k} & 11+3\mathbf{i}+25\mathbf{j}+7\mathbf{k}\\ 32-4\mathbf{i}+52\mathbf{j}+16\mathbf{k} & 35-\mathbf{i}+43\mathbf{j}+25\mathbf{k} & 31+\mathbf{i}+41\mathbf{j}+21\mathbf{k} \end{pmatrix},$$

$$\mathcal{C}(:,:,3) = \begin{pmatrix} 9-9\mathbf{i}+45\mathbf{j}+25\mathbf{k} & 12-6\mathbf{i}+36\mathbf{j}+34\mathbf{k} & 8-4\mathbf{i}+34\mathbf{j}+30\mathbf{k}\\ 10+2\mathbf{i}+50\mathbf{j}+14\mathbf{k} & 13+5\mathbf{i}+41\mathbf{j}+23\mathbf{k} & 9+7\mathbf{i}+39\mathbf{j}+19\mathbf{k}\\ 30+66\mathbf{j}+28\mathbf{k} & 33+3\mathbf{i}+57\mathbf{j}+3\mathbf{k} & 29+5\mathbf{i}+55\mathbf{j}+33\mathbf{k} \end{pmatrix}.$$
Applying Algorithm 1, the IT is 46, the CPU time is 3.9496 s, and the Res is 1.2885e-06. In addition, Figure 1 illustrates that Algorithm 1 is feasible.
Example 2
(Test matrices from [1]). In this example, we consider the quaternion tensor Equation (2) such that
$$A^{(m)} = A_{m1} + A_{m2}\mathbf{i} + A_{m3}\mathbf{j} + A_{m4}\mathbf{k}, \qquad B^{(m)} = B_{m1} + B_{m2}\mathbf{i} + B_{m3}\mathbf{j} + B_{m4}\mathbf{k}, \qquad m = 1, 2, 3,$$
where
$A_{11}$ = triu(hilb(n)), $A_{12}$ = triu(ones(n,n)), $A_{13}$ = eye(n), $A_{14}$ = ones(n),
$A_{21}$ = zeros(n), $A_{22}$ = zeros(n), $A_{23}$ = tridiag(1,2,1,n), $A_{24}$ = zeros(n),
$A_{31}$ = zeros(n), $A_{32}$ = tridiag(0.5,6,0.5,n), $A_{33}$ = eye(n), $A_{34}$ = zeros(n),

$B_{11}$ = eye(n), $B_{12}$ = ones(n), $B_{13}$ = zeros(n), $B_{14}$ = zeros(n),
$B_{21}$ = zeros(n), $B_{22}$ = tridiag(0.5,6,0.5,n), $B_{23}$ = eye(n), $B_{24}$ = zeros(n),
$B_{31}$ = tridiag(0.5,6,0.5,n), $B_{32}$ = eye(n), $B_{33}$ = zeros(n), $B_{34}$ = ones(n),

and $\mathcal{C}$ = tenrand(n,n,n) + tenrand(n,n,n)$\mathbf{i}$ + tenrand(n,n,n)$\mathbf{j}$ + tenrand(n,n,n)$\mathbf{k}$.
Choosing the initial tensors $\mathcal{X} = \mathcal{Y} = \mathcal{O}$, we depict the convergence curves of Algorithm 1 for different $n$ in Figure 2. For $n = 20$, $n = 40$, and $n = 60$, we list the number of iteration steps, the CPU time, and the residual norms obtained by Algorithm 1 in Table 2.
Example 3.
We consider the solution of the following convection-diffusion equation over the quaternion algebra [4]:
$$\begin{cases} -v \Delta u + \mathbf{c}^T \nabla u = f & \text{in } \Gamma = [0,1] \times [0,1] \times [0,1],\\ u = 0 & \text{on } \partial \Gamma. \end{cases}$$
Based on a standard finite difference discretization of the diffusion term on a uniform grid and a second-order convergent scheme (Fromm's scheme) for the convection term, with mesh size $h = \frac{1}{p+1}$, we solve the quaternion tensor Equation (3) with
$$A^{(n)} = A_1^{(n)} + A_2^{(n)} \mathbf{i} + A_3^{(n)} \mathbf{j} + A_4^{(n)} \mathbf{k}, \quad n = 1, 2, 3,$$
where
$$A_i^{(n)} = \frac{v_i^{(n)}}{h^2} \begin{pmatrix} 2 & -1 & & \\ -1 & 2 & -1 & \\ & \ddots & \ddots & \ddots \\ & & -1 & 2 \end{pmatrix}_{p \times p} + \frac{c_i^{(n)}}{4h} \begin{pmatrix} 3 & -5 & 1 & & \\ 1 & 3 & -5 & 1 & \\ & \ddots & \ddots & \ddots & \ddots \\ & & 1 & 3 & -5 \\ & & & 1 & 3 \end{pmatrix}_{p \times p}, \quad i = 1, 2, 3, 4.$$
The right-hand side tensor $\mathcal{C}$ is constructed such that the exact solution of Equation (3) is $\mathcal{X}^* = \mathcal{X}_1^* + \mathcal{X}_2^* \mathbf{i} + \mathcal{X}_3^* \mathbf{j} + \mathcal{X}_4^* \mathbf{k} = \mathrm{tenones}(p,p,p) + \mathrm{tenones}(p,p,p)\mathbf{i} + \mathrm{tenones}(p,p,p)\mathbf{j} + \mathrm{tenones}(p,p,p)\mathbf{k}$.
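Assuming the stencils displayed above (the signs are reconstructed from the standard central-difference and Fromm discretizations), the real coefficient matrices of this example can be assembled in MATLAB with spdiags:

```matlab
% Sketch: assemble one A_i^{(n)} for Example 3 (assumed stencils as above).
p = 10;  h = 1/(p+1);  v = 1;  c = 1;   % example parameter choices
e = ones(p, 1);
T = spdiags([-e, 2*e, -e], -1:1, p, p);        % diffusion: tridiag(-1, 2, -1)
F = spdiags([e, 3*e, -5*e, e], -1:2, p, p);    % Fromm-type convection stencil
A = (v/h^2) * T + (c/(4*h)) * F;
```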
We consider two cases in order to compare Algorithm 1 with the CGLS algorithm in [49]. In Case I, we choose different $v_i^{(n)}$ and $c_i^{(n)}$. In Table 3, we set
$$v_i^{(1)} = 1, \; c_i^{(1)} = 1; \qquad v_i^{(2)} = 1, \; c_i^{(2)} = 2; \qquad v_i^{(3)} = 1, \; c_i^{(3)} = 3, \qquad i = 1, 2, 3, 4, \tag{19}$$
to get $A^{(n)}$ $(n = 1, 2, 3)$ with the same real part and imaginary parts. In Table 4, we set
$$\begin{aligned} & v_i^{(1)} = 1, \; c_1^{(1)} = 1, \; c_2^{(1)} = 1, \; c_3^{(1)} = 0, \; c_4^{(1)} = 1;\\ & v_i^{(2)} = 0.1, \; c_1^{(2)} = 1, \; c_2^{(2)} = 1, \; c_3^{(2)} = 1, \; c_4^{(2)} = 0;\\ & v_i^{(3)} = 0.01, \; c_1^{(3)} = 1, \; c_2^{(3)} = 1, \; c_3^{(3)} = 0, \; c_4^{(3)} = 0, \qquad i = 1, 2, 3, 4, \end{aligned} \tag{20}$$
to get $A^{(n)}$ $(n = 1, 2, 3)$ with different real parts and imaginary parts.
In Case II, we set $c_i^{(1)} = 1$, $c_i^{(2)} = 2$, $c_i^{(3)} = 3$, $i = 1, 2, 3, 4$, and apply Algorithm 1 and the CGLS algorithm in [49] with $v_i^{(j)} = 10^{-3}, 1, 100$, $i = 1, 2, 3, 4$, $j = 1, 2, 3$, on the grid $10 \times 10 \times 10$. The corresponding relative errors of the approximate solutions, $\sum_{i=1}^{4} \|\mathcal{X}_i(k) - \mathcal{X}_i^*\| / \|\mathcal{X}_i^*\|$, computed by these methods are presented in Table 5.
The above results show that Algorithm 1 converges faster than the CGLS algorithm in [49] as $p$ increases.
Example 4.
In this example, we employ Algorithm 1 and compare its performance with the CGLS algorithm in restoring a color video comprising a sequence of RGB images (slices). The video, named 'rhinos', is sourced from Matlab and stored in AVI format. Each frontal slice of this color video is represented by a pure quaternion matrix of $240 \times 320$ pixels. For $\mathcal{C} = \hat{\mathcal{C}} + \mathcal{N}$ with $\hat{\mathcal{C}} = \mathcal{X} \times_1 A$, we take $\mathcal{X}$ to represent the original color video, $A$ the blurring matrix, and $\mathcal{N}$ a noise tensor. When $\mathcal{N} = 0$, $\mathcal{C}$ is referred to as the blurred and noise-free color video. In this scenario, we select the blurring matrix $A = A_1 \otimes A_2 \in \mathbb{R}^{240 \times 320}$, where $A_1 = \left( a_{ij}^{(1)} \right)_{1 \le i, j \le 16}$ and $A_2 = \left( a_{ij}^{(2)} \right)_{1 \le i \le 15, 1 \le j \le 20}$ are Toeplitz matrices with entries defined as
$$a_{ij}^{(1)} = \begin{cases} \dfrac{1}{\sigma \sqrt{2\pi}} \exp\left( -\dfrac{(i-j)^2}{2\sigma^2} \right), & |i-j| \le r,\\ 0, & \text{otherwise}; \end{cases} \qquad a_{ij}^{(2)} = \begin{cases} \dfrac{1}{2s-1}, & |i-j| \le s,\\ 0, & \text{otherwise}. \end{cases}$$
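For reference, the two Toeplitz blur factors and $A = A_1 \otimes A_2$ can be generated as follows (a sketch with the sizes stated above; the variable names are ours):

```matlab
% Sketch: banded Gaussian and uniform Toeplitz blur factors of Example 4.
sigma = 1;  r = 6;  s = 6;
g = @(k) exp(-k.^2 / (2*sigma^2)) / (sigma*sqrt(2*pi));
[I1, J1] = ndgrid(1:16, 1:16);
[I2, J2] = ndgrid(1:15, 1:20);
A1 = (abs(I1 - J1) <= r) .* g(I1 - J1);        % a_{ij}^{(1)}
A2 = (abs(I2 - J2) <= s) / (2*s - 1);          % a_{ij}^{(2)}
A  = kron(A1, A2);                             % 240-by-320 blurring matrix
```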
We denote by $\mathcal{X}_{\mathrm{restored}}$ the resulting restored color video. The algorithm's performance is assessed using the peak signal-to-noise ratio (PSNR), measured in decibels (dB):
$$\operatorname{PSNR}(\mathcal{X}) = 10 \log_{10} \frac{I_1 I_2 d^2}{\|\mathcal{X} - \mathcal{X}_{\mathrm{restored}}\|^2},$$
where $d$ represents the maximum possible pixel value of the image, and $\operatorname{RE}(\mathcal{X})$ denotes the relative error, defined as
$$\operatorname{RE}(\mathcal{X}) = \frac{\|\mathcal{X} - \mathcal{X}_{\mathrm{restored}}\|}{\|\mathcal{X}\|}.$$
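With $d = 255$ and a $240 \times 320$ frame, as in this example, the two metrics reduce to a few lines of MATLAB (X and Xrest are hypothetical real arrays holding corresponding channels of the original and restored frames):

```matlab
% Sketch: PSNR (in dB) and relative error of a restored frame.
d = 255;  I1 = 240;  I2 = 320;
err = norm(X(:) - Xrest(:));
psnr_db = 10 * log10(I1 * I2 * d^2 / err^2);
re = err / norm(X(:));
```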
Here, we set $d = 255$ and the variance $\sigma = 1$. The PSNR and the relative error (RE) of Algorithm 1 and the CGLS algorithm with different parameters are presented in Table 6. As indicated, the PSNR and the relative error of our algorithm are significantly superior to those of CGLS. For the case $r = 6$ and $s = 6$ with slice No. 7 of the color video, we show the original image, the blurred image, and the images restored by CGLS and by Algorithm 1 in Figure 3. This figure demonstrates that our algorithm can effectively restore blurred and noise-free color video with high quality.

5. Conclusions

The main goal of this paper is to solve the generalized quaternion tensor Equation (2). We develop a BiCG iterative algorithm based on tensor format to solve Equation (2) efficiently, and we prove the convergence of the proposed method. Moreover, we show that the minimal Frobenius norm solution can be obtained by choosing specific types of initial tensors. We present several examples to demonstrate the efficiency of our algorithm, and we successfully apply it to the restoration of color videos. This contribution advances the current understanding of quaternion tensor equations by introducing a practical iterative approach.

Author Contributions

All authors contributed equally to conceptualization, formal analysis, investigation, methodology, software, validation, writing the original draft, and review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research gratefully acknowledges financial support from the National Natural Science Foundation of China.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to thank the editor and reviewers for their valuable suggestions and comments. This work was supported by the National Natural Science Foundation of China under Grant Nos. 12371023 and 12301028, and by the Canada NSERC under Grant No. RGPIN-2020-06746.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ahmadi-Asl, S.; Beik, F.P.A. An efficient iterative algorithm for quaternionic least-squares problems over the generalized η-(anti-)bi-Hermitian matrices. Linear Multilinear Algebra 2017, 65, 1743–1769. [Google Scholar] [CrossRef]
  2. Ahmadi-Asl, S.; Beik, F.P.A. Iterative algorithms for least-squares solutions of a quaternion matrix equation. J. Appl. Math. Comput. 2017, 53, 95–127. [Google Scholar] [CrossRef]
  3. Bai, Z.-Z.; Golub, G.H.; Ng, M.K. Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 2003, 24, 603–626. [Google Scholar] [CrossRef]
  4. Ballani, J.; Grasedyck, L. A projection method to solve linear systems in tensor format. Numer. Linear Algebra Appl. 2013, 20, 27–43. [Google Scholar] [CrossRef]
  5. Bank, R.E.; Chan, T.-F. An analysis of the composite step biconjugate gradient method. Numer. Math. 1993, 66, 295–319. [Google Scholar] [CrossRef]
  6. Bank, R.E.; Chan, T.-F. A composite step bi-conjugate gradient algorithm for nonsymmetric linear systems. Numer. Algorithms 1994, 7, 1–16. [Google Scholar] [CrossRef]
  7. Beik, F.; Ahmadi-Asl, S. An iterative algorithm for η-(anti)-Hermitian least-squares solutions of quaternion matrix equations. Electron. J. Linear Algebra 2015, 30, 372–401. [Google Scholar] [CrossRef]
  8. Beik, F.P.A.; Saberi Movahed, F.; Ahmadi-Asl, S. On the Krylov subspace methods based on tensor format for positive definite Sylvester tensor equations. Numer. Linear Algebra Appl. 2016, 23, 444–466. [Google Scholar] [CrossRef]
  9. Beik, F.; Jbilou, K.; Najafi-Kalyani, M.; et al. Golub–Kahan bidiagonalization for ill-conditioned tensor equations with applications. Numer. Algorithms 2020, 84, 1535–1563. [Google Scholar] [CrossRef]
  10. Chen, Z.; Lu, L. A projection method and Kronecker product preconditioner for solving Sylvester tensor equations. Sci. China Math. 2012, 55, 1281–1292. [Google Scholar] [CrossRef]
  11. Duan, X.F.; Zhang, Y.S.; Wang, Q.W. An efficient iterative method for solving a class of constrained tensor least squares problem. Appl. Numer. Math. 2024, 196, 104–117. [Google Scholar] [CrossRef]
  12. Freund, R.W.; Golub, G.H.; Nachtigal, N.M. Iterative solution of linear systems. Acta Numer. 1992, 1, 44. [Google Scholar] [CrossRef]
  13. Gao, Z.-H.; Wang, Q.-W.; Xie, L. The (anti-)η-Hermitian solution to a novel system of matrix equations over the split quaternion algebra. Math. Meth. Appl. Sci. 2024, 1–18. [Google Scholar] [CrossRef]
  14. Grasedyck, L. Existence and computation of low Kronecker-rank approximations for large linear systems of tensor product structure. Computing 2004, 72, 247–265. [Google Scholar] [CrossRef]
  15. Guan, Y.; Chu, D. Numerical computation for orthogonal low-rank approximation of tensors. SIAM J. Matrix Anal. Appl. 2019, 40, 1047–1065. [Google Scholar] [CrossRef]
  16. Guan, Y.; Chu, M.T.; Chu, D. Convergence analysis of an SVD-based algorithm for the best rank-1 tensor approximation. Linear Algebra Appl. 2018, 555, 53–69. [Google Scholar] [CrossRef]
  17. Guan, Y.; Chu, M.T.; Chu, D. SVD-based algorithms for the best rank-1 approximation of a symmetric tensor. SIAM J. Matrix Anal. 2018, 39, 1095–1115. [Google Scholar] [CrossRef]
  18. Hajarian, M. Developing Bi-CG and Bi-CR methods to solve generalized Sylvester-transpose matrix equations. Int. J. Auto. Comput. 2014, 11(1), 25–29. [Google Scholar] [CrossRef]
  19. He, Z.-H.; Wang, X.-X.; Zhao, Y.-F. Eigenvalues of quaternion tensors with applications to color video processing. J. Sci. Comput. 2023, 94, 1. [Google Scholar] [CrossRef]
  20. Heyouni, M.; Saberi-Movahed, F.; Tajaddini, A. On global Hessenberg based methods for solving Sylvester matrix equations. Comput. Math. Appl. 2019, 77, 77–92. [Google Scholar] [CrossRef]
  21. Hu, J.; Ke, Y.; Ma, C. Efficient iterative method for generalized Sylvester quaternion tensor equation. Comput. Appl. Math. 2023, 42, 237. [Google Scholar] [CrossRef]
  22. Huang, N.; Ma, C.-F. Modified conjugate gradient method for obtaining the minimum-norm solution of the generalized coupled Sylvester-conjugate matrix equations. Appl. Math. Model. 2016, 40, 1260–1275. [Google Scholar] [CrossRef]
  23. Jia, Z.; Wei, M.; Zhao, M.-X.; et al. A new real structure-preserving quaternion QR algorithm. J. Comput. Appl. Math. 2018, 343, 26–48. [Google Scholar] [CrossRef]
  24. Karimi, S.; Dehghan, M. Global least squares method based on tensor form to solve linear systems in Kronecker format. Trans. Inst. Measure. Control 2018, 40, 2378–2386. [Google Scholar] [CrossRef]
  25. Ke, Y. Finite iterative algorithm for the complex generalized Sylvester tensor equations. J. Appl. Anal. Comput. 2020, 10, 972–985. [Google Scholar] [CrossRef] [PubMed]
  26. Kolda, T.G. Multilinear operators for higher-order decompositions; Sandia National Laboratory(SNL): Albuquerque, NM, and Livermore, CA, 2006. [Google Scholar]
  27. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500. [Google Scholar] [CrossRef]
  28. Kyrchei, I. Cramer’s rules for Sylvester quaternion matrix equation and its special cases. Adv. Appl. Clifford Algebras 2018, 28, 1–26. [Google Scholar] [CrossRef]
  29. Li, B.-W.; Tian, S.; Sun, Y.-S.; et al. Schur-decomposition for 3D matrix equations and its application in solving radiative discrete ordinates equations discretized by Chebyshev collocation spectral method. J. Comput. Phys. 2010, 229, 1198–1212. [Google Scholar] [CrossRef]
  30. Li, T.; Wang, Q.-W.; Duan, X.-F. Numerical algorithms for solving discrete Lyapunov tensor equation. J. Comput. Appl. Math. 2020, 370, 112676. [Google Scholar] [CrossRef]
  31. Li, T.; Wang, Q.-W.; Zhang, X.-F. Gradient based iterative methods for solving symmetric tensor equations. Numer. Linear Algebra Appl. 2022, 29. [Google Scholar] [CrossRef]
  32. Li, T.; Wang, Q.-W.; Zhang, X.-F. A modified conjugate residual method and nearest Kronecker product preconditioner for the generalized coupled Sylvester tensor equations. Mathematics 2022, 10, 1730. [Google Scholar] [CrossRef]
  33. Li, X.; Ng, M.K. Solving sparse non-negative tensor equations: Algorithms and applications. Front. Math. China 2015, 10, 649–680. [Google Scholar] [CrossRef]
  34. Li, Y.; Wei, M.; Zhang, F.; et al. Real structure-preserving algorithms of Householder based transformations for quaternion matrices. J. Comput. Appl. Math. 2016, 305, 82–91. [Google Scholar] [CrossRef]
  35. Liang, Y.; Silva, S.D.; Zhang, Y. The tensor rank problem over the quaternions. Linear Algebra Appl. 2021, 620, 37–60. [Google Scholar] [CrossRef]
  36. Lv, C.; Ma, C. A modified CG algorithm for solving generalized coupled Sylvester tensor equations. Appl. Math. Comput. 2020, 365, 124699. [Google Scholar] [CrossRef]
  37. Malek, A.; Momeni-Masuleh, S.H. A mixed collocation–finite difference method for 3D microscopic heat transport problems. J. Comput. Appl. Math. 2008, 217, 137–147. [Google Scholar] [CrossRef]
  38. Mehany, M.S.; Wang, Q.-W.; Liu, L. A System of Sylvester-like quaternion tensor equations with an application. Front. Math. 2024, 1–20. [Google Scholar] [CrossRef]
  39. Najafi-Kalyani, M.; Beik, F.P.A.; Jbilou, K. On global iterative schemes based on Hessenberg process for (ill-posed) Sylvester tensor equations. J. Comput. Appl. Math. 2020, 373, 112216. [Google Scholar] [CrossRef]
  40. Peng, Y.; Hu, X.; Zhang, L. An iteration method for the symmetric solutions and the optimal approximation solution of the matrix equation AXB=C. Appl. Math. Comput. 2005, 160, 763–777. [Google Scholar] [CrossRef]
  41. Qi, L. Eigenvalues of a real supersymmetric tensor. J. Symbolic Comput. 2005, 40, 1302–1324. [Google Scholar] [CrossRef]
  42. Qi, L. Symmetric nonnegative tensors and copositive tensors. Linear Algebra Appl. 2013, 439, 228–238. [Google Scholar] [CrossRef]
  43. Qi, L.; Chen, H.; Chen, Y. Tensor Eigenvalues and Their Applications; Springer, 2018.
  44. Qi, L.; Luo, Z. Tensor analysis: Spectral theory and special tensors; SIAM: Philadelphia, 2017. [Google Scholar]
  45. Saberi-Movahed, F.; Tajaddini, A.; Heyouni, M.; et al. Some iterative approaches for Sylvester tensor equations, Part I: A tensor format of truncated Loose Simpler GMRES. Appl. Numer. Math. 2022, 172, 428–445. [Google Scholar] [CrossRef]
  46. Saberi-Movahed, F.; Tajaddini, A.; Heyouni, M.; et al. Some iterative approaches for Sylvester tensor equations, Part II: A tensor format of Simpler variant of GCRO-based methods. Appl. Numer. Math. 2022, 172, 413–427. [Google Scholar] [CrossRef]
  47. Song, G.; Wang, Q.-W.; Yu, S. Cramer’s rule for a system of quaternion matrix equations with applications. Applied Math. Comput. 2018, 336, 490–499. [Google Scholar] [CrossRef]
  48. Wang, Q.-W.; He, Z.-H.; Zhang, Y. Constrained two-sided coupled Sylvester-type quaternion matrix equations. Automatica 2019, 101, 207–213. [Google Scholar] [CrossRef]
  49. Wang, Q.-W.; Xu, X.; Duan, X. Least squares solution of the quaternion Sylvester tensor equation. Linear Multilinear Algebra 2021, 69, 104–130. [Google Scholar] [CrossRef]
  50. Xie, M.; Wang, Q.-W. Reducible solution to a quaternion tensor equation. Front. Math. China 2020, 15, 1047–1070. [Google Scholar] [CrossRef]
  51. Xie, M.; Wang, Q.-W.; He, Z.-H.; et al. A system of Sylvester-type quaternion matrix equations with ten variables. Acta Math. Sin. (Engl. Ser.) 2022, 38, 1399–1420. [Google Scholar] [CrossRef]
  52. Zhang, F.; Wei, M.; Li, Y.; et al. Special least squares solutions of the quaternion matrix equation AXB+CXD=E. Comput. Math. Appl. 2016, 72, 1426–1435. [Google Scholar] [CrossRef]
  53. Zhang, X. A system of generalized Sylvester quaternion matrix equations and its applications. Appl. Math. Comput. 2016, 273, 74–81. [Google Scholar] [CrossRef]
  54. Zhang, X.-F.; Li, T.; Ou, Y.-G. Iterative solutions of generalized Sylvester quaternion tensor equations. Linear Multilinear Algebra 2024, 72, 1259–1278. [Google Scholar] [CrossRef]
  55. Zhang, X.-F.; Wang, Q.-W. Developing iterative algorithms to solve Sylvester tensor equations. Appl. Math. Comput. 2021, 409, 126403. [Google Scholar] [CrossRef]
  56. Zhang, X.-F.; Wang, Q.-W. On RGI Algorithms for Solving Sylvester Tensor Equations. Taiwan. J. Math. 2022, 1, 1–19. [Google Scholar] [CrossRef]
Figure 1. Convergence history of Example 1.
Figure 2. Convergence history of Example 2.
Figure 3. The restored color video for the case $r = 6$, $s = 6$ with slice No. 7.
Table 1. Notation used in the numerical examples.

IT: the number of iteration steps.
CPU time: the elapsed CPU time in seconds.
Res: the residual $\sum_{i=1}^{4} \langle \mathcal{R}_i(k), \mathcal{R}_i(k) \rangle$ at the $k$th iteration.
tenrand(n,n,n): the order three tensor with pseudo-random values drawn from a uniform distribution on the unit interval.
triu(hilb(n)): the upper triangular part of the $n \times n$ Hilbert matrix.
triu(ones(n,n)): the upper triangular part of the $n \times n$ matrix of all ones.
eye(n): the $n \times n$ identity matrix.
zeros(n): the $n \times n$ zero matrix.
tridiag(a,b,c,n): the $n \times n$ tridiagonal matrix with subdiagonal entries $a$, diagonal entries $b$, and superdiagonal entries $c$.
Table 2. Numerical results for Example 2.

n     IT    CPU time    Res
20    82    8.3272      7.7182e-06
40    156   24.8187     5.1233e-06
60    288   91.4387     7.6898e-06
Table 3. CPU time (IT) for Example 3 with the parameter setup in (19).

              p=10            p=25             p=30
Algorithm 1   25.6884(320)    140.8233(1270)   202.7709(1580)
CGLS [49]     15.5685(219)    144.3166(1266)   230.7340(1832)
Table 4. CPU time (IT) for Example 3 with the parameter setup in (20).

              p=10            p=15             p=20
Algorithm 1   45.4157(576)    104.4370(1095)   163.6110(1700)
CGLS [49]     44.0701(579)    118.2419(1296)   230.7978(2298)
Table 5. Relative errors of the solution (IT) for Example 3.

              $v_i^{(1)} = 10^{-3}$   $v_i^{(2)} = 1$    $v_i^{(3)} = 100$
Algorithm 1   5.6210e-09(248)         6.0527e-09(188)    4.1925e-09(248)
CGLS [49]     9.9300e-09(106)         9.7035e-09(178)    8.9552e-09(167)
Table 6. Numerical results for Example 4.

[r, s]   Algorithm 1 (PSNR/RE)   CGLS (PSNR/RE)
[3,3]    38.6181(0.0235)         13.8404(0.3769)
[6,6]    37.3721(0.0300)         14.0529(0.3694)
[8,8]    33.8073(0.0338)         14.7958(0.3551)