Preprint
Article

Some Matrix-variate Models Applicable in Different Areas

A peer-reviewed article of this preprint also exists.

This version is not peer-reviewed

Submitted:

11 September 2023

Posted:

12 September 2023


Abstract
Matrix-variate Gaussian-type and Wishart-type distributions in the real domain are widely used in the literature. A matrix-variate gamma-type or Wishart-type model becomes extremely difficult to handle when the exponential trace carries an arbitrary power and a factor involving a determinant enters the model. Evaluation of the normalizing constant in such a model is the most important part, because the method used in evaluating the normalizing constant supplies the relevant steps for all the computations involved in studying the properties of the model. One such model in the real domain, with a multiplicative factor involving a trace and with the exponential trace having an arbitrary power, is known in the literature as Kotz' model. No explicit evaluation of the normalizing constant seems to be available in the literature for the model in which a trace with an exponent and a determinant with an exponent enter as multiplicative factors while, at the same time, the exponential trace has an arbitrary exponent. The normalizing constant widely used in the literature and interpreted as the normalizing constant in this general model, referred to as a Kotz model, does not seem to be correct. The corresponding model in the complex domain, with the correct normalizing constant, does not seem to be available in the literature. One of the main contributions of this paper is the introduction of matrix-variate distributions in the complex domain of the Gaussian type, gamma type, type-1 beta type, and type-2 beta type when the exponential trace has an arbitrary power; all of these models are believed to be new. A second main contribution is the explicit evaluation of the normalizing constants, in the real and complex domains and especially in the complex domain, in a matrix-variate model involving a determinant and a trace as multiplicative factors where, at the same time, the exponential trace has an arbitrary power.
Another main contribution is the introduction of matrix-variate models with the exponential trace having an arbitrary exponent in the categories of type-1 beta, type-2 beta, and gamma distributions, or in the family of Mathai's pathway models [1], both in the real and complex domains. A further new contribution is the logistic-based extension of these models in the real and complex domains, with the exponential trace having an arbitrary exponent, connecting them to the extended zeta functions recently introduced by this author. Some properties of these models are indicated but not derived in detail, in order to limit the size of the paper. The techniques and steps used at the various stages of this paper will be highly useful to people working in multivariate statistical analysis, as well as to those applying such models in engineering problems, communication theory, quantum physics, and related areas, apart from statistical applications.
Keywords: 
Subject: Computer Science and Mathematics  -   Mathematics

1. Introduction

The following notation will be used in this paper. Real scalar variables, whether mathematical or random, will be denoted by lower-case letters such as $x, y$. Real vector/matrix variables, whether mathematical or random, square or rectangular, will be denoted by capital letters such as $X, Y$. Scalar constants will be denoted by lower-case letters such as $a, b$, and vector/matrix constants by $A, B$, etc. A tilde will designate variables in the complex domain, such as $\tilde{x}, \tilde{y}, \tilde{X}, \tilde{Y}$; no tilde will be used on constants. When Greek letters and other symbols appear, the notation will be explained then and there. Let $X = (x_{ij})$ be a $p \times q$ matrix whose elements are functionally independent (distinct) real scalar variables. Then the wedge product of differentials is defined as $dX = \wedge_{i=1}^{p}\wedge_{j=1}^{q}\,dx_{ij}$. When $x$ and $y$ are real scalars, the wedge product of their differentials is defined as $dx \wedge dy = -dy \wedge dx$, so that $dx \wedge dx = 0$ and $dy \wedge dy = 0$. For a square matrix $A$, the determinant will be denoted by $|A|$ or $\det(A)$. When $A$ is in the complex domain, the absolute value of the determinant, or modulus of the determinant, will be denoted by $|\det(A)|$: if $|A| = a + ib$, $i = \sqrt{-1}$, with $a, b$ real scalars, then $|\det(A)| = \sqrt{a^2 + b^2}$. If $\tilde{X}$ is in the complex domain, one can write $\tilde{X} = X_1 + iX_2$, $i = \sqrt{-1}$, with $X_1, X_2$ real, and the wedge product of differentials in $\tilde{X}$ is defined as $d\tilde{X} = dX_1 \wedge dX_2$. We will consider only real-valued scalar functions in this paper. $\int_X f(X)\,dX$ will denote the integral over $X$ of the real-valued scalar function $f(X)$ of $X$. When $f(X)$ is a real-valued scalar function of $X$, whether $X$ is scalar, vector, or matrix in the real or complex domain, and if $f(X) \ge 0$ for all $X$ and $\int_X f(X)\,dX = 1$, then $f(X)$ will be called a density or statistical density.
When a square matrix $X$ is positive definite it will be denoted by $X > O$, where $X = X'$, a prime denoting the transpose. The conjugate transpose of any matrix $\tilde{Y}$ in the complex domain will be written as $\tilde{Y}^{*}$. When a square matrix $\tilde{X}$ in the complex domain satisfies $\tilde{X} = \tilde{X}^{*}$, then $\tilde{X}$ is Hermitian; if $\tilde{X} = \tilde{X}^{*} > O$, then $\tilde{X}$ is called Hermitian positive definite. When $X > O$, the notation $\int_{A}^{B} f(X)\,dX$ means the integral of the real-valued scalar function $f(X)$ over the real positive definite matrix $X$ such that $X > O$, $A > O$, $B > O$, $X - A > O$, $B - X > O$ (all positive definite), where $A > O$ and $B > O$ are constant matrices; a similar notation and interpretation hold in the complex domain. In order to avoid a multiplicity of numbers, the following procedure will be used: for a function number or equation number in the complex domain corresponding to the same item in the real domain, a letter $c$ will be affixed to the function number and to the section part of the equation number. For example, $f_{1c}(\tilde{X})$ will correspond to $f_1(X)$ in the real domain, and equation number $(2c.5)$ will correspond to $(2.5)$ in the real domain. This notation will enable a reader to recognize a function or equation in the complex domain instantly from the subscript $c$. Other notations will be explained when they occur for the first time.
Matrix-variate statistical distributions are widely used in all types of disciplines, such as statistics, physics, communication theory, and engineering. A matrix-variate density in which a trace with an exponent enters as a multiplicative factor and in which the exponential trace has an arbitrary power, known as Kotz' model in the literature, is widely used in the analysis of data coming from various areas, such as multi-look return signals in radar and sonar; see, for example, [2] regarding the analysis of PolSAR (Polarimetric Synthetic Aperture Radar) data. Kotz' model is a generalization of the basic matrix-variate Gaussian model, or it can also be considered a generalization of the matrix-variate gamma model or Wishart model. When analyzing radar data, it is found that Gaussian-based models fit well when the surface is disturbance-free, but they are not appropriate in certain regions such as urban areas, sea surfaces, forests, etc.; see, for example, [3–6]. Hence, we will also consider some non-Gaussian or non-Wishart models in this paper, along with Gaussian-based models. In most applications in engineering areas, each scalar variable has two components, such as time and phase, so that a complex variable is very appropriate for representing such a scalar variable. Hence, distributions in the complex domain are found to be more important in applications in the physical sciences and engineering. When a statistical density is used in any applied problem, the computation of the normalizing constant is the most important step, because the computations involved in studying all sorts of properties of such a model naturally follow the format of the evaluation of the normalizing constant. An explicit evaluation of the normalizing constant in the general model, often also referred to as a Kotz model in the real domain, does not seem to be available in the literature.
The normalizing constant of the general model in the real domain appearing in [7], which the authors claim to have been available elsewhere in earlier literature, seems to be the one widely used in all the applications of Kotz' model in the real domain. Unfortunately, the normalizing constant quoted in [7] does not seem to be correct. A Kotz-type model in the complex domain does not seem to be available in the literature, nor does its normalizing constant. Hence, one of the aims of this paper is to give the derivation of the normalizing constant of the general model in detail, in the real and complex domains, and also to extend the ideas to Mathai's pathway family [1], namely the matrix-variate gamma, type-1 beta, and type-2 beta families of densities. Since the derivation of the normalizing constant is the most important step in the construction of any statistical model, various matrix-variate models are listed in this paper by showing the computation of the normalizing constant in each case. Some applications of Kotz' model in the real domain may be seen in [8–11].
This paper is organized as follows: Section 1 contains the introductory material. Section 2 gives the explicit evaluation of the normalizing constant in an extended matrix-variate gamma-type, Gaussian-type, Wishart-type, or Kotz-type model, both in the real and complex domains, and then deals with multivariate and matrix-variate extended Gaussian and gamma-type distributions. Section 3 examines extended matrix-variate type-2 beta models in the real and complex domains. Section 4 contains extended matrix-variate models of the type-1 beta kind in the real and complex domains. Throughout the paper, the results in the complex domain are listed side by side with the corresponding results in the real domain. Detailed derivations are given for the real domain cases only, since most of the steps in the complex domain are parallel to those in the real domain.

2. Evaluation of Some Matrix-variate Integrals and the Resulting Models

Let us start with an example of the evaluation of an integral in the real domain, which will show the different types of hurdles to overcome to reach the final result. Let $X = (x_{ij})$ be a $p \times q$, $p \le q$, matrix of rank $p$, where the $pq$ elements $x_{ij}$ are functionally independent (distinct) real scalar variables. Suppose that we wish to evaluate the following integral, where $f(X)$ is a real-valued scalar function of the $p \times q$ matrix $X$, and the integral over $X$ and the wedge product of differentials $dX$ are as explained in Section 1:
$$\int_X f(X)\,dX = c\int_X |A^{\frac{1}{2}}(X-M)B(X-M)'A^{\frac{1}{2}}|^{\gamma}\,[\mathrm{tr}(A^{\frac{1}{2}}(X-M)B(X-M)'A^{\frac{1}{2}})]^{\eta}\, e^{-\alpha[\mathrm{tr}(A^{\frac{1}{2}}(X-M)B(X-M)'A^{\frac{1}{2}})]^{\delta}}\,dX \tag{2.1}$$
where $\delta > 0$, $M = E[X]$, $\Re(\eta) > 0$, $\Re(\gamma) > -\frac{q}{2}+\frac{p-1}{2}$, $A > O$ is a $p \times p$ and $B > O$ a $q \times q$ positive definite constant matrix, $A^{\frac{1}{2}}$ is the positive definite square root of the positive definite matrix $A > O$, $E[\cdot]$ denotes the expected value of $[\cdot]$, and $\Re(\cdot)$ means the real part of $(\cdot)$. The first step here is to simplify the matrix $A^{\frac{1}{2}}(X-M)B(X-M)'A^{\frac{1}{2}}$ into a convenient form by making the transformation $Y = A^{\frac{1}{2}}(X-M)B^{\frac{1}{2}} \Rightarrow dY = |A|^{\frac{q}{2}}|B|^{\frac{p}{2}}\,dX$, from Lemma 2.1 given below, observing that $d(X-M) = dX$ since $M$ is a constant matrix. The corresponding integral in the complex domain is the following:
$$\int_{\tilde{X}} f_c(\tilde{X})\,d\tilde{X} = \tilde{c}\int_{\tilde{X}} |\det(A^{\frac{1}{2}}(\tilde{X}-\tilde{M})B(\tilde{X}-\tilde{M})^{*}A^{\frac{1}{2}})|^{\gamma}\,[\mathrm{tr}(A^{\frac{1}{2}}(\tilde{X}-\tilde{M})B(\tilde{X}-\tilde{M})^{*}A^{\frac{1}{2}})]^{\eta}\, e^{-\alpha[\mathrm{tr}(A^{\frac{1}{2}}(\tilde{X}-\tilde{M})B(\tilde{X}-\tilde{M})^{*}A^{\frac{1}{2}})]^{\delta}}\,d\tilde{X} \tag{2c.1}$$
where $A = A^{*} > O$, $B = B^{*} > O$ (both Hermitian positive definite), $A$ is $p \times p$, $B$ is $q \times q$, and $\tilde{M} = E[\tilde{X}]$. The transformation in the complex case is $\tilde{Y} = A^{\frac{1}{2}}(\tilde{X}-\tilde{M})B^{\frac{1}{2}} \Rightarrow d\tilde{Y} = |\det(A)|^{q}|\det(B)|^{p}\,d\tilde{X}$.
Lemma 2.1. 
Let the $m \times n$ matrix $X = (x_{ij})$ be in the real domain, where the $mn$ elements $x_{ij}$ are functionally independent (distinct) real scalar variables, and let $A$ be an $m \times m$ and $B$ an $n \times n$ nonsingular constant matrix. Then,
$$Y = AXB,\ |A| \neq 0,\ |B| \neq 0 \Rightarrow dY = |A|^{n}\,|B|^{m}\,dX.$$
Let the $m \times n$ matrix $\tilde{X}$ be in the complex domain and let $A$ and $B$ be $m \times m$ and $n \times n$ nonsingular constant matrices, respectively, in the real or complex domain. Then,
$$\tilde{Y} = A\tilde{X}B,\ |A| \neq 0,\ |B| \neq 0 \Rightarrow d\tilde{Y} = |\det(A)|^{2n}\,|\det(B)|^{2m}\,d\tilde{X} = |\det(AA^{*})|^{n}\,|\det(B^{*}B)|^{m}\,d\tilde{X}$$
where | det ( · ) | denotes the absolute value of the determinant of ( · ) .
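Lemma 2.1 can be spot-checked numerically (this sketch is not part of the original development): $Y = AXB$ acts linearly on the $mn$ free elements as $\mathrm{vec}(Y) = (B' \otimes A)\,\mathrm{vec}(X)$, so the Jacobian determinant of the transformation is $\det(B' \otimes A) = \det(A)^{n}\det(B)^{m}$. A minimal check with randomly generated matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4
A = rng.standard_normal((m, m))   # m x m, nonsingular almost surely
B = rng.standard_normal((n, n))   # n x n, nonsingular almost surely

# vec(AXB) = (B' kron A) vec(X), with column-major vec
J = np.kron(B.T, A)               # matrix of the linear map on the mn elements
lhs = abs(np.linalg.det(J))       # absolute Jacobian determinant
rhs = abs(np.linalg.det(A))**n * abs(np.linalg.det(B))**m
assert np.isclose(lhs, rhs)
```

The Kronecker determinant identity used here is exactly the content of the real part of the lemma; the complex case doubles the exponents because each complex element carries two real coordinates.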
The proof of Lemma 2.1, and of the other lemmas to follow, may be seen in [12]. When a $p \times p$ matrix $X$ is symmetric, $X = X'$, we have a companion result to Lemma 2.1, which will be stated next.
Lemma 2.2. 
Let $X = X'$ be a symmetric $m \times m$ matrix and let $A$ be an $m \times m$ nonsingular constant matrix. Then,
$$Y = AXA',\ |A| \neq 0 \Rightarrow dY = |A|^{m+1}\,dX$$
and when an $m \times m$ matrix $\tilde{X} = \tilde{X}^{*}$ in the complex domain is Hermitian and $A$ is an $m \times m$ nonsingular constant matrix in the real or complex domain, then
$$\tilde{Y} = A\tilde{X}A^{*},\ |A| \neq 0 \Rightarrow d\tilde{Y} = |\det(A)|^{2m}\,d\tilde{X} = |\det(AA^{*})|^{m}\,d\tilde{X}.$$
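The symmetric case of Lemma 2.2 can be verified in the same spirit (again only a sanity check): represent $X \mapsto AXA'$ as a linear map on the $m(m+1)/2$ distinct entries $x_{ij}$, $i \ge j$, and compute the determinant of that map; it should equal $|A|^{m+1}$ in absolute value. A small sketch with an arbitrary seed:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 3
A = rng.standard_normal((m, m))

# coordinates: the distinct entries s_ij, i >= j, of a symmetric matrix
idx = [(i, j) for i in range(m) for j in range(i + 1)]

def sym_coords(S):
    return np.array([S[i, j] for (i, j) in idx])

# build the matrix of the linear map X -> A X A' in these coordinates
J = np.zeros((len(idx), len(idx)))
for col, (i, j) in enumerate(idx):
    E = np.zeros((m, m))
    E[i, j] = E[j, i] = 1.0          # symmetric basis element
    J[:, col] = sym_coords(A @ E @ A.T)

lhs = abs(np.linalg.det(J))
rhs = abs(np.linalg.det(A))**(m + 1)
assert np.isclose(lhs, rhs)
```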
Now, under Lemma 2.1, (2.1) reduces to an integral over $Y$. Let us denote the integrand by $f_1(Y)$. Then,
$$\int_Y f_1(Y)\,dY = c\,|A|^{-\frac{q}{2}}|B|^{-\frac{p}{2}}\int_Y |YY'|^{\gamma}\,[\mathrm{tr}(YY')]^{\eta}\,e^{-\alpha[\mathrm{tr}(YY')]^{\delta}}\,dY. \tag{2.2}$$
The corresponding integral in the complex case is the following:
$$\int_{\tilde{Y}} f_{1c}(\tilde{Y})\,d\tilde{Y} = \tilde{c}\,|\det(A)|^{-q}|\det(B)|^{-p}\int_{\tilde{Y}} |\det(\tilde{Y}\tilde{Y}^{*})|^{\gamma}\,[\mathrm{tr}(\tilde{Y}\tilde{Y}^{*})]^{\eta}\,e^{-\alpha[\mathrm{tr}(\tilde{Y}\tilde{Y}^{*})]^{\delta}}\,d\tilde{Y}. \tag{2c.2}$$
The function $f_1(Y)$ in the real domain with $\gamma = 0$, $\eta \neq 0$, $\delta \neq 1$ is often referred to as Kotz' model by most of the authors who use such a model. When the exponent of the determinant $\gamma \neq 0$, the evaluation of the integral over $f_1(Y)$ is very difficult, as will be seen from the computations to follow. When $\gamma \neq 0$, and in the real domain, [7] also calls the model Kotz' model, but the normalizing constant given there, and claimed to be available in earlier literature, does not seem to be correct. The correct normalizing constant and its evaluation in the real and complex domains will be given in detail below. Since $f_1(Y)$ involves a determinant and a trace, where the determinant is a product of eigenvalues and the trace is a sum, that is, two elementary symmetric functions, a transformation involving elementary symmetric functions would allow one to handle the determinant and trace together; this author does not know of any such transformation. Going through the eigenvalues does not seem to be a good option because the Jacobian element involves a Vandermonde determinant and is not very convenient to handle. The next possibility is triangularization, and in this case also the determinant becomes a product of scalar variables and the trace a sum. One can then use a general polar coordinate transformation so that the trace becomes a single variable, namely the radial variable $r$, and the factors in $r$ and the product of sines and cosines separate. Hence, this approach is a convenient one. Continuing with the evaluation of (2.2) in the real case, we have the following situations: if $\delta = 1$ and $\eta = 0$, or if $\gamma = 0$, one would immediately convert $dY$ into $dS$, $S = YY'$, and integrate by using a real matrix-variate gamma integral in the case $\eta = 0$, $\delta = 1$, or by using a scalar-variable gamma integral in the case $\gamma = 0$. This conversion can be done with the help of Lemma 2.3 given below.
Lemma 2.3. 
Let the $m \times n$, $m \le n$, matrix $X$ of rank $m$ be in the real domain with $mn$ distinct elements $x_{ij}$. Let the $m \times m$ positive definite matrix be denoted by $S = XX'$. Then, going through a transformation involving a lower triangular matrix with positive diagonal elements and a semi-orthonormal matrix, and after integrating out the differential element corresponding to the semi-orthonormal matrix, we have the following connection between $dX$ and $dS$; see the details in [12]:
$$dX = \frac{\pi^{\frac{mn}{2}}}{\Gamma_m(\frac{n}{2})}\,|S|^{\frac{n}{2}-\frac{m+1}{2}}\,dS$$
where, for example, $\Gamma_m(\alpha)$ is the real matrix-variate gamma function given by
$$\Gamma_m(\alpha) = \pi^{\frac{m(m-1)}{4}}\,\Gamma(\alpha)\,\Gamma\Big(\alpha-\frac{1}{2}\Big)\cdots\Gamma\Big(\alpha-\frac{m-1}{2}\Big),\ \Re(\alpha) > \frac{m-1}{2},\qquad \Gamma_m(\alpha) = \int_{Z>O}|Z|^{\alpha-\frac{m+1}{2}}\,e^{-\mathrm{tr}(Z)}\,dZ,\ \Re(\alpha) > \frac{m-1}{2},$$
where $\mathrm{tr}(\cdot)$ denotes the trace of the square matrix $(\cdot)$. Since $\Gamma_m(\alpha)$ is associated with the above real matrix-variate gamma integral, we call $\Gamma_m(\alpha)$ a real matrix-variate gamma function; this $\Gamma_m(\alpha)$ is also known by different names in the literature. When the $m \times n$, $m \le n$, matrix $\tilde{X}$ of rank $m$, with distinct elements, is in the complex domain, and letting $\tilde{S} = \tilde{X}\tilde{X}^{*}$, which is $m \times m$ and Hermitian positive definite, then, going through a transformation involving a lower triangular matrix with real and positive diagonal elements and a semi-unitary matrix, we can establish the following connection between $d\tilde{X}$ and $d\tilde{S}$ [12]:
$$d\tilde{X} = \frac{\pi^{mn}}{\tilde{\Gamma}_m(n)}\,|\det(\tilde{S})|^{\,n-m}\,d\tilde{S}$$
where, for example, $\tilde{\Gamma}_m(\alpha)$ is the complex matrix-variate gamma function given by
$$\tilde{\Gamma}_m(\alpha) = \pi^{\frac{m(m-1)}{2}}\,\Gamma(\alpha)\,\Gamma(\alpha-1)\cdots\Gamma(\alpha-m+1),\ \Re(\alpha) > m-1,\qquad \tilde{\Gamma}_m(\alpha) = \int_{\tilde{Z}>O}|\det(\tilde{Z})|^{\alpha-m}\,e^{-\mathrm{tr}(\tilde{Z})}\,d\tilde{Z},\ \Re(\alpha) > m-1.$$
We call $\tilde{\Gamma}_m(\alpha)$ the complex matrix-variate gamma function because it is associated with a matrix-variate gamma integral in the complex domain.
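For numerical work, the product form of $\Gamma_m(\alpha)$ above is directly available in SciPy, which exposes its logarithm as `scipy.special.multigammaln`. A quick consistency check of the product formula against SciPy, with illustrative values only:

```python
import numpy as np
from scipy.special import multigammaln, gammaln

def log_gamma_m(alpha, m):
    # log of the real matrix-variate gamma, from the product formula:
    # pi^{m(m-1)/4} * Gamma(alpha) * Gamma(alpha - 1/2) * ... * Gamma(alpha - (m-1)/2)
    return (m * (m - 1) / 4.0) * np.log(np.pi) \
        + sum(gammaln(alpha - j / 2.0) for j in range(m))

m, alpha = 4, 3.7           # requires alpha > (m - 1)/2
assert np.isclose(log_gamma_m(alpha, m), multigammaln(alpha, m))
```

No standard SciPy routine exists for the complex matrix-variate gamma $\tilde{\Gamma}_m(\alpha)$, but its product form is equally easy to code directly.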
But in our (2.2), both the determinant and the trace enter as multiplicative factors, and there is an exponent $\delta > 0$ on the exponential trace. In order to tackle this situation, we will convert $dY$ to $dT$, where $T$ is a lower triangular matrix, by using Theorem 2.14 of [12], which is restated here as a lemma. The idea is that in this case $|YY'| = |TT'|$ becomes a product of the squares of the diagonal elements of $T$ only, and $\mathrm{tr}(TT')$ is a sum of squares as well. This conversion can also be achieved by converting $dS$ of Lemma 2.3 to $dT$ by using another result, where $T$ is lower triangular.
Lemma 2.4.
Let $X$ be an $m \times n$, $m \le n$, matrix of rank $m$ with functionally independent $mn$ real scalar variables as elements. Let $T$ be a lower triangular matrix and let $U_1$ be a semi-orthonormal matrix, $U_1U_1' = I_m$. Consider the transformation $X = TU_1$, where both $T$ and $U_1$ are uniquely chosen, for example with the diagonal elements positive in $T$ and the first column elements positive in $U_1$. Then, after integrating out the differential element corresponding to the semi-orthonormal matrix $U_1$, one has the following connection between $dX$ and $dT$ [12]:
$$X = TU_1 \Rightarrow dX = \frac{\pi^{\frac{mn}{2}}}{\Gamma_m(\frac{n}{2})}\Big\{\prod_{j=1}^{m}|t_{jj}|^{\,n-j}\Big\}\,dT$$
and, in the complex case, let $\tilde{X}$ be an $m \times n$, $m \le n$, matrix of rank $m$ with $mn$ distinct elements in the complex domain. Let $\tilde{T}$ be a lower triangular matrix in the complex domain with real and positive diagonal elements and let $\tilde{U}_1$ be a semi-unitary matrix, $\tilde{U}_1\tilde{U}_1^{*} = I_m$, where $\tilde{T}$ and $\tilde{U}_1$ are uniquely chosen. Then, after integrating out the differential element corresponding to $\tilde{U}_1$, one has the following connection between $d\tilde{X}$ and $d\tilde{T}$:
$$\tilde{X} = \tilde{T}\tilde{U}_1 \Rightarrow d\tilde{X} = \frac{\pi^{mn}}{\tilde{\Gamma}_m(n)}\Big\{\prod_{j=1}^{m} t_{jj}^{\,2(n-j)+1}\Big\}\,d\tilde{T}.$$
Let us consider the evaluation of (2.2) in the real case first. Converting $dY$ in (2.2) to $dT$ by using Lemma 2.4, the integral part of (2.2) over $Y$ becomes the following, denoted by $f_2(T)$:
$$\int_T f_2(T)\,dT = c\,|A|^{-\frac{q}{2}}|B|^{-\frac{p}{2}}\,\frac{\pi^{\frac{pq}{2}}}{\Gamma_p(\frac{q}{2})}\int_T |TT'|^{\gamma}\,[\mathrm{tr}(TT')]^{\eta}\, e^{-\alpha[\mathrm{tr}(TT')]^{\delta}}\Big\{\prod_{j=1}^{p}|t_{jj}|^{\,q-j}\Big\}\,dT. \tag{2.3}$$
The corresponding equation in the complex domain is the following:
$$\int_{\tilde{T}} f_{2c}(\tilde{T})\,d\tilde{T} = \tilde{c}\,|\det(A)|^{-q}|\det(B)|^{-p}\,\frac{\pi^{pq}}{\tilde{\Gamma}_p(q)}\int_{\tilde{T}} |\det(\tilde{T}\tilde{T}^{*})|^{\gamma}\,[\mathrm{tr}(\tilde{T}\tilde{T}^{*})]^{\eta}\, e^{-\alpha[\mathrm{tr}(\tilde{T}\tilde{T}^{*})]^{\delta}}\Big\{\prod_{j=1}^{p} t_{jj}^{\,2(q-j)+1}\Big\}\,d\tilde{T}. \tag{2c.3}$$
Note that, in the real case
$$|TT'| = \prod_{j=1}^{p} t_{jj}^{2},\qquad \mathrm{tr}(TT') = \sum_{j=1}^{p} t_{jj}^{2} + \sum_{i>j} t_{ij}^{2},$$
where the sum $\sum_{j=1}^{p} t_{jj}^{2}$ has $p$ terms and the second sum has $p(p-1)/2$ terms, for a total of $k = p(p+1)/2$ terms. The corresponding quantities in the complex domain are the following:
$$|\det(\tilde{T}\tilde{T}^{*})| = \prod_{j=1}^{p} t_{jj}^{2},\qquad \mathrm{tr}(\tilde{T}\tilde{T}^{*}) = \sum_{j=1}^{p} t_{jj}^{2} + \sum_{j>k} |\tilde{t}_{jk}|^{2},\quad |\tilde{t}_{jk}|^{2} = t_{jk1}^{2} + t_{jk2}^{2},$$
where $\tilde{t}_{jk} = t_{jk1} + i\,t_{jk2}$, $i = \sqrt{-1}$, with $t_{jk1}, t_{jk2}$ real; in the first sum there are $p$ square terms, and in the sum $\sum_{j>k}|\tilde{t}_{jk}|^{2}$ there are $2[\frac{p(p-1)}{2}] = p(p-1)$ square terms, for a total of $p^2$ square terms in the complex case.
Let us consider a polar coordinate transformation in the real case on all $k = p(p+1)/2$ terms, using the transformation on page 44 of [12], which is restated here for convenience, namely $\{t_{11}, t_{22}, \ldots, t_{pp}, t_{21}, \ldots, t_{p,p-1}\} \to \{r, \theta_1, \ldots, \theta_{k-1}\}$, $k = p(p+1)/2$:
$$\begin{aligned} t_{11} &= r\sin\theta_1\\ t_{22} &= r\cos\theta_1\sin\theta_2\\ &\ \ \vdots\\ t_{pp} &= r\cos\theta_1\cdots\cos\theta_{p-1}\sin\theta_p\\ t_{21} &= r\cos\theta_1\cdots\cos\theta_p\sin\theta_{p+1}\\ &\ \ \vdots\\ t_{p,p-1} &= r\cos\theta_1\cdots\cos\theta_{k-1} \end{aligned}\tag{2.4}$$
for $-\frac{\pi}{2} < \theta_j \le \frac{\pi}{2}$, $j = 1, \ldots, k-2$, and $-\pi < \theta_{k-1} \le \pi$, where $k = p(p+1)/2$ in the real case and $k = p^2$ in the complex case. The structure of the polar coordinate transformation in the complex case remains as in the real case; we will denote it by (2c.4), the only change being that $k = p(p+1)/2$ in the real case and $k = p^2$ in the complex case. The Jacobian of the transformation in the real case is
$$dt_{11}\wedge\cdots\wedge dt_{p,p-1} = r^{k-1}\Big\{\prod_{j=1}^{k-1}|\cos\theta_j|^{\,k-1-j}\Big\}\,dr\wedge d\theta_1\wedge\cdots\wedge d\theta_{k-1},\quad k = p(p+1)/2,$$
and in the complex case the Jacobian is given by
$$dt_{11}\wedge\cdots\wedge dt_{pp}\wedge\cdots\wedge dt_{p,p-1,2} = r^{p^2-1}\Big\{\prod_{j=1}^{p^2-1}|\cos\theta_j|^{\,p^2-j-1}\Big\}\,dr\wedge d\theta_1\wedge\cdots\wedge d\theta_{p^2-1}$$
for the same ranges of the $\theta_j$'s as in the real case, but with $k = p^2$ in the complex case.
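The polar-coordinate Jacobian quoted above can be spot-checked numerically for a small case (this check is not part of the original derivation). For $p = 2$ in the real domain, $k = 3$, the transformation maps $(r, \theta_1, \theta_2)$ to $(t_{11}, t_{22}, t_{21})$ and the Jacobian factor should be $r^{2}|\cos\theta_1|$. A finite-difference sketch at an arbitrarily chosen point:

```python
import numpy as np

# polar transformation for k = 3 (p = 2, real case): (r, th1, th2) -> (t11, t22, t21)
def polar(v):
    r, t1, t2 = v
    return np.array([r * np.sin(t1),
                     r * np.cos(t1) * np.sin(t2),
                     r * np.cos(t1) * np.cos(t2)])

def num_jacobian(f, v, h=1e-6):
    # central finite differences; column j holds d f / d v_j
    n = len(v)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (f(v + e) - f(v - e)) / (2 * h)
    return J

v = np.array([1.7, 0.4, -1.1])              # generic point (r, theta1, theta2)
lhs = abs(np.linalg.det(num_jacobian(polar, v)))
rhs = v[0]**2 * abs(np.cos(v[1]))           # r^{k-1} |cos th1|^{k-2}, k = 3
assert np.isclose(lhs, rhs, rtol=1e-4)
```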
The normalizing constant $c$ in the real case coming from (2.3) is quoted in [7], citing earlier works, but none of these seems to have given an explicit evaluation of the integral in (2.3). The normalizing constant $c$ given in [7] does not seem to be correct. Since the integral in (2.3) appears in very many places as Kotz' integral, and is used in many disciplines, a detailed evaluation of the integral in (2.3) is warranted. Also, no one seems to have given $\tilde{c}$ in the complex case. Hence, the evaluations of $c$ and $\tilde{c}$ in the real and complex cases will be given here in detail.
2.1, 2c.1. Evaluation of the integral in (2.3) in the real case and in (2c.3) in the complex case
Note that $\sum_{i \ge j} t_{ij}^{2} = r^{2}$. From the Jacobian part, the factor containing $r$ is $r^{k-1} = (r^{2})^{\frac{k}{2}-\frac{1}{2}}$. In the product $\prod_{j=1}^{p} t_{jj}^{2\gamma}$ coming from $|TT'|^{\gamma}$, each $t_{jj}^{2}$ contains an $r^{2}$. Also, the Jacobian part gives $\prod_{j=1}^{p}|t_{jj}|^{\,q-j} = \prod_{j=1}^{p}(t_{jj}^{2})^{\frac{q}{2}-\frac{j}{2}}$. Collecting all the factors of $r$, the exponent of $r^{2}$ in the real case is the following:
$$\Big(\gamma+\frac{q}{2}-\frac{1}{2}\Big)+\Big(\gamma+\frac{q}{2}-\frac{2}{2}\Big)+\cdots+\Big(\gamma+\frac{q}{2}-\frac{p}{2}\Big)+\eta+\frac{p(p+1)}{4}-\frac{1}{2} = p\Big(\gamma+\frac{q}{2}\Big)+\eta-\frac{1}{2}.$$
Then, integration over $r$ gives the following:
$$\int_0^{\infty}(r^{2})^{p(\gamma+\frac{q}{2})+\eta-\frac{1}{2}}\,e^{-\alpha(r^{2})^{\delta}}\,dr = \frac{1}{2\delta}\,\frac{\Gamma[\frac{1}{\delta}(p(\gamma+\frac{q}{2})+\eta)]}{\alpha^{\frac{1}{\delta}(p(\gamma+\frac{q}{2})+\eta)}}$$
for $\Re(\gamma) > -\frac{q}{2}$, $\Re(\eta) > 0$, $\alpha > 0$, $\delta > 0$. The corresponding integral over $r$ in the complex domain is the following:
$$\int_0^{\infty}(r^{2})^{p(\gamma+q)+\eta-\frac{1}{2}}\,e^{-\alpha(r^{2})^{\delta}}\,dr = \frac{1}{2\delta}\,\frac{\Gamma[\frac{1}{\delta}(p(\gamma+q)+\eta)]}{\alpha^{\frac{1}{\delta}(p(\gamma+q)+\eta)}}$$
for $\Re(\gamma) > -q$, $\delta > 0$, $\alpha > 0$, $\Re(\eta) > 0$.
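These radial integrals are one-dimensional and easy to confirm by quadrature: with $\beta$ standing for $p(\gamma+\frac{q}{2})+\eta$ in the real case (or $p(\gamma+q)+\eta$ in the complex case), the formula reads $\int_0^{\infty} r^{2\beta-1}\,e^{-\alpha r^{2\delta}}\,dr = \Gamma(\beta/\delta)/(2\delta\,\alpha^{\beta/\delta})$. A sketch with illustrative parameter values:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# illustrative values only; beta plays the role of p(gamma + q/2) + eta
alpha, delta, beta = 1.3, 0.8, 2.5

num, _ = quad(lambda r: r**(2*beta - 1) * np.exp(-alpha * r**(2*delta)), 0, np.inf)
closed = gamma(beta / delta) / (2 * delta * alpha**(beta / delta))
assert np.isclose(num, closed, rtol=1e-6)
```

The substitution $u = \alpha r^{2\delta}$ turns the integral into an ordinary gamma integral, which is exactly the step used in the text.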
2.2 Evaluation of the sine and cosine product in the real case
Consider the integration of the factors containing the $\theta_j$'s in the real case. These $\theta_j$'s come from $\prod_{j=1}^{p} t_{jj}^{2}$ (raised to the various exponents) and from the Jacobian part. Consider $\theta_1$. The exponent of $\sin^2\theta_1$ is $\gamma+\frac{q}{2}-\frac{1}{2}$. The exponent of $\cos^2\theta_1$ coming from $t_{22}, \ldots, t_{pp}$ is $(\gamma+\frac{q}{2}-\frac{2}{2})+(\gamma+\frac{q}{2}-\frac{3}{2})+\cdots+(\gamma+\frac{q}{2}-\frac{p}{2}) = (p-1)(\gamma+\frac{q}{2})-\frac{p(p+1)}{4}+\frac{1}{2}$, and the part coming from the Jacobian is $|\cos\theta_1|^{k-1-1}$. Note that $|\cos\theta_1|^{k-2} = (\cos^2\theta_1)^{\frac{p(p+1)}{4}-\frac{3}{2}}|\cos\theta_1|$. Then the integral over $\theta_1$, denoted $I_{\theta_1}$, gives the following, where in all the integrations over the $\theta_j$'s to follow we will use the substitutions $x = \sin\theta_j$, $u = x^2$:
$$\begin{aligned} I_{\theta_1} &= \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}(\sin^2\theta_1)^{\gamma+\frac{q}{2}-\frac{1}{2}}(\cos^2\theta_1)^{(p-1)(\gamma+\frac{q}{2})-1}|\cos\theta_1|\,d\theta_1\\ &= 2\int_0^{\frac{\pi}{2}}(\sin^2\theta_1)^{\gamma+\frac{q}{2}-\frac{1}{2}}(\cos^2\theta_1)^{(p-1)(\gamma+\frac{q}{2})-1}|\cos\theta_1|\,d\theta_1\\ &= 2\int_0^{1}(x^2)^{\gamma+\frac{q}{2}-\frac{1}{2}}(1-x^2)^{(p-1)(\gamma+\frac{q}{2})-1}\,dx = \int_0^{1} u^{\gamma+\frac{q}{2}-1}(1-u)^{(p-1)(\gamma+\frac{q}{2})-1}\,du\\ &= \frac{\Gamma(\gamma+\frac{q}{2})\,\Gamma[(p-1)(\gamma+\frac{q}{2})]}{\Gamma[p(\gamma+\frac{q}{2})]},\quad \Re(\gamma) > -\frac{q}{2}. \end{aligned}\tag{i}$$
Now, collecting all factors containing θ 2 and proceeding as in the case of θ 1 we have the following result for the integral over θ 2 :
$$I_{\theta_2} = \frac{\Gamma(\gamma+\frac{q}{2}-\frac{1}{2})\,\Gamma[(p-2)(\gamma+\frac{q}{2})+\frac{1}{2}]}{\Gamma[(p-1)(\gamma+\frac{q}{2})]},\quad \Re(\gamma) > -\frac{q}{2}+\frac{1}{2}.\tag{ii}$$
Note that the denominator gamma in (ii) cancels with one numerator gamma in (i). This pattern, of the denominator gamma at one step canceling with a numerator gamma of the previous step, will continue, leaving at each step only one factor in the numerator and none in the denominator, except at the very first step involving (i) and (ii), where the first denominator gamma, namely $\Gamma(p(\gamma+\frac{q}{2}))$, is left over. When integrating over $\theta_{p-1}$, we have the following:
$$I_{\theta_{p-1}} = \frac{\Gamma(\gamma+\frac{q}{2}-\frac{p-2}{2})\,\Gamma\big((\gamma+\frac{q}{2})-\frac{p}{2}-\frac{p-1}{2}+\frac{p(p+1)}{4}\big)}{\Gamma\big(2(\gamma+\frac{q}{2})-\frac{p-2}{2}-\frac{p-1}{2}-\frac{p}{2}+\frac{p(p+1)}{4}\big)},\quad \Re(\gamma) > -\frac{q}{2}+\frac{p-2}{2}.\tag{iii}$$
Note that when considering $\theta_p$ there is no cosine factor coming from $t_{pp}$; the cosine factor comes only from the Jacobian part. We can see that
$$I_{\theta_p} = \frac{\Gamma(\gamma+\frac{q}{2}-\frac{p-1}{2})\,\Gamma(\frac{p(p+1)}{4}-\frac{p}{2})}{\Gamma\big((\gamma+\frac{q}{2})-\frac{p}{2}-\frac{p-1}{2}+\frac{p(p+1)}{4}\big)},\quad \Re(\gamma) > -\frac{q}{2}+\frac{p-1}{2}.\tag{iv}$$
Again, the denominator gamma in (iv) cancels with one numerator gamma in (iii). This pattern will continue for $j = p+1, p+2, \ldots$, where for the integrals over $\theta_j$, $j = p+1, p+2, \ldots$, the only contribution is from the Jacobian part; no sine factor is present. Consider $\theta_{p+1}$. We see that
$$I_{\theta_{p+1}} = \frac{\Gamma(\frac{1}{2})\,\Gamma(\frac{p(p+1)}{4}-\frac{p+1}{2})}{\Gamma(\frac{p(p+1)}{4}-\frac{p}{2})},\quad p > 2.$$
Again, the cancellation holds. Now, consider the last few values of $j$. For $j = \frac{p(p+1)}{2}-3 = k-3$ we have
$$I_{\theta_{k-3}} = \frac{\Gamma(\frac{1}{2})\,\Gamma(\frac{3}{2})}{\Gamma(2)}$$
and for $j = k-2$,
$$I_{\theta_{k-2}} = \frac{\Gamma(\frac{1}{2})\,\Gamma(1)}{\Gamma(\frac{3}{2})},$$
and the last $\theta_j$ ranges from $-\pi$ to $\pi$ with no contribution from the Jacobian part, and hence
$$I_{\theta_{k-1}} = 2\pi.$$
Note that, from $j = p+1$ to $j = k-2$, the gamma factor left in the numerator at each step is $\Gamma(\frac{1}{2}) = \sqrt{\pi}$; there are $\frac{p(p-1)}{2}-2$ such factors, and the last integral gives $2\pi$, so that the product of these is $2\pi^{\frac{p(p-1)}{4}}$. For $j = 1, \ldots, p$, the factors left in the numerator are $\Gamma(\gamma+\frac{q}{2})\Gamma(\gamma+\frac{q}{2}-\frac{1}{2})\cdots\Gamma(\gamma+\frac{q}{2}-\frac{p-1}{2})$, which together with the factor $\pi^{\frac{p(p-1)}{4}}$ from $j = p+1, \ldots, k-1$ gives $\Gamma_p(\gamma+\frac{q}{2})$, $\Re(\gamma) > -\frac{q}{2}+\frac{p-1}{2}$. For $j = 1$, one gamma is left in the denominator, namely $\Gamma(p(\gamma+\frac{q}{2}))$. Hence the product of the integrals over all the $\theta_j$'s in the real case is
$$2\,\frac{\Gamma_p(\gamma+\frac{q}{2})}{\Gamma(p(\gamma+\frac{q}{2}))}$$
where $\Gamma_p(\cdot)$ is the real matrix-variate gamma function defined in Lemma 2.3.
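For $p = 2$ the sine and cosine product above reduces to a two-dimensional integral over $\theta_1 \in (-\frac{\pi}{2}, \frac{\pi}{2}]$ and $\theta_2 \in (-\pi, \pi]$, which can be checked by numerical quadrature against $2\Gamma_2(\gamma+\frac{q}{2})/\Gamma(2(\gamma+\frac{q}{2}))$, with $\Gamma_2(\alpha) = \sqrt{\pi}\,\Gamma(\alpha)\Gamma(\alpha-\frac{1}{2})$. A sketch with illustrative parameter values:

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import gamma

p, q, g = 2, 3, 1.4            # g plays the role of gamma; needs g > -q/2 + (p-1)/2
a = g + q / 2.0

def integrand(t2, t1):         # dblquad expects f(y, x) with x the outer variable
    return (np.sin(t1)**2)**(a - 0.5) * (np.cos(t1)**2)**(a - 1.0) \
        * abs(np.cos(t1)) * (np.sin(t2)**2)**(a - 1.0)

num, _ = dblquad(integrand, -np.pi/2, np.pi/2,
                 lambda x: -np.pi, lambda x: np.pi)
gamma2 = np.pi**0.5 * gamma(a) * gamma(a - 0.5)   # real matrix-variate gamma, p = 2
closed = 2.0 * gamma2 / gamma(2.0 * a)
assert np.isclose(num, closed, rtol=1e-5)
```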
2c.2. Evaluation of the integral over the $\theta_j$'s in the complex case
The sine and cosine functions come from the transformations corresponding to $t_{11}, \ldots, t_{pp}$, from the Jacobian when going from $\tilde{X}$ to $\tilde{T}$, and from the Jacobian in the polar coordinate transformation. The Jacobian part in the polar coordinate transformation is
$$\prod_{j=1}^{p^2-1}|\cos\theta_j|^{\,p^2-j-1} = \prod_{j=1}^{p^2-1}(\cos^2\theta_j)^{\frac{p^2}{2}-\frac{j}{2}-1}|\cos\theta_j|,\qquad |\det(\tilde{T}\tilde{T}^{*})|^{\gamma}\Big[\prod_{j=1}^{p} t_{jj}^{\,2(q-j)+1}\Big] = \prod_{j=1}^{p}(t_{jj}^{2})^{\gamma+q-j+\frac{1}{2}}.$$
Collecting the factors containing $\theta_1$, observe that $\sin\theta_1$ comes from $t_{11}$ while $\cos\theta_1$ comes from $t_{22}, \ldots, t_{pp}$ and from the Jacobian part. The exponent of $\sin^2\theta_1$ is $\gamma+q-\frac{1}{2}$, and the exponent of $\cos^2\theta_1$ is $(p-1)(\gamma+q+\frac{1}{2})-\frac{p(p+1)}{2}+1+\frac{p^2}{2}-\frac{1}{2}-1 = (p-1)(\gamma+q)-1$. In all the integrals to follow, $\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}(\cdot)\,d\theta = 2\int_0^{\frac{\pi}{2}}(\cdot)\,d\theta$ due to the evenness of the integrand; then we use the substitutions $x = \sin\theta$, $u = x^2$, with steps parallel to those in the real case. Therefore
$$\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}(\sin^2\theta_1)^{\gamma+q-\frac{1}{2}}(\cos^2\theta_1)^{(p-1)(\gamma+q)-1}|\cos\theta_1|\,d\theta_1 = \frac{\Gamma(\gamma+q)\,\Gamma((p-1)(\gamma+q))}{\Gamma(p(\gamma+q))},$$
for $\Re(\gamma) > -q$. Collecting the factors containing $\theta_2$, we note that the exponent of $\sin^2\theta_2$ is $\gamma+q-2+\frac{1}{2}$ and the exponent of $\cos^2\theta_2$ is $(p-2)(\gamma+q+\frac{1}{2})-\frac{p(p+1)}{2}+1+2+\frac{p^2}{2}-2 = (p-2)(\gamma+q)$. Hence,
$$2\int_0^{\frac{\pi}{2}}(\sin^2\theta_2)^{(\gamma+q)-\frac{3}{2}}(\cos^2\theta_2)^{(p-2)(\gamma+q)+1-1}|\cos\theta_2|\,d\theta_2 = \frac{\Gamma(\gamma+q-1)\,\Gamma((p-2)(\gamma+q)+1)}{\Gamma((p-1)(\gamma+q))},\quad \Re(\gamma) > -q+1.$$
Note that the gamma in the denominator of the integral over $\theta_2$, namely $\Gamma((p-1)(\gamma+q))$, cancels with the same factor in the numerator of the integral over $\theta_1$, leaving one gamma, namely $\Gamma(\gamma+q)$, in the numerator and one gamma, namely $\Gamma(p(\gamma+q))$, in the denominator. The pattern, of the gamma in the denominator of a step canceling with a gamma in the numerator of the previous step, will continue as seen in the real case. Let us check $j = p$ and $j = p+1$ to see that the pattern continues; for $j = p+1$ there is no sine contribution, the only cosine factor coming from the Jacobian part. For $j = p$ we have
$$\int_{\theta_p}(\sin^2\theta_p)^{\gamma+q+\frac{1}{2}-p}(\cos^2\theta_p)^{\frac{p(p-1)}{2}-1}|\cos\theta_p|\,d\theta_p = \frac{\Gamma(\gamma+q-p+1)\,\Gamma(\frac{p(p-1)}{2})}{\Gamma(\gamma+q-p+1+\frac{p(p-1)}{2})}.$$
For $j = p+1$,
$$\int_{\theta_{p+1}}(\cos^2\theta_{p+1})^{\frac{p^2-p-3}{2}}|\cos\theta_{p+1}|\,d\theta_{p+1} = \frac{\Gamma(\frac{1}{2})\,\Gamma(\frac{p(p-1)}{2}-\frac{1}{2})}{\Gamma(\frac{p(p-1)}{2})}.$$
The pattern of cancellation continues. Starting from $j = p+1$ up to $j = p^2-2$, the factor left in the numerator at each step is $\Gamma(\frac{1}{2}) = \sqrt{\pi}$, and the last integral gives $2\pi$ because the range there is $-\pi < \theta_{p^2-1} \le \pi$. Hence the factors left in the numerator are $(\sqrt{\pi})^{p(p-1)}\,\Gamma(\gamma+q)\Gamma(\gamma+q-1)\cdots\Gamma(\gamma+q-p+1) = \tilde{\Gamma}_p(\gamma+q)$, $\Re(\gamma) > -q+p-1$, and one gamma, namely $\Gamma(p(\gamma+q))$, is left in the denominator from the integration over $\theta_1$. Hence, the integral over the product of all the sine and cosine factors in the complex case is
$$2\,\frac{\tilde{\Gamma}_p(\gamma+q)}{\Gamma(p(\gamma+q))}$$
where $\tilde{\Gamma}_p(\cdot)$ is the complex matrix-variate gamma function defined in Lemma 2.3. Then, the final result of the integration over $r$ and over all the $\theta_j$'s in the real case is the following:
$$\begin{aligned} \int_X f(X)\,dX &= \int_X c\,|A^{\frac{1}{2}}(X-M)B(X-M)'A^{\frac{1}{2}}|^{\gamma}\,[\mathrm{tr}(A^{\frac{1}{2}}(X-M)B(X-M)'A^{\frac{1}{2}})]^{\eta}\\ &\quad\times e^{-\alpha[\mathrm{tr}(A^{\frac{1}{2}}(X-M)B(X-M)'A^{\frac{1}{2}})]^{\delta}}\,dX\\ &= c\,\frac{1}{2\delta}\,\frac{\Gamma[\frac{1}{\delta}(p(\gamma+\frac{q}{2})+\eta)]}{\alpha^{\frac{1}{\delta}[p(\gamma+\frac{q}{2})+\eta]}}\times 2\,\frac{\Gamma_p(\gamma+\frac{q}{2})}{\Gamma(p(\gamma+\frac{q}{2}))}\times|A|^{-\frac{q}{2}}|B|^{-\frac{p}{2}}\,\frac{\pi^{\frac{pq}{2}}}{\Gamma_p(\frac{q}{2})}\\ &= c\,\frac{|A|^{-\frac{q}{2}}|B|^{-\frac{p}{2}}\,\pi^{\frac{pq}{2}}}{\delta\,\Gamma_p(\frac{q}{2})\,\alpha^{\frac{1}{\delta}(p(\gamma+\frac{q}{2})+\eta)}}\,\frac{\Gamma[\frac{1}{\delta}(p(\gamma+\frac{q}{2})+\eta)]\,\Gamma_p(\gamma+\frac{q}{2})}{\Gamma(p(\gamma+\frac{q}{2}))} \end{aligned}\tag{2.6}$$
for $\Re(\gamma) > -\frac{q}{2}+\frac{p-1}{2}$, $\Re(\eta) > 0$, $\delta > 0$, $\alpha > 0$, $p \le q$, $M = E[X]$, where $A > O$ ($p \times p$) and $B > O$ ($q \times q$) are constant matrices and $X$ is a $p \times q$, $p \le q$, real matrix of rank $p$; the corresponding integral in the complex case is given by
$$\int_{\tilde{X}} f_c(\tilde{X})\,d\tilde{X} = \tilde{c}\,\frac{\pi^{pq}\,|\det(A)|^{-q}|\det(B)|^{-p}}{\delta\,\tilde{\Gamma}_p(q)\,\alpha^{\frac{1}{\delta}(p(\gamma+q)+\eta)}}\,\frac{\Gamma[\frac{1}{\delta}(p(\gamma+q)+\eta)]\,\tilde{\Gamma}_p(\gamma+q)}{\Gamma(p(\gamma+q))} \tag{2c.6}$$
for $\Re(\gamma) > -q+p-1$, $\Re(\eta) > 0$, $\delta > 0$, $\alpha > 0$, $\tilde{M} = E[\tilde{X}]$, $A = A^{*} > O$, $B = B^{*} > O$, $p \le q$. Therefore, the normalizing constants $c$ and $\tilde{c}$ are the following:
$$c = \frac{|A|^{\frac{q}{2}}|B|^{\frac{p}{2}}\,\delta\,\Gamma(p(\gamma+\frac{q}{2}))\,\alpha^{\frac{1}{\delta}(p(\gamma+\frac{q}{2})+\eta)}\,\Gamma_p(\frac{q}{2})}{\pi^{\frac{pq}{2}}\,\Gamma_p(\gamma+\frac{q}{2})\,\Gamma[\frac{1}{\delta}(p(\gamma+\frac{q}{2})+\eta)]},\quad \Re(\gamma) > -\frac{q}{2}+\frac{p-1}{2},$$
for $\delta > 0$, $\alpha > 0$, $\Re(\eta) > 0$, $p \le q$, and
$$\tilde{c} = \frac{|\det(A)|^{q}|\det(B)|^{p}\,\delta\,\tilde{\Gamma}_p(q)\,\alpha^{\frac{1}{\delta}(p(\gamma+q)+\eta)}\,\Gamma(p(\gamma+q))}{\pi^{pq}\,\tilde{\Gamma}_p(\gamma+q)\,\Gamma[\frac{1}{\delta}(p(\gamma+q)+\eta)]},\quad \Re(\gamma) > -q+p-1,$$
for $\delta > 0$, $\alpha > 0$, $\Re(\eta) > 0$, $p \le q$.
From the general results in (2.6) and (2c.6) we can obtain the following interesting special cases: $A = I$, $\eta = 0$; $\eta = 0$, $B = I$; $A = I$, $B = I$, $\eta = 0$; $\gamma = 0$; $\gamma = 0$, $\eta = 0$; $\eta = 0$, $\delta = 1$. Since the integral over the sine and cosine product is very important in many types of applications, we will state it as theorems here.
Theorem 2.1. 
Let $\Theta_k=\{\theta_1,\ldots,\theta_{k-1}\}$, $k=\frac{p(p+1)}{2}$, in the real case. The integral over $\Theta_k$, denoted by $I_{\Theta_k}$, is the following:
$$I_{\Theta_k}=\int_{\Theta_k}\Big\{\prod_{j=1}^{p}(\cos^2\theta_1\cos^2\theta_2\cdots\cos^2\theta_{j-1}\sin^2\theta_j)^{\gamma+\frac{q}{2}-\frac{j}{2}}\,|\cos\theta_j|^{k-1-j}\Big\}\Big\{\prod_{j=p+1}^{k-1}|\cos\theta_j|^{k-1-j}\Big\}\mathrm{d}\Theta_k=\frac{2\,\Gamma_p(\gamma+\frac{q}{2})}{\Gamma(p(\gamma+\frac{q}{2}))},$$
for $\Re(\gamma)>-\frac{q}{2}+\frac{p-1}{2}$, $p\le q$, $p\ge2$, $k=\frac{p(p+1)}{2}$.
The corresponding result in the complex case is the following, where $\Theta_k$ has the same format as in the real case but with $k=p^2$.
Theorem 2c.1
Let $\Theta_k=\{\theta_1,\ldots,\theta_{k-1}\}$, $k=p^2$. The integral over $\Theta_k$ in the complex case is the following:
$$I_{\Theta_k}=\int_{\Theta_k}\Big\{\prod_{j=1}^{p}(\cos^2\theta_1\cos^2\theta_2\cdots\cos^2\theta_{j-1}\sin^2\theta_j)^{\gamma+q-j+\frac{1}{2}}(\cos^2\theta_j)^{\frac{p^2-j-2}{2}}|\cos\theta_j|\Big\}\times\Big\{\prod_{j=p+1}^{p^2-1}(\cos^2\theta_j)^{\frac{p^2-j-2}{2}}|\cos\theta_j|\Big\}\mathrm{d}\Theta_k=\frac{2\,\tilde{\Gamma}_p(\gamma+q)}{\Gamma(p(\gamma+q))},$$
for $\Re(\gamma)>-q+p-1$, $p\le q$, $p\ge2$, $k=p^2$.
From (2.6) in the real case we have the following theorems:
Theorem 2.2. 
Let $Y$ be a $p\times q,\ p\le q$, matrix of rank $p$ whose $pq$ elements are functionally independent real scalar variables. For $\delta>0$, $\alpha>0$, $\Re(\eta)>0$, $\Re(\gamma)>-\frac{q}{2}+\frac{p-1}{2}$,
$$\int_Y|YY'|^{\gamma}\,[\mathrm{tr}(YY')]^{\eta}\,e^{-\alpha[\mathrm{tr}(YY')]^{\delta}}\mathrm{d}Y=\frac{\pi^{\frac{pq}{2}}}{\Gamma_p(\frac{q}{2})}\,\frac{\Gamma[\frac{1}{\delta}(p(\gamma+\frac{q}{2})+\eta)]\,\Gamma_p(\gamma+\frac{q}{2})}{\delta\,\Gamma(p(\gamma+\frac{q}{2}))\,\alpha^{\frac{1}{\delta}(p(\gamma+\frac{q}{2})+\eta)}}.$$
Note 2.1. 
In the widely used normalizing constant in [7], which was quoted there from earlier references and which corresponds to the normalizing constant $c$ in (2.3) above, a gamma factor in the denominator, namely $\Gamma(p(\gamma+\frac{q}{2}))$, is missing. This is either a computational slip or the result of using some incorrect results in the derivation of the normalizing constant in [7] and in the earlier works of others.
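Since the gamma factor discussed in Note 2.1 matters for everything that follows, it is worth noting that Theorem 2.2 can be spot-checked numerically in the scalar case $p=q=1$, where $|YY'|=\mathrm{tr}(YY')=y^2$ and the two factors $\Gamma_1(\gamma+\frac12)$ and $\Gamma(\gamma+\frac12)$ cancel. The following is a minimal standard-library Python sketch; the function names and the particular parameter values are ours, not from the text.

```python
import math

def lhs_theorem22_scalar(gamma, eta, delta, alpha, upper=10.0, n=200000):
    """Simpson-rule value of the p = q = 1 case of Theorem 2.2:
    the integral over the real line of (y^2)^(gamma+eta) * exp(-alpha*(y^2)^delta).
    The integrand decays super-exponentially, so a finite upper limit suffices."""
    h = upper / n
    total = 0.0
    for i in range(n + 1):
        y = i * h
        fy = (y * y) ** (gamma + eta) * math.exp(-alpha * (y * y) ** delta)
        w = 1 if i in (0, n) else (4 if i % 2 == 1 else 2)
        total += w * fy
    return 2.0 * total * h / 3.0  # factor 2: the integrand is even

def rhs_theorem22_scalar(gamma, eta, delta, alpha):
    """Right side of Theorem 2.2 for p = q = 1; the matrix-variate gamma
    functions reduce to ordinary gammas and cancel."""
    m = (gamma + 0.5 + eta) / delta
    return math.gamma(m) / (delta * alpha ** m)
```

For instance, with $\gamma=1$, $\eta=\frac12$, $\delta=2$, $\alpha=1$ both sides equal $\Gamma(1)/2=\frac12$.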
Theorem 2c.2. 
Let $\tilde Y$ be a $p\times q,\ p\le q$, matrix in the complex domain with rank $p$, where the $pq$ elements are functionally independent complex scalar variables. For $\delta>0$, $\alpha>0$, $\Re(\eta)>0$, $\Re(\gamma)>-q+p-1$,
$$\int_{\tilde Y}|\det(\tilde Y\tilde Y^{*})|^{\gamma}\,[\mathrm{tr}(\tilde Y\tilde Y^{*})]^{\eta}\,e^{-\alpha[\mathrm{tr}(\tilde Y\tilde Y^{*})]^{\delta}}\mathrm{d}\tilde Y=\frac{\pi^{pq}}{\tilde{\Gamma}_p(q)}\,\frac{\Gamma[\frac{1}{\delta}(p(\gamma+q)+\eta)]\,\tilde{\Gamma}_p(\gamma+q)}{\delta\,\Gamma(p(\gamma+q))\,\alpha^{\frac{1}{\delta}(p(\gamma+q)+\eta)}}.$$
Corollary 2.1. 
For Y , δ , γ , α as defined in Theorem 2.2,
$$\int_Y|YY'|^{\gamma}\,e^{-\alpha[\mathrm{tr}(YY')]^{\delta}}\mathrm{d}Y=\frac{\pi^{\frac{pq}{2}}\,\Gamma_p(\gamma+\frac{q}{2})\,\Gamma(\frac{p}{\delta}(\gamma+\frac{q}{2}))}{\delta\,\Gamma_p(\frac{q}{2})\,\Gamma(p(\gamma+\frac{q}{2}))\,\alpha^{\frac{p}{\delta}(\gamma+\frac{q}{2})}}$$
for $\Re(\gamma)>-\frac{q}{2}+\frac{p-1}{2}$, $\delta>0$, $\alpha>0$, $p\le q$.
The results quoted from some earlier works of others and reported in [7], corresponding to our Theorem 2.2 and Corollary 2.1, do not agree with our results.
The result corresponding to Corollary 2.1 in the complex case is the following:
Corollary 2c.1
For Y ˜ , δ , γ , α as defined in Theorem 2c.2,
$$\int_{\tilde Y}|\det(\tilde Y\tilde Y^{*})|^{\gamma}\,e^{-\alpha[\mathrm{tr}(\tilde Y\tilde Y^{*})]^{\delta}}\mathrm{d}\tilde Y=\frac{\pi^{pq}\,\tilde{\Gamma}_p(\gamma+q)\,\Gamma(\frac{p}{\delta}(\gamma+q))}{\delta\,\tilde{\Gamma}_p(q)\,\Gamma(p(\gamma+q))\,\alpha^{\frac{p}{\delta}(\gamma+q)}}$$
for $\Re(\gamma)>-q+p-1$, $\delta>0$, $\alpha>0$.
Corollary 2.2. 
For Y , γ , α as defined in Theorem 2.2,
$$\int_Y|YY'|^{\gamma}\,e^{-\alpha\,\mathrm{tr}(YY')}\mathrm{d}Y=\frac{\pi^{\frac{pq}{2}}}{\Gamma_p(\frac{q}{2})}\,\frac{\Gamma_p(\gamma+\frac{q}{2})}{\alpha^{p(\gamma+\frac{q}{2})}},\quad \Re(\gamma)>-\frac{q}{2}+\frac{p-1}{2},\ \alpha>0,\ p\le q.$$
The corresponding result in the complex domain is the following:
Corollary 2c.2
For Y ˜ , γ , α as defined in Theorem 2c.2,
$$\int_{\tilde Y}|\det(\tilde Y\tilde Y^{*})|^{\gamma}\,e^{-\alpha\,\mathrm{tr}(\tilde Y\tilde Y^{*})}\mathrm{d}\tilde Y=\frac{\pi^{pq}}{\tilde{\Gamma}_p(q)}\,\frac{\tilde{\Gamma}_p(\gamma+q)}{\alpha^{p(\gamma+q)}}$$
for $\Re(\gamma)>-q+p-1$, $\alpha>0$.
Theorem 2.3. 
Let $U=(u_{ij})>O$ be a $p\times p$ real positive definite matrix with $p(p+1)/2$ functionally independent real scalar variables $u_{ij}$. The following integral over $U>O$ is equivalent to an integral over $Y$, where $Y$ is $p\times q,\ p\le q$, of rank $p$ with $pq$ distinct real scalar elements:
$$\int_{U>O}|U|^{\gamma+\frac{q}{2}-\frac{p+1}{2}}\,[\mathrm{tr}(U)]^{\eta}\,e^{-\alpha[\mathrm{tr}(U)]^{\delta}}\mathrm{d}U=\frac{\Gamma_p(\frac{q}{2})}{\pi^{\frac{pq}{2}}}\int_Y|YY'|^{\gamma}\,[\mathrm{tr}(YY')]^{\eta}\,e^{-\alpha[\mathrm{tr}(YY')]^{\delta}}\mathrm{d}Y$$
for $\alpha>0$, $\delta>0$, $\Re(\eta)>0$, $\Re(\gamma)>-\frac{q}{2}+\frac{p-1}{2}$.
This result enables us to go back and forth between a real full-rank rectangular matrix and a real positive definite matrix. The proof is straightforward. Let $YY'=G$; then $G=G'>O$ and, from Lemma 2.3, $\mathrm{d}Y=\frac{\pi^{\frac{pq}{2}}}{\Gamma_p(\frac{q}{2})}|G|^{\frac{q}{2}-\frac{p+1}{2}}\mathrm{d}G$, which establishes the result. The corresponding result in the complex domain is the following:
Theorem 2c.3. 
Let the $p\times p$ matrix $\tilde U=(\tilde u_{ij})=\tilde U^{*}>O$ in the complex domain be Hermitian positive definite, with the $p(p+1)/2$ distinct scalar complex variables $\tilde u_{ij}$ as elements. The following integral over $\tilde U$ is equivalent to an integral over $\tilde Y$, where $\tilde Y$ is a $p\times q,\ p\le q$, matrix in the complex domain of rank $p$ with $pq$ distinct complex variables as elements:
$$\int_{\tilde U>O}|\det(\tilde U)|^{\gamma+q-p}\,[\mathrm{tr}(\tilde U)]^{\eta}\,e^{-\alpha[\mathrm{tr}(\tilde U)]^{\delta}}\mathrm{d}\tilde U=\frac{\tilde{\Gamma}_p(q)}{\pi^{pq}}\int_{\tilde Y}|\det(\tilde Y\tilde Y^{*})|^{\gamma}\,[\mathrm{tr}(\tilde Y\tilde Y^{*})]^{\eta}\,e^{-\alpha[\mathrm{tr}(\tilde Y\tilde Y^{*})]^{\delta}}\mathrm{d}\tilde Y$$
for $\alpha>0$, $\delta>0$, $\Re(\eta)>0$, $\Re(\gamma)>-q+p-1$.
The proof is parallel to that in the real case; the only difference is that Lemma 2.3 is used in its complex version.

3. Some Integrals involving Type-2 Beta Form

Let X be a p × 1 vector in the real domain with distinct scalar variables as elements. Then, we have the following multivariate type-2 beta density:
Theorem 3.1
$$\int_X(X'X)^{\delta}\,[1+\alpha(X'X)^{\eta}]^{-\gamma}\mathrm{d}X=\frac{\pi^{\frac{p}{2}}}{\Gamma(\frac{p}{2})}\,\frac{\Gamma[\frac{1}{\eta}(\delta+\frac{p}{2})]\,\Gamma(\gamma-\frac{1}{\eta}(\delta+\frac{p}{2}))}{\eta\,\Gamma(\gamma)\,\alpha^{\frac{1}{\eta}(\delta+\frac{p}{2})}}\qquad(i)$$
for $\Re(\gamma)>\frac{1}{\eta}(\Re(\delta)+\frac{p}{2})$, $\eta>0$, $\Re(\delta)>-\frac{p}{2}$, $\alpha>0$.
This result is easily seen from Lemma 2.3. With $u=X'X$, note that $\mathrm{d}X=\frac{\pi^{\frac{p}{2}}}{\Gamma(\frac{p}{2})}u^{\frac{p}{2}-1}\mathrm{d}u$. Let $v=u^{\eta}$ and integrate out by using a scalar-variable type-2 beta integral to establish the result. The integrand on the left side, divided by the right side, gives a statistical density. One can generalize the result in Theorem 3.1 by replacing $X'X$ by $(X-\mu)'A(X-\mu)$, where $\mu=E[X]$ and $A>O$ is a constant positive definite matrix; the only change is that the right side of $(i)$ is multiplied by $|A|^{-\frac{1}{2}}$, where $A^{\frac{1}{2}}$ is the positive definite square root of the positive definite matrix $A>O$. The result corresponding to Theorem 3.1 in the complex domain is stated next without proof because the derivation is parallel to that in the real case.
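For $p=1$ the statement of Theorem 3.1 reduces to a one-dimensional integral, which makes a quick numerical cross-check possible. The following sketch uses only the Python standard library; the helper names and the parameter values are ours, not from the text.

```python
import math

def lhs_theorem31_p1(delta, eta, gamma, alpha, upper=200.0, n=400000):
    """Simpson-rule value of the p = 1 case of Theorem 3.1:
    integral over the real line of (x^2)^delta * (1 + alpha*(x^2)^eta)^(-gamma).
    The integrand decays polynomially, so a large finite upper limit is used."""
    h = upper / n
    total = 0.0
    for i in range(n + 1):
        x = i * h
        fx = (x * x) ** delta * (1.0 + alpha * (x * x) ** eta) ** (-gamma)
        w = 1 if i in (0, n) else (4 if i % 2 == 1 else 2)
        total += w * fx
    return 2.0 * total * h / 3.0  # even integrand

def rhs_theorem31_p1(delta, eta, gamma, alpha):
    """Right side of Theorem 3.1 with p = 1; pi^(1/2)/Gamma(1/2) = 1."""
    m = (delta + 0.5) / eta
    return (math.gamma(m) * math.gamma(gamma - m)
            / (eta * math.gamma(gamma) * alpha ** m))
```

With $\delta=\eta=1$, $\gamma=3$, $\alpha=2$ both sides equal $\pi/(16\sqrt2)\approx0.13884$.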
Theorem 3c.1
Let X ˜ be a p × 1 vector of p distinct scalar complex variables as elements. Then, we have the following multivariate type-2 beta density:
$$\int_{\tilde X}[\tilde X^{*}\tilde X]^{\delta}\,(1+\alpha[\tilde X^{*}\tilde X]^{\eta})^{-\gamma}\mathrm{d}\tilde X=\frac{\pi^{p}}{\Gamma(p)}\,\frac{\Gamma(\frac{1}{\eta}(\delta+p))\,\Gamma(\gamma-\frac{1}{\eta}(\delta+p))}{\eta\,\Gamma(\gamma)\,\alpha^{\frac{1}{\eta}(\delta+p)}}$$
for $\Re(\gamma)>\frac{1}{\eta}(\Re(\delta)+p)$, $\eta>0$, $\Re(\delta)>-p$, $\alpha>0$.
Now, we consider the evaluation of a rectangular matrix-variate type-2 beta integral in the real case.
Theorem 3.2. 
Let $X=(x_{ij})$ be a $p\times q,\ p\le q$, matrix in the real domain of rank $p$ with $pq$ distinct real scalar variables $x_{ij}$ as elements. Then,
$$\int_X[\mathrm{tr}(XX')]^{\delta}\,[1+\alpha[\mathrm{tr}(XX')]^{\eta}]^{-\gamma}\mathrm{d}X=\frac{\pi^{\frac{pq}{2}}}{\Gamma(\frac{pq}{2})}\,\frac{\Gamma[\frac{1}{\eta}(\delta+\frac{pq}{2})]\,\Gamma(\gamma-\frac{1}{\eta}(\delta+\frac{pq}{2}))}{\eta\,\Gamma(\gamma)\,\alpha^{\frac{1}{\eta}(\delta+\frac{pq}{2})}}\qquad(ii)$$
for $\Re(\delta)>-\frac{pq}{2}$, $\Re(\gamma)>\frac{1}{\eta}(\Re(\delta)+\frac{pq}{2})$, $\eta>0$, $\alpha>0$.
Note that $\mathrm{tr}(XX')$ is the sum of squares of the $pq$ elements. Then the result follows from Lemma 2.3, or from Theorem 3.1 by replacing $\frac{p}{2}$ by $\frac{pq}{2}$. A more general situation is obtained by replacing $XX'$ by $A^{\frac{1}{2}}(X-M)B(X-M)'A^{\frac{1}{2}}$, where $M=E[X]$ and $A>O$, $B>O$ are $p\times p$ and $q\times q$ constant positive definite matrices. In this case the only change is that the right side of $(ii)$ is multiplied by $|A|^{-\frac{q}{2}}|B|^{-\frac{p}{2}}$. The result corresponding to Theorem 3.2 in the complex domain is stated next without the details, because the details as well as the generalizations are parallel to those in the real case.
Theorem 3c.2
Let $\tilde X=(\tilde x_{ij})$ be a $p\times q,\ p\le q$, matrix in the complex domain of rank $p$ with $pq$ distinct scalar complex variables as elements. Then,
$$\int_{\tilde X}[\mathrm{tr}(\tilde X\tilde X^{*})]^{\delta}\,(1+\alpha[\mathrm{tr}(\tilde X\tilde X^{*})]^{\eta})^{-\gamma}\mathrm{d}\tilde X=\frac{\pi^{pq}}{\Gamma(pq)}\,\frac{\Gamma[\frac{1}{\eta}(\delta+pq)]\,\Gamma(\gamma-\frac{1}{\eta}(\delta+pq))}{\eta\,\Gamma(\gamma)\,\alpha^{\frac{1}{\eta}(\delta+pq)}}$$
for $\Re(\delta)>-pq$, $\Re(\gamma)>\frac{1}{\eta}(\Re(\delta)+pq)$, $\eta>0$, $\alpha>0$.
The next result involves a determinant.
Theorem 3.3. 
Let $X=(x_{ij})$ be a $p\times q,\ p\le q$, real matrix of rank $p$, where the elements $x_{ij}$ are distinct real scalar variables. Then
$$\int_X\frac{|XX'|^{\gamma}}{[1+\alpha[\mathrm{tr}(XX')]^{\delta}]^{\rho}}\mathrm{d}X=\frac{\Gamma(\frac{p}{\delta}(\gamma+\frac{q}{2}))\,\Gamma(\rho-\frac{p}{\delta}(\gamma+\frac{q}{2}))}{\delta\,\Gamma(\rho)\,\alpha^{\frac{p}{\delta}(\gamma+\frac{q}{2})}}\,\frac{\pi^{\frac{pq}{2}}}{\Gamma_p(\frac{q}{2})}\,\frac{\Gamma_p(\gamma+\frac{q}{2})}{\Gamma(p(\gamma+\frac{q}{2}))}$$
for $\alpha>0$, $\delta>0$, $\Re(\rho)>\frac{p}{\delta}(\Re(\gamma)+\frac{q}{2})$, $\Re(\gamma)>-\frac{q}{2}+\frac{p-1}{2}$.
Proof: 
Let $X=TU_1$, where $T$ is a lower triangular matrix and $U_1$ is a semi-orthonormal matrix, $U_1U_1'=I_p$, and let $T$ and $U_1$ be uniquely chosen. Then, from Lemma 2.4,
$$\mathrm{d}X=\frac{\pi^{\frac{pq}{2}}}{\Gamma_p(\frac{q}{2})}\Big\{\prod_{j=1}^{p}|t_{jj}|^{q-j}\Big\}\mathrm{d}T,\qquad XX'=TT'.$$
Note that $|XX'|=|TT'|=\prod_{j=1}^{p}t_{jj}^{2}$ and $\mathrm{tr}(TT')=\sum_{j=1}^{p}t_{jj}^{2}+\sum_{i>j}t_{ij}^{2}$, the sum of squares of $p(p+1)/2$ real scalar variables. Now apply a polar coordinate transformation on these $p(p+1)/2$ variables $t_{ij}$, taking
$$[t_{11},t_{22},\ldots,t_{pp},t_{21},\ldots,t_{p,p-1}]\to[r,\theta_1,\ldots,\theta_{k-1}],\qquad k=p(p+1)/2.$$
Collecting all factors containing $r$, we have $(r^2)^{p(\gamma+\frac{q}{2})-\frac{1}{2}}$. Now, integrating over $r$, we have
$$\int_0^{\infty}(r^2)^{p(\gamma+\frac{q}{2})-\frac{1}{2}}\,[1+\alpha(r^2)^{\delta}]^{-\rho}\,\mathrm{d}r=\frac{\Gamma[\frac{p}{\delta}(\gamma+\frac{q}{2})]\,\Gamma(\rho-\frac{p}{\delta}(\gamma+\frac{q}{2}))}{2\delta\,\Gamma(\rho)\,\alpha^{\frac{p}{\delta}(\gamma+\frac{q}{2})}}$$
for $\delta>0$, $\Re(\gamma)>-\frac{q}{2}+\frac{p-1}{2}$, $\Re(\rho)>\frac{p}{\delta}(\Re(\gamma)+\frac{q}{2})$, $\alpha>0$. From Theorem 2.1, the integral over the $\theta_j$'s gives $2\Gamma_p(\gamma+\frac{q}{2})/\Gamma(p(\gamma+\frac{q}{2}))$, and from the transformation of $XX'$ to $TT'$ we have $\pi^{\frac{pq}{2}}/\Gamma_p(\frac{q}{2})$. The product of these three quantities establishes the theorem. The result corresponding to Theorem 3.3 in the complex domain is given next without proof; the proof runs parallel to that in the real case. In this connection, observe from the derivation of the sine and cosine factors in the complex case given earlier that the number of terms in $\mathrm{tr}(\tilde T\tilde T^{*})$ is $p^2$ in the complex case, whereas it is $p(p+1)/2$ in the real case.
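The radial integral used in this proof is an ordinary type-2 beta integral and is easy to verify numerically. A standard-library Python sketch follows; the function names are ours, and $m$ stands for $p(\gamma+\frac{q}{2})$.

```python
import math

def r_integral(m, delta, rho, alpha, upper=60.0, n=300000):
    """Simpson-rule value of the radial integral in the proof of Theorem 3.3:
    int_0^inf (r^2)^(m - 1/2) * (1 + alpha*(r^2)^delta)^(-rho) dr."""
    h = upper / n
    total = 0.0
    for i in range(n + 1):
        r = i * h
        fr = (r * r) ** (m - 0.5) * (1.0 + alpha * (r * r) ** delta) ** (-rho)
        w = 1 if i in (0, n) else (4 if i % 2 == 1 else 2)
        total += w * fr
    return total * h / 3.0

def r_integral_closed_form(m, delta, rho, alpha):
    """Gamma-function form of the same integral:
    Gamma(m/delta) * Gamma(rho - m/delta) / (2 delta Gamma(rho) alpha^(m/delta))."""
    c = m / delta
    return (math.gamma(c) * math.gamma(rho - c)
            / (2.0 * delta * math.gamma(rho) * alpha ** c))
```

For example, with $m=2$, $\delta=1$, $\rho=4$, $\alpha=3$ both forms give $\frac{1}{108}$.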
Theorem 3c.3
Let $\tilde X=(\tilde x_{ij})$ be a $p\times q,\ p\le q$, matrix in the complex domain of rank $p$ with $pq$ distinct scalar complex variables as elements. Then,
$$\int_{\tilde X}\frac{|\det(\tilde X\tilde X^{*})|^{\gamma}}{[1+\alpha[\mathrm{tr}(\tilde X\tilde X^{*})]^{\delta}]^{\rho}}\mathrm{d}\tilde X=\frac{\Gamma(\frac{p}{\delta}(\gamma+q))\,\Gamma(\rho-\frac{p}{\delta}(\gamma+q))}{\delta\,\Gamma(\rho)\,\alpha^{\frac{p}{\delta}(\gamma+q)}}\,\frac{\pi^{pq}\,\tilde{\Gamma}_p(\gamma+q)}{\tilde{\Gamma}_p(q)\,\Gamma(p(\gamma+q))}$$
for $\alpha>0$, $\delta>0$, $\Re(\rho)>\frac{p}{\delta}(\Re(\gamma)+q)$, $\Re(\gamma)>-q+p-1$, where $\tilde{\Gamma}_p(\cdot)$ is the complex matrix-variate gamma.
The next result will involve a determinant and trace raised to an arbitrary power in the numerator.
Theorem 3.4. 
Let $X,p,q,\delta$ and $\rho$ be as defined in Theorem 3.3, and let $\Re(\eta)>0$. Then
$$\int_X\frac{|XX'|^{\gamma}\,[\mathrm{tr}(XX')]^{\eta}}{[1+\alpha[\mathrm{tr}(XX')]^{\delta}]^{\rho}}\mathrm{d}X=\frac{\pi^{\frac{pq}{2}}\,\Gamma_p(\gamma+\frac{q}{2})}{\Gamma_p(\frac{q}{2})\,\Gamma(p(\gamma+\frac{q}{2}))}\times\frac{\Gamma(\frac{1}{\delta}(p(\gamma+\frac{q}{2})+\eta))\,\Gamma(\rho-\frac{1}{\delta}(p(\gamma+\frac{q}{2})+\eta))}{\delta\,\Gamma(\rho)\,\alpha^{\frac{1}{\delta}(p(\gamma+\frac{q}{2})+\eta)}}$$
for $\Re(\rho)>\frac{1}{\delta}(p(\Re(\gamma)+\frac{q}{2})+\Re(\eta))$, $\delta>0$, $\Re(\gamma)>-\frac{q}{2}+\frac{p-1}{2}$, $\Re(\eta)>0$, $\alpha>0$.
The corresponding result in the complex domain is the following:
Theorem 3c.4
Let X ˜ , p , q , δ and ρ be as defined in Theorem 3c.3. Let ( η ) > 0 . Then,
$$\int_{\tilde X}\frac{|\det(\tilde X\tilde X^{*})|^{\gamma}\,[\mathrm{tr}(\tilde X\tilde X^{*})]^{\eta}}{[1+\alpha[\mathrm{tr}(\tilde X\tilde X^{*})]^{\delta}]^{\rho}}\mathrm{d}\tilde X=\frac{\pi^{pq}\,\tilde{\Gamma}_p(\gamma+q)}{\tilde{\Gamma}_p(q)\,\Gamma(p(\gamma+q))}\times\frac{\Gamma(\frac{1}{\delta}(p(\gamma+q)+\eta))\,\Gamma(\rho-\frac{1}{\delta}(p(\gamma+q)+\eta))}{\delta\,\Gamma(\rho)\,\alpha^{\frac{1}{\delta}(p(\gamma+q)+\eta)}}$$
for $\Re(\rho)>\frac{1}{\delta}(p(\Re(\gamma)+q)+\Re(\eta))$, $\delta>0$, $\Re(\eta)>0$, $\alpha>0$, $\Re(\gamma)>-q+p-1$.
Note 3.1.
Theorems 3.2, 3.3 and 3.4 can be generalized by replacing $XX'$ by $A^{\frac{1}{2}}(X-M)B(X-M)'A^{\frac{1}{2}}$, where $M=E[X]$ and $A>O$, $B>O$ are, respectively, $p\times p$ and $q\times q$ constant positive definite matrices, $A^{\frac{1}{2}}$ being the positive definite square root of $A>O$. The only change is that the right sides of the equations are multiplied by $|A|^{-\frac{q}{2}}|B|^{-\frac{p}{2}}$. If $\alpha\to0$ with $\rho=\frac{\xi}{\alpha},\ \xi>0$, then
$$[1+\alpha[\mathrm{tr}(XX')]^{\delta}]^{-\frac{\xi}{\alpha}}\to e^{-\xi[\mathrm{tr}(XX')]^{\delta}},$$
so that the integrands in Theorems 3.2-3.4 become Mathai's pathway model [1], and the models in Section 3 go to the models in Section 2. In the complex case one has the corresponding generalizations: $\tilde X\tilde X^{*}$ may be replaced by $A^{\frac{1}{2}}(\tilde X-\tilde M)B(\tilde X-\tilde M)^{*}A^{\frac{1}{2}}$, where $\tilde M=E[\tilde X]$, $A=A^{*}>O$, $B=B^{*}>O$ (both Hermitian positive definite) and $A^{\frac{1}{2}}$ denotes the Hermitian positive definite square root of the Hermitian positive definite matrix $A$. Now we consider a generalized logistic format, or an exponentiated beta form, of a matrix-variate integral.
Theorem 3.5. 
Let $X$ be a $p\times q,\ p\le q$, matrix of rank $p$ whose $pq$ elements are distinct real scalar variables. Then,
$$\int_X\frac{|XX'|^{\gamma}\,[\mathrm{tr}(XX')]^{\eta}\,e^{-\alpha[\mathrm{tr}(XX')]^{\delta}}}{(1+a\,e^{-[\mathrm{tr}(XX')]^{\delta}})^{\alpha+\beta}}\mathrm{d}X=\frac{\pi^{\frac{pq}{2}}\,\Gamma_p(\gamma+\frac{q}{2})\,\Gamma[\frac{1}{\delta}(p(\gamma+\frac{q}{2})+\eta)]}{\Gamma_p(\frac{q}{2})\,\delta\,\Gamma(p(\gamma+\frac{q}{2}))}\times\zeta\big[\{(\tfrac{1}{\delta}(p(\gamma+\tfrac{q}{2})+\eta),\alpha)\}:\alpha+\beta;\ ;-a\big]$$
for $\delta>0$, $0<a<1$, $\Re(\alpha)>0$, $\Re(\beta)>0$, $\Re(\gamma)>-\frac{q}{2}+\frac{p-1}{2}$, $\Re(\eta)>0$, where $\zeta[\cdot]$ is Mathai's extended zeta function defined in [13], which is also given in Note 3.2 below.
Proof: 
Since $0<a\,e^{-[\mathrm{tr}(XX')]^{\delta}}<1$, one can use a binomial expansion:
$$[1+a\,e^{-[\mathrm{tr}(XX')]^{\delta}}]^{-(\alpha+\beta)}=\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k\,(-a)^k}{k!}\,e^{-k[\mathrm{tr}(XX')]^{\delta}},$$
and $e^{-\alpha[\mathrm{tr}(XX')]^{\delta}}\,e^{-k[\mathrm{tr}(XX')]^{\delta}}=e^{-(\alpha+k)[\mathrm{tr}(XX')]^{\delta}}$. Now apply Theorem 2.2 to see the result.
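The binomial expansion used in this proof converges quickly whenever $0<a\,e^{-u}<1$. The following small standard-library Python check verifies the scalar identity, with $s$ playing the role of $\alpha+\beta$; the names are ours.

```python
import math

def pochhammer(a, k):
    """(a)_k = a (a+1) ... (a+k-1), with (a)_0 = 1."""
    out = 1.0
    for j in range(k):
        out *= a + j
    return out

def binomial_series(a, s, u, terms=60):
    """Partial sum of (1 + a e^{-u})^{-s} = sum_k (s)_k (-a)^k e^{-k u} / k!,
    valid since 0 < a e^{-u} < 1."""
    return sum(pochhammer(s, k) * (-a) ** k / math.factorial(k)
               * math.exp(-k * u) for k in range(terms))
```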
Note 3.2. 
For the real scalar variable x the logistic density is
$$\frac{e^{-x}}{(1+e^{-x})^{2}}=\frac{e^{x}}{(1+e^{x})^{2}},\quad -\infty<x<\infty.\qquad(i)$$
This density behaves like a standard Gaussian density, but the logistic density has a thicker tail than that of the standard Gaussian. Hence, in many industrial applications a logistic model is preferred to a standard Gaussian model. A generalized logistic density, introduced in [14], is the following:
$$f(x)=\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\,\frac{e^{-\alpha x}}{(1+e^{-x})^{\alpha+\beta}},\quad -\infty<x<\infty.\qquad(ii)$$
The model in $(ii)$ is more versatile than $(i)$, since asymmetric situations can also be covered under the generalized model $(ii)$. Note that for $\alpha=1=\beta$ in $(ii)$ we have $(i)$. Hence, the matrix-variate analogues of logistic-based models are connected to the generalized logistic density in $(ii)$ above. The model in $(ii)$ is an exponentiated type-2 beta density: make the transformation $u=e^{-x}$ in a type-2 beta density in $u$ to go to model $(ii)$. Matrix-variate versions of logistic-based densities usually end up in an extended form of a generalized zeta function. The zeta function $\zeta(\rho)$ and the generalized zeta function $\zeta(\rho,\alpha)$, available in the literature, are the following:
$$\zeta(\rho)=\sum_{k=1}^{\infty}\frac{1}{k^{\rho}},\ \Re(\rho)>1;\qquad \zeta(\rho,\alpha)=\sum_{k=0}^{\infty}\frac{1}{(\alpha+k)^{\rho}},\ \Re(\rho)>1,\ \alpha\neq0,-1,-2,\ldots.$$
The extended zeta function defined by [13] is the following:
$$\zeta_{p,q}^{r}(x)=\zeta[\{(m_1,\alpha_1),\ldots,(m_r,\alpha_r)\}:a_1,\ldots,a_p;\ b_1,\ldots,b_q;\ x]=\sum_{k=0}^{\infty}\frac{1}{(\alpha_1+k)^{m_1}\cdots(\alpha_r+k)^{m_r}}\,\frac{(a_1)_k\cdots(a_p)_k}{(b_1)_k\cdots(b_q)_k}\,\frac{x^{k}}{k!}$$
for $\sum_{j=1}^{r}m_j>1$; $\alpha_j\neq0,-1,\ldots$, $j=1,\ldots,r$; $b_j\neq0,-1,\ldots$, $j=1,\ldots,q$; $p\le q$, or $p=q+1$ and $|x|<1$; where, for example, $(a)_k$ is the Pochhammer symbol defined as $(a)_k=a(a+1)\cdots(a+k-1)$, $a\neq0$, $(a)_0=1$. The result corresponding to Theorem 3.5 in the complex case is the following; the details of the derivation are parallel to those in the real case and hence are not given here.
Theorem 3c.5
Let $\tilde X$ be a $p\times q,\ p\le q$, matrix in the complex domain of rank $p$ with $pq$ distinct scalar complex variables as elements. Then,
$$\int_{\tilde X}\frac{|\det(\tilde X\tilde X^{*})|^{\gamma}\,[\mathrm{tr}(\tilde X\tilde X^{*})]^{\eta}\,e^{-\alpha[\mathrm{tr}(\tilde X\tilde X^{*})]^{\delta}}}{(1+a\,e^{-[\mathrm{tr}(\tilde X\tilde X^{*})]^{\delta}})^{\alpha+\beta}}\mathrm{d}\tilde X=\frac{\pi^{pq}\,\tilde{\Gamma}_p(\gamma+q)\,\Gamma(\frac{1}{\delta}(p(\gamma+q)+\eta))}{\delta\,\tilde{\Gamma}_p(q)\,\Gamma(p(\gamma+q))}\,\zeta\big[\{(\tfrac{1}{\delta}(p(\gamma+q)+\eta),\alpha)\}:\alpha+\beta;\ ;-a\big]$$
for $\delta>0$, $0<a<1$, $\Re(\alpha)>0$, $\Re(\beta)>0$, $\Re(\gamma)>-q+p-1$, $\Re(\eta)>0$, where $\zeta[\cdot]$ is defined in Theorem 3.5 above.
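The extended zeta series of Note 3.2 is straightforward to evaluate term by term; updating each term through its ratio to the previous one avoids overflowing the factorials. The sketch below (standard-library Python; names ours) reproduces a classical special case: with a single pair $(m,\alpha)$, one upper parameter $a_1=1$ and $x=1$, the series collapses to the generalized (Hurwitz) zeta $\zeta(m,\alpha)$, since $(1)_k/k!=1$.

```python
import math

def extended_zeta(m_alpha_pairs, a_params, b_params, x, terms=100000):
    """Partial sum of Mathai's extended zeta series
    zeta[{(m1,a1),...,(mr,ar)}: a_1,...,a_p; b_1,...,b_q; x]
    as defined in Note 3.2, computed via term-to-term ratios."""
    total = 0.0
    term = 1.0
    for (m, al) in m_alpha_pairs:   # the k = 0 term is prod_j alpha_j^(-m_j)
        term /= al ** m
    for k in range(terms):
        total += term
        # ratio of the (k+1)-th term to the k-th term
        ratio = x / (k + 1.0)
        for a in a_params:
            ratio *= a + k
        for b in b_params:
            ratio /= b + k
        for (m, al) in m_alpha_pairs:
            ratio *= ((al + k) / (al + k + 1.0)) ** m
        term *= ratio
    return total
```

For example, `extended_zeta([(2, 1)], [1], [], 1.0)` approximates $\zeta(2)=\pi^2/6$, and `extended_zeta([(2, 3)], [1], [], 1.0)` approximates $\zeta(2,3)=\pi^2/6-1-\tfrac14$, both to the accuracy allowed by the truncation.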

4. Matrix-variate Type-1 Beta Forms

Let X be a p × 1 vector of distinct real scalar variables. Consider the following multivariate function
$$f_1(X)=c_1\,[X'X]^{\gamma}\,[1-a(X'X)^{\delta}]^{\beta-1},\quad \Re(\beta)>0,\ a>0,\ a(X'X)^{\delta}<1,$$
that is, $X$ is confined to the interior of the $p$-dimensional sphere $X'X=(\frac{1}{a})^{\frac{1}{\delta}}$, and $f_1(X)$ is assumed to be zero outside this sphere. If $c_1$ is the normalizing constant there so that $f_1(X)$ is a density, let us compute $c_1$. Let $u=X'X\Rightarrow \mathrm{d}X=\frac{\pi^{\frac{p}{2}}}{\Gamma(\frac{p}{2})}u^{\frac{p}{2}-1}\mathrm{d}u$ by Lemma 2.3, and let $v=u^{\delta}\Rightarrow \mathrm{d}u=\frac{1}{\delta}v^{\frac{1}{\delta}-1}\mathrm{d}v$. Then,
$$1=\int_X f_1(X)\,\mathrm{d}X=c_1\frac{\pi^{\frac{p}{2}}}{\Gamma(\frac{p}{2})}\int_u u^{\gamma+\frac{p}{2}-1}(1-au^{\delta})^{\beta-1}\mathrm{d}u=c_1\frac{\pi^{\frac{p}{2}}}{\Gamma(\frac{p}{2})}\frac{1}{\delta}\int_v v^{\frac{1}{\delta}(\gamma+\frac{p}{2})-1}(1-av)^{\beta-1}\mathrm{d}v$$
$$=c_1\frac{\pi^{\frac{p}{2}}}{\Gamma(\frac{p}{2})}\frac{1}{\delta}\frac{\Gamma(\frac{1}{\delta}(\gamma+\frac{p}{2}))\,\Gamma(\beta)}{a^{\frac{1}{\delta}(\gamma+\frac{p}{2})}\,\Gamma(\beta+\frac{1}{\delta}(\gamma+\frac{p}{2}))},\quad \Re(\gamma)>-\frac{p}{2}$$
$$\Rightarrow c_1=\frac{\delta\,\Gamma(\frac{p}{2})\,\Gamma(\beta+\frac{1}{\delta}(\gamma+\frac{p}{2}))\,a^{\frac{1}{\delta}(\gamma+\frac{p}{2})}}{\pi^{\frac{p}{2}}\,\Gamma(\beta)\,\Gamma(\frac{1}{\delta}(\gamma+\frac{p}{2}))},\quad \Re(\gamma)>-\frac{p}{2}$$
for $0<a<1$, $\delta>0$, $\Re(\beta)>0$. Note that $f_1(X)$ is also the density connected with type-1 beta distributed isotropic random points in geometrical probability problems; see [15]. The corresponding format in Section 3 is associated with type-2 beta distributed random points. A more general model is available by replacing $X'X$ by $(X-\mu)'A(X-\mu)$, $\mu=E[X]$, where $A>O$ is a $p\times p$ real constant positive definite matrix. The only change is that the normalizing constant $c_1$ is multiplied by $|A|^{\frac{1}{2}}$; the structure of the function remains the same.
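As a numerical sanity check of this normalizing constant (not part of the original text), take $p=1$, where $\Gamma(\frac12)$ and $\pi^{\frac12}$ cancel; the density must then integrate to one over its support. Standard-library Python; the names and parameter values are ours.

```python
import math

def c1_p1(gamma, delta, beta, a):
    """Normalizing constant c_1 for p = 1 (Gamma(1/2) and pi^(1/2) cancel)."""
    m = (gamma + 0.5) / delta
    return (delta * math.gamma(beta + m) * a ** m
            / (math.gamma(beta) * math.gamma(m)))

def total_mass_p1(gamma, delta, beta, a, n=200000):
    """Simpson integral of c_1 (x^2)^gamma (1 - a (x^2)^delta)^(beta-1)
    over the support a (x^2)^delta < 1; should equal 1."""
    r = (1.0 / a) ** (1.0 / (2.0 * delta))   # edge of the support in x
    h = r / n
    total = 0.0
    for i in range(n + 1):
        x = i * h
        base = max(1.0 - a * (x * x) ** delta, 0.0)  # guard rounding at the edge
        fx = (x * x) ** gamma * base ** (beta - 1.0)
        w = 1 if i in (0, n) else (4 if i % 2 == 1 else 2)
        total += w * fx
    return c1_p1(gamma, delta, beta, a) * 2.0 * total * h / 3.0
```

With $\gamma=1$, $\delta=1$, $\beta=2$, $a=\frac12$ one gets $c_1=\frac{15}{8\sqrt2}$ and total mass $1$.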
Theorem 4.1. 
Let $X$ be a $p\times1$ real vector of distinct real scalar variables as elements. Consider the quadratic form $(X-\mu)'A(X-\mu)$, $E[X]=\mu$, $A>O$, where $A$ is a $p\times p$ constant positive definite matrix. Let $0<a<1$, $\delta>0$, $0<a[(X-\mu)'A(X-\mu)]^{\delta}<1$, $\Re(\beta)>0$, $\Re(\gamma)>-\frac{p}{2}$. Then $c_1$ in
$$f_1(X)=c_1\,[(X-\mu)'A(X-\mu)]^{\gamma}\,[1-a[(X-\mu)'A(X-\mu)]^{\delta}]^{\beta-1},$$
for $a[(X-\mu)'A(X-\mu)]^{\delta}<1$, and $f_1(X)=0$ elsewhere, is given by
$$c_1=\frac{\delta\,|A|^{\frac{1}{2}}\,\Gamma(\frac{p}{2})\,a^{\frac{1}{\delta}(\gamma+\frac{p}{2})}\,\Gamma(\beta+\frac{1}{\delta}(\gamma+\frac{p}{2}))}{\pi^{\frac{p}{2}}\,\Gamma(\beta)\,\Gamma(\frac{1}{\delta}(\gamma+\frac{p}{2}))},\quad \Re(\gamma)>-\frac{p}{2}.$$
The density and the normalizing constant in the complex case, corresponding to Theorem 4.1, are the following. The evaluation of the normalizing constant is parallel to that in the real case and hence only the results are given here.
Theorem 4c.1
Let $\tilde X$ be a $p\times1$ vector in the complex domain with distinct scalar complex variables as elements. Let $u=(\tilde X-\tilde\mu)^{*}A(\tilde X-\tilde\mu)$, a Hermitian form, where $\tilde\mu=E[\tilde X]$ and $A=A^{*}>O$ is a constant Hermitian positive definite matrix. Note that the Hermitian form $u$ is real, and hence the following function $f_{1c}(\tilde X)$ is real-valued and is a density when $\tilde c_1$ is its normalizing constant. Let $0<a<1$, $\delta>0$, $0<a\,u^{\delta}<1$, $\Re(\beta)>0$, $\Re(\gamma)>-p$. Then the density $f_{1c}(\tilde X)$ and the normalizing constant $\tilde c_1$ are the following:
$$f_{1c}(\tilde X)=\tilde c_1\,u^{\gamma}\,[1-a\,u^{\delta}]^{\beta-1},\qquad u=(\tilde X-\tilde\mu)^{*}A(\tilde X-\tilde\mu),\ a\,u^{\delta}<1,$$
and $f_{1c}(\tilde X)=0$ elsewhere, and
$$\tilde c_1=\frac{\delta\,|\det(A)|\,\Gamma(p)\,a^{\frac{1}{\delta}(\gamma+p)}\,\Gamma(\beta+\frac{1}{\delta}(\gamma+p))}{\pi^{p}\,\Gamma(\beta)\,\Gamma(\frac{1}{\delta}(\gamma+p))},\quad \Re(\gamma)>-p.$$
Now consider $X=(x_{ij})$, a $p\times q,\ p\le q$, matrix of rank $p$, where the $pq$ elements $x_{ij}$ are distinct real scalar variables. Consider the model
$$f_2(X)=c_2\,[\mathrm{tr}(XX')]^{\eta}\,[1-a[\mathrm{tr}(XX')]^{\delta}]^{\beta-1},$$
for $a>0$, $\delta>0$, $\Re(\beta)>0$, $\Re(\eta)>-\frac{pq}{2}$, $a[\mathrm{tr}(XX')]^{\delta}<1$, and $f_2(X)=0$ outside this region. Note that $\mathrm{tr}(XX')$ is the sum of squares of the $pq$ real scalar variables here, and $u=\mathrm{tr}(XX')\Rightarrow \mathrm{d}X=\frac{\pi^{\frac{pq}{2}}}{\Gamma(\frac{pq}{2})}u^{\frac{pq}{2}-1}\mathrm{d}u$. Then, proceeding as in the derivation of $c_1$ in $f_1(X)$, or in Theorem 4.1, we have the following:
Theorem 4.2. 
Let $f_2(X)$ be as defined in (4.2) for $X$ a real $p\times q,\ p\le q$, matrix of rank $p$. Then the normalizing constant $c_2$ in $f_2(X)$ is given by
$$c_2=\frac{\Gamma(\frac{pq}{2})}{\pi^{\frac{pq}{2}}}\,\frac{\delta\,\Gamma(\beta+\frac{1}{\delta}(\eta+\frac{pq}{2}))\,a^{\frac{1}{\delta}(\eta+\frac{pq}{2})}}{\Gamma(\beta)\,\Gamma(\frac{1}{\delta}(\eta+\frac{pq}{2}))}$$
for $\Re(\beta)>0$, $a>0$, $\delta>0$, $\Re(\eta)>-\frac{pq}{2}$.
In the corresponding complex case the result is the following:
Theorem 4c.2
Let $\tilde X$ be a $p\times q,\ p\le q$, matrix in the complex domain of rank $p$, where the $pq$ elements are distinct scalar complex variables. Then the following function $f_{2c}(\tilde X)$ is a density:
$$f_{2c}(\tilde X)=\tilde c_2\,[\mathrm{tr}(\tilde X\tilde X^{*})]^{\eta}\,[1-a[\mathrm{tr}(\tilde X\tilde X^{*})]^{\delta}]^{\beta-1},\qquad 0<a[\mathrm{tr}(\tilde X\tilde X^{*})]^{\delta}<1,$$
for $a>0$, $\delta>0$, $\Re(\eta)>-pq$, $\Re(\beta)>0$, and $f_{2c}(\tilde X)=0$ elsewhere, where
$$\tilde c_2=\frac{\Gamma(pq)\,\delta\,\Gamma(\beta+\frac{1}{\delta}(\eta+pq))\,a^{\frac{1}{\delta}(\eta+pq)}}{\pi^{pq}\,\Gamma(\beta)\,\Gamma(\frac{1}{\delta}(\eta+pq))}$$
for $\Re(\beta)>0$, $0<a<1$, $\delta>0$, $\Re(\eta)>-pq$.
A more general model is available in the real case by replacing $XX'$ by $A^{\frac{1}{2}}(X-M)B(X-M)'A^{\frac{1}{2}}$, where $A>O$, $B>O$ are constant $p\times p$ and $q\times q$ positive definite matrices, respectively, and $M=E[X]$. Consider the transformation $Y=A^{\frac{1}{2}}(X-M)B^{\frac{1}{2}}\Rightarrow \mathrm{d}Y=|A|^{\frac{q}{2}}|B|^{\frac{p}{2}}\mathrm{d}X$ by Lemma 2.1. Then the density of $Y$ is the same as $f_2(X)$ of (4.3). Hence, the only change is that the normalizing constant in (4.3) is multiplied by $|A|^{\frac{q}{2}}|B|^{\frac{p}{2}}$, and therefore this case is not listed here separately. A similar comment holds in the complex case, where the multiplicative factor is $|\det(A)|^{q}|\det(B)|^{p}$.
Note 4.1. 
From the model in Theorem 4.2 one can easily evaluate the density of $\mathrm{tr}(XX')$, or, in the general case, of $\mathrm{tr}(A^{\frac{1}{2}}(X-M)B(X-M)'A^{\frac{1}{2}})$, from the normalizing constant $c_2$. Treating $c_2=c_2(\eta)$, we have $E[[\mathrm{tr}(XX')]^{h}]=\frac{c_2(\eta)}{c_2(\eta+h)}$ for arbitrary $h$. Hence, by the inverse Mellin transform, one has the density of $\mathrm{tr}(XX')$ or that of the general case. The same comment holds for Theorem 4.1, and similar comments hold in the complex case.
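The moment device in Note 4.1 can be illustrated concretely in the scalar case $p=q=1$ of Theorem 4.2: the $h$-th moment of $\mathrm{tr}(XX')=x^2$ obtained by direct integration must equal the ratio $c_2(\eta)/c_2(\eta+h)$. Standard-library Python sketch; the names and parameter values are ours.

```python
import math

def c2_p1q1(eta, delta, beta, a):
    """c_2 of Theorem 4.2 at p = q = 1 (Gamma(1/2)/pi^(1/2) = 1),
    viewed as a function of eta as in Note 4.1."""
    m = (eta + 0.5) / delta
    return (delta * math.gamma(beta + m) * a ** m
            / (math.gamma(beta) * math.gamma(m)))

def moment_by_integration(eta, delta, beta, a, h, n=200000):
    """E[(x^2)^h] for the density c_2 (x^2)^eta (1 - a(x^2)^delta)^(beta-1),
    computed by Simpson integration over the support a (x^2)^delta < 1."""
    r = (1.0 / a) ** (1.0 / (2.0 * delta))
    step = r / n
    total = 0.0
    for i in range(n + 1):
        x = i * step
        base = max(1.0 - a * (x * x) ** delta, 0.0)
        fx = (x * x) ** (eta + h) * base ** (beta - 1.0)
        w = 1 if i in (0, n) else (4 if i % 2 == 1 else 2)
        total += w * fx
    return c2_p1q1(eta, delta, beta, a) * 2.0 * total * step / 3.0
```

With $\eta=1$, $\delta=1$, $\beta=2$, $a=\frac12$, $h=1$ both routes give $E[x^2]=\frac{6}{7}$.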
If the multiplicative factor $[\mathrm{tr}(XX')]^{\eta}$ in the real case is replaced by a determinant $|XX'|^{\gamma}$, let us see what happens to such a model. Again, let $X$ be a $p\times q,\ p\le q$, matrix of rank $p$ with $pq$ distinct real scalar variables as elements. Consider the model
$$f_3(X)=c_3\,|XX'|^{\gamma}\,[1-a[\mathrm{tr}(XX')]^{\delta}]^{\beta-1},$$
for $\Re(\beta)>0$, $0<a<1$, $a[\mathrm{tr}(XX')]^{\delta}<1$, $\delta>0$, $\Re(\gamma)>-\frac{q}{2}+\frac{p-1}{2}$, and $f_3(X)=0$ elsewhere. Then we have the following result:
Theorem 4.3. 
Let X and the parameters be as defined in (4.4). Then,
$$c_3=\frac{\Gamma_p(\frac{q}{2})\,\delta\,a^{\frac{p}{\delta}(\gamma+\frac{q}{2})}\,\Gamma(\beta+\frac{p}{\delta}(\gamma+\frac{q}{2}))\,\Gamma(p(\gamma+\frac{q}{2}))}{\pi^{\frac{pq}{2}}\,\Gamma(\beta)\,\Gamma(\frac{p}{\delta}(\gamma+\frac{q}{2}))\,\Gamma_p(\gamma+\frac{q}{2})}$$
for $\Re(\gamma)>-\frac{q}{2}+\frac{p-1}{2}$, $0<a<1$, $\delta>0$, $\Re(\beta)>0$, $p\le q$.
Proof: 
Let $X=TU_1$, where $T=(t_{ij})$ is a lower triangular matrix and $U_1$ is a semi-orthonormal matrix, $U_1U_1'=I_p$, with both $T$ and $U_1$ uniquely chosen. Then, from Lemma 2.4, after integrating out the differential element corresponding to the semi-orthonormal matrix $U_1$, one has the relationship
$$\mathrm{d}X=\frac{\pi^{\frac{pq}{2}}}{\Gamma_p(\frac{q}{2})}\Big[\prod_{j=1}^{p}|t_{jj}|^{q-j}\Big]\mathrm{d}T,\qquad XX'=TT'.$$
Note that $|XX'|=|TT'|=\prod_{j=1}^{p}t_{jj}^{2}$ and
$$|TT'|^{\gamma}\Big\{\prod_{j=1}^{p}|t_{jj}|^{q-j}\Big\}=\prod_{j=1}^{p}(t_{jj}^{2})^{\gamma+\frac{q}{2}-\frac{j}{2}},\qquad \mathrm{tr}(TT')=\sum_{j=1}^{p}t_{jj}^{2}+\sum_{i>j}t_{ij}^{2},$$
where $\sum_{i>j}t_{ij}^{2}$ contains $p(p-1)/2$ terms. Consider a polar coordinate transformation on all these $p(p+1)/2$ variables $t_{ij}$: $\{t_{11},t_{22},\ldots,t_{pp},t_{21},\ldots,t_{p,p-1}\}\to\{r,\theta_1,\ldots,\theta_{k-1}\}$, $k=p(p+1)/2$. The Jacobian element was already discussed in the proof of Theorem 2.1; $r$ has exponent $k-1$ in the Jacobian element. Then, collecting all factors containing $r$ in the transformed $f_3(X)$, we have
$$(r^{2})^{p(\gamma+\frac{q}{2})-\frac{1}{2}}\,[1-a(r^{2})^{\delta}]^{\beta-1},$$
and the integral over $r$ gives the following:
$$\int_0^{a^{-\frac{1}{2\delta}}}(r^{2})^{p(\gamma+\frac{q}{2})-\frac{1}{2}}\,(1-a(r^{2})^{\delta})^{\beta-1}\mathrm{d}r=\frac{1}{2\delta\,a^{\frac{p}{\delta}(\gamma+\frac{q}{2})}}\,\frac{\Gamma(\frac{p}{\delta}(\gamma+\frac{q}{2}))\,\Gamma(\beta)}{\Gamma(\beta+\frac{p}{\delta}(\gamma+\frac{q}{2}))}$$
for $\Re(\beta)>0$, $a>0$, $\delta>0$, $\Re(\gamma)>-\frac{q}{2}$. The integral over all the sine and cosine products is available from the proof of Theorem 2.1, namely $2\Gamma_p(\gamma+\frac{q}{2})/\Gamma(p(\gamma+\frac{q}{2}))$ for $\Re(\gamma)>-\frac{q}{2}+\frac{p-1}{2}$. Taking the product of these quantities establishes the theorem. In the complex case the density and the normalizing constant are the following:
$$f_{3c}(\tilde X)=\tilde c_3\,|\det(\tilde X\tilde X^{*})|^{\gamma}\,[1-a[\mathrm{tr}(\tilde X\tilde X^{*})]^{\delta}]^{\beta-1}$$
for $\tilde X$ a $p\times q,\ p\le q$, matrix of rank $p$ in the complex domain with $pq$ distinct complex scalar variables as elements, such that $a[\mathrm{tr}(\tilde X\tilde X^{*})]^{\delta}<1$, $a>0$, $\delta>0$, $\Re(\beta)>0$, $\Re(\gamma)>-q+p-1$, and $f_{3c}(\tilde X)=0$ elsewhere. The normalizing constant $\tilde c_3$ is given in the following theorem.
Theorem 4c.3
For X ˜ , p , q , δ , a , γ as defined in (4c.4) and following through the derivation parallel to that in the real case, the normalizing constant c ˜ 3 is the following:
$$\tilde c_3=\frac{\tilde{\Gamma}_p(q)\,\delta\,a^{\frac{p}{\delta}(\gamma+q)}\,\Gamma(\beta+\frac{p}{\delta}(\gamma+q))\,\Gamma(p(\gamma+q))}{\pi^{pq}\,\Gamma(\beta)\,\Gamma(\frac{p}{\delta}(\gamma+q))\,\tilde{\Gamma}_p(\gamma+q)}$$
for $\Re(\gamma)>-q+p-1$, $a>0$, $\delta>0$, $\Re(\beta)>0$, $p\le q$.
As explained in Note 4.1, arbitrary moments and the exact density of $|XX'|$, or of its general form $|A^{\frac{1}{2}}(X-M)B(X-M)'A^{\frac{1}{2}}|$, in the real case are available from the normalizing constant $c_3$; the corresponding comment holds in the complex case.
Note that here also a more general model in the real case is available by replacing $XX'$ by $A^{\frac{1}{2}}(X-M)B(X-M)'A^{\frac{1}{2}}$, as mentioned before; the only change is that the normalizing constant is multiplied by $|A|^{\frac{q}{2}}|B|^{\frac{p}{2}}$, and hence this general case is not listed here separately. In the complex case the multiplicative factor is $|\det(A)|^{q}|\det(B)|^{p}$. A still more general case is available by introducing another factor containing a trace into $f_3(X)$. Consider the following model in the real case:
$$f_4(X)=c_4\,|XX'|^{\gamma}\,[\mathrm{tr}(XX')]^{\eta}\,[1-a[\mathrm{tr}(XX')]^{\delta}]^{\beta-1}$$
for $X$ a $p\times q,\ p\le q$, matrix of rank $p$, $a>0$, $\delta>0$, $a[\mathrm{tr}(XX')]^{\delta}<1$, $\Re(\beta)>0$, $\Re(\eta)>0$, $\Re(\gamma)>-\frac{q}{2}+\frac{p-1}{2}$, and $f_4(X)=0$ outside this region. Proceeding exactly as in the proof of Theorem 4.3, we have the following result:
Theorem 4.4. 
Let X , p , q , δ , η , a , γ be as defined in (4.5) and ( η ) > 0 . Then, the normalizing constant c 4 is given by
$$c_4=\frac{\Gamma_p(\frac{q}{2})}{\pi^{\frac{pq}{2}}}\,\frac{\delta\,a^{\frac{1}{\delta}(p(\gamma+\frac{q}{2})+\eta)}\,\Gamma(\beta+\frac{1}{\delta}(p(\gamma+\frac{q}{2})+\eta))\,\Gamma(p(\gamma+\frac{q}{2}))}{\Gamma(\beta)\,\Gamma(\frac{1}{\delta}(p(\gamma+\frac{q}{2})+\eta))\,\Gamma_p(\gamma+\frac{q}{2})}.$$
The model corresponding to (4.5) in the complex case is the following, where $\tilde X$ is a $p\times q,\ p\le q$, matrix in the complex domain of rank $p$ with $pq$ distinct scalar complex variables as elements, such that $a[\mathrm{tr}(\tilde X\tilde X^{*})]^{\delta}<1$, $a>0$, $\delta>0$, and the parameters satisfy $\Re(\gamma)>-q+p-1$, $\Re(\eta)>0$, $\Re(\beta)>0$:
$$f_{4c}(\tilde X)=\tilde c_4\,|\det(\tilde X\tilde X^{*})|^{\gamma}\,[\mathrm{tr}(\tilde X\tilde X^{*})]^{\eta}\,[1-a[\mathrm{tr}(\tilde X\tilde X^{*})]^{\delta}]^{\beta-1}$$
and f 4 c ( X ˜ ) = 0 elsewhere. By using steps parallel to those in the derivation of the normalizing constant c 4 in the real case, one can see that the normalizing constant in the complex case is given in the following theorem:
Theorem 4c.4
For $\tilde X,a,\delta,\eta,\beta,\gamma$ as defined in (4c.5), the normalizing constant $\tilde c_4$ is the following:
$$\tilde c_4=\frac{\tilde{\Gamma}_p(q)\,\delta\,a^{\frac{1}{\delta}(p(\gamma+q)+\eta)}\,\Gamma(\beta+\frac{1}{\delta}(p(\gamma+q)+\eta))\,\Gamma(p(\gamma+q))}{\pi^{pq}\,\Gamma(\beta)\,\Gamma(\frac{1}{\delta}(p(\gamma+q)+\eta))\,\tilde{\Gamma}_p(\gamma+q)}.$$
Again, a more general model in the real case is available by replacing $XX'$ by $A^{\frac{1}{2}}(X-M)B(X-M)'A^{\frac{1}{2}}$, $A>O$, $B>O$, $M=E[X]$. Also, from the structure of $f_4(X)$ it is clear that the exact density and arbitrary moments of the determinant $|XX'|$, or of $|A^{\frac{1}{2}}(X-M)B(X-M)'A^{\frac{1}{2}}|$, are available by replacing the parameter $\gamma$ by $\gamma+h$ and then taking the ratio of the normalizing constants $c_4$, as explained before. Similarly, the exact density or arbitrary moments of $\mathrm{tr}(XX')$, or of its general form, are available by replacing $\eta$ by $\eta+h$ and taking the ratio of the normalizing constants $c_4$. Similar comments hold in the complex domain, where the multiplicative factor is $|\det(A)|^{q}|\det(B)|^{p}$.
Now we consider an exponentiated type-1 beta type model. Again, let $X$ be a $p\times q,\ p\le q$, matrix of rank $p$ with $pq$ distinct real scalar variables as elements. Consider the model
$$f_5(X)=c_5\,|XX'|^{\gamma}\,[\mathrm{tr}(XX')]^{\eta}\,e^{-\alpha[\mathrm{tr}(XX')]^{\delta}}\,[1-a\,e^{-[\mathrm{tr}(XX')]^{\delta}}]^{\beta-1}$$
for $a>0$, $0<a\,e^{-[\mathrm{tr}(XX')]^{\delta}}<1$, $\delta>0$, $\Re(\beta)>0$, $\Re(\alpha)>0$, $\Re(\eta)>0$, $\Re(\gamma)>-\frac{q}{2}+\frac{p-1}{2}$, and $f_5(X)=0$ elsewhere. Then we have the following result:
Theorem 4.5. 
Let X and the parameters be as defined in f 5 ( X ) . Then, the normalizing constant c 5 is given by the following:
$$c_5=\frac{\Gamma_p(\frac{q}{2})\,\delta\,\Gamma(p(\gamma+\frac{q}{2}))}{\pi^{\frac{pq}{2}}\,\Gamma(\frac{1}{\delta}(p(\gamma+\frac{q}{2})+\eta))\,\Gamma_p(\gamma+\frac{q}{2})}\times\Big[\zeta\big[\{(\tfrac{1}{\delta}(p(\gamma+\tfrac{q}{2})+\eta),\alpha)\}:1-\beta;\ ;a\big]\Big]^{-1}$$
for $\delta>0$, $a>0$, $\Re(\alpha)>0$, $\Re(\beta)>0$, $\Re(\eta)>0$, $\Re(\gamma)>-\frac{q}{2}+\frac{p-1}{2}$, where $\zeta(\cdot)$ is the extended zeta function defined in Theorem 3.5.
The proof is straightforward. Since $0<a\,e^{-[\mathrm{tr}(XX')]^{\delta}}<1$, we can use a binomial expansion and write
$$[1-a\,e^{-[\mathrm{tr}(XX')]^{\delta}}]^{\beta-1}=\sum_{k=0}^{\infty}\frac{(1-\beta)_k\,a^{k}}{k!}\,e^{-k[\mathrm{tr}(XX')]^{\delta}}.$$
The exponential trace part then joins with the exponential trace factor remaining in $f_5(X)$, becoming $e^{-(\alpha+k)[\mathrm{tr}(XX')]^{\delta}}$. One can then integrate out by using Theorem 2.2, replacing $\alpha$ there by $\alpha+k$, and interpreting the result in terms of an extended zeta function; thus Theorem 4.5 is established. The corresponding model in the complex domain is the following:
$$f_{5c}(\tilde X)=\tilde c_5\,|\det(\tilde X\tilde X^{*})|^{\gamma}\,[\mathrm{tr}(\tilde X\tilde X^{*})]^{\eta}\,e^{-\alpha[\mathrm{tr}(\tilde X\tilde X^{*})]^{\delta}}\,[1-a\,e^{-[\mathrm{tr}(\tilde X\tilde X^{*})]^{\delta}}]^{\beta-1}$$
for $\tilde X$ a $p\times q,\ p\le q$, matrix of rank $p$ in the complex domain with $pq$ distinct scalar complex variables as elements, $a>0$, $\delta>0$, $a\,e^{-[\mathrm{tr}(\tilde X\tilde X^{*})]^{\delta}}<1$, $\Re(\alpha)>0$, $\Re(\eta)>0$, $\Re(\beta)>0$, $\Re(\gamma)>-q+p-1$, and $f_{5c}(\tilde X)=0$ elsewhere. Then, following through the derivation parallel to that in the real case, the normalizing constant $\tilde c_5$ is the following:
Theorem 4c.5
Under the conditions stated in (4c.6),
$$\tilde c_5=\frac{\tilde{\Gamma}_p(q)\,\delta\,\Gamma(p(\gamma+q))}{\pi^{pq}\,\Gamma(\frac{1}{\delta}(p(\gamma+q)+\eta))\,\tilde{\Gamma}_p(\gamma+q)}\times\Big[\zeta\big[\{(\tfrac{1}{\delta}(p(\gamma+q)+\eta),\alpha)\}:1-\beta;\ ;a\big]\Big]^{-1}.$$
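The series expansion behind Theorems 4.5 and 4c.5, $[1-a\,e^{-u}]^{\beta-1}=\sum_k\frac{(1-\beta)_k\,a^k}{k!}e^{-ku}$, can be checked for a scalar $u$ with a few lines of standard-library Python (the names are ours):

```python
import math

def poch(a, k):
    """Pochhammer symbol (a)_k = a (a+1) ... (a+k-1), (a)_0 = 1."""
    out = 1.0
    for j in range(k):
        out *= a + j
    return out

def type1_expansion(a, beta, u, terms=60):
    """Partial sum of [1 - a e^{-u}]^{beta-1} = sum_k (1-beta)_k a^k e^{-k u} / k!,
    convergent since 0 < a e^{-u} < 1."""
    return sum(poch(1.0 - beta, k) * a ** k / math.factorial(k)
               * math.exp(-k * u) for k in range(terms))
```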

5. Concluding Remarks

Special cases of all the normalizing constants reported in Sections 2, 3 and 4, namely those for $\delta=1$ and for the exponent of the trace factor $\eta=0$, are available in the recent book [16]. In the real case, the models in Theorems 3.1 and 4.1, with a nonzero exponent $\gamma$ of the gamma factor and with the exponent of the trace factor $\eta=0$, together with the corresponding multivariate gamma distributions, are connected to geometrical probability problems; see [15]. The theory of geometrical probabilities in the complex domain is not yet developed; when such a theory is developed, all the results in this paper will be applicable there. Various models are defined in this paper by evaluating the corresponding normalizing constants. In order to limit the size of the paper, we did not delve into the properties of these models.
For further work, one can study the properties of the various models introduced here. In the light of [17], one can look into Bayesian models connected with the various models here. One can explore the distributions of quantities such as trace or determinant connected with the models in this paper.
The author received no external funding for this research. The author declares no conflict of interest.

References

1. Mathai, A.M. A pathway to matrix-variate gamma and normal densities. Linear Algebra and its Applications 2005, 396, 317-328.
2. Deng, X. Texture Analysis and Physical Interpretation of Polarimetric SAR Data. Ph.D. Thesis, Universitat Politecnica de Catalunya, Barcelona, Spain, 2016.
3. Bombrun, L.; Beaulieu, J.-M. Fisher distribution for texture modeling of polarimetric SAR data. IEEE Geoscience and Remote Sensing Letters 2008, 5, 512-516.
4. Frery, A.C.; Muller, H.J.; Yanasse, C.C.F.; Sant'Anna, S.J.S. A model for extremely heterogeneous clutter. IEEE Transactions on Geoscience and Remote Sensing 1997, 35, 648-659.
5. Jakeman, E.; Pusey, P. Significance of K distributions in scattering experiments. Physical Review Letters 1978, 40, 546-550.
6. Yueh, S.H.; Kong, J.A.; Jao, J.K.; Shin, R.T.; Novak, L.M. K-distribution and polarimetric terrain radar clutter. Journal of Electromagnetic Waves and Applications 1989, 3, 747-768.
7. Díaz-García, J.A.; Gutiérrez-Jáimez, R. Compound and scale mixture of matricvariate and matrix variate Kotz-type distributions. Journal of the Korean Statistical Society 2010, 39, 75-82.
8. Kersten, P.R.; Anfinsen, S.N.; Doulgeris, A.P. The Wishart-Kotz classifier for multilook polarimetric SAR data. IEEE, 2012.
9. Kotz, S.; Nadarajah, S. Some extremal type elliptical distributions. Statistics & Probability Letters 2001, 54, 171-182.
10. Nadarajah, S. The Kotz-type distribution with applications. Statistics 2003, 37, 341-358.
11. Sarr, A.; Gupta, A.K. Estimation of the precision matrix of multivariate Kotz type model. Journal of Multivariate Analysis 2009, 100, 742-752.
12. Mathai, A.M. Jacobians of Matrix Transformations and Functions of Matrix Argument; World Scientific Publishing: New York, 1997.
13. Mathai, A.M. An extended zeta function with applications in model building and Bayesian analysis. Preprint, 2023.
14. Mathai, A.M.; Provost, S.B. On q-logistic and related distributions. IEEE Transactions on Reliability 2006, 55, 237-244.
15. Mathai, A.M. An Introduction to Geometrical Probability: Distributional Aspects with Applications; Gordon and Breach Science Publishers: Amsterdam, 1999.
16. Mathai, A.M.; Provost, S.B.; Haubold, H.J. Multivariate Statistical Analysis in the Real and Complex Domains; Springer Nature: Switzerland, 2022.
17. Benavoli, A.; Facchini, A.; Zaffalon, M. Quantum mechanics: The Bayesian theory generalised to the space of Hermitian matrices. arXiv:1605.08177 [quant-ph], 2016.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.