Preprint
Article

Better Estimate of Four Moments Theorems and Its Application to Polynomials

A peer-reviewed article of this preprint also exists.

This version is not peer-reviewed

Submitted: 01 October 2023
Posted: 03 October 2023

Abstract
We introduce the lower chaos grade of a real-valued function F defined on a Markov triple (E, μ, Γ), where μ is a probability measure and Γ is the carré du champ operator. As an application of this concept, we obtain a better estimate in the four moments theorem for Markov diffusion generators obtained by Bourguin et al. (2019). For our purpose, we need to find the largest nonzero element of the set of eigenvalues attached to the eigenfunctions appearing when the square of a random variable F, coming from a Markov triple structure, is expressed as a sum of eigenfunctions. We give some examples of eigenfunctions of diffusion generators such as the Ornstein-Uhlenbeck, Jacobi and Romanovski-Routh generators. In particular, two bounds, called the four moments theorem and the fourth moment theorem respectively, are provided for the normal approximation in the case where a random variable F comes from eigenfunctions of a Jacobi generator.
Subject: Computer Science and Mathematics  -   Probability and Statistics

1. Introduction

The aim of this paper is to find a better estimate in the four moments theorems for a random variable belonging to the Markov chaos studied by Bourguin et al. in the paper [3]. The first study in this field is the central limit theorem, called the fourth moment theorem, studied by Nualart and Peccati in [18]. These authors found a necessary and sufficient condition for a sequence of random variables, belonging to a fixed Wiener chaos, to converge in distribution to a Gaussian random variable. More precisely, let $X = \{X(h),\ h \in H\}$ be an isonormal Gaussian process defined on a probability space $(\Omega, \mathcal{F}, P)$, where $H$ is a real separable Hilbert space.
Theorem 1. 
[Fourth moment theorem] Fix an integer $q \ge 2$, and let $\{F_n,\ n \ge 1\}$ be a sequence of random variables belonging to the $q$th Wiener chaos with $E[F_n^2] = 1$ for all $n \ge 1$. Then $F_n \xrightarrow{\mathcal{L}} Z$ if and only if $E[F_n^4] \to 3$, where $Z$ is a standard Gaussian random variable and the notation $\xrightarrow{\mathcal{L}}$ denotes convergence in distribution.
Such a result gives a dramatic simplification of the method of moments from the point of view of convergence in distribution. The above fourth moment theorem is expressed in terms of the Malliavin derivative in [17]. However, the results given in [17,18] do not provide any information about the rate of convergence, whereas, in the paper [10], the authors prove that Theorem 1 can be recovered from an estimate of the Kolmogorov (or total variation, Wasserstein) distance obtained by combining Malliavin calculus (see, e.g., [13,15,16]) and Stein’s method for normal approximation (see, e.g., [4,20,21]). For more explanation of these techniques, we refer to the papers [6,9,10,11,12,13,14].
One of the remarkable achievements of the Nourdin-Peccati approach (see Theorem 3.1 in [10]) is the quantification of the fourth moment theorem for functionals of Gaussian fields. In the particular case where F is an element of the qth Wiener chaos of X with $E[F^2] = 1$, the upper bound on the Kolmogorov distance is given by
$$d_{Kol}(F,Z) \le \sqrt{\frac{q-1}{3q}\left(E[F^4]-3\right)}. \tag{1}$$
Here $E[F^4]-3$ is just the fourth cumulant $\kappa_4(F)$ of $F$.
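To see the fourth moment criterion at work, here is a minimal Monte Carlo sketch (an illustration of ours, not taken from [18]; function names are hypothetical), assuming Python with numpy. The sequence $F_n = n^{-1/2}\sum_{i=1}^n H_2(X_i)/\sqrt{2}$ lives in the second Wiener chaos, has $E[F_n^2]=1$, and satisfies $E[F_n^4] = 3 + 12/n$, so the fourth moment detects its convergence in distribution to a standard Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

def fourth_moment(n, n_samples=100_000):
    """Estimate E[F_n^4] for F_n = n^{-1/2} * sum_i H_2(X_i)/sqrt(2),
    a sequence in the 2nd Wiener chaos with E[F_n^2] = 1."""
    X = rng.standard_normal((n_samples, n))
    F = ((X**2 - 1) / np.sqrt(2)).sum(axis=1) / np.sqrt(n)
    return (F**4).mean()

# The exact value is 3 + 12/n (the fourth cumulant 12/n vanishes as n grows)
for n in (1, 10, 100):
    print(n, round(fourth_moment(n), 2))
```

The estimated fourth moments decrease toward 3, in line with Theorem 1.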
Recently, the author in [8] proved that the fourth moment theorem also holds in the general framework of Markov diffusion generators. More precisely, under a certain spectral condition on the Markov diffusion generator, a sequence of eigenfunctions of such a generator satisfies the bound given in (1). In particular, this new method avoids the use of the complicated product formula for multiple integrals. After this work, the authors in [1] introduced a Markov chaos of eigenfunctions that is less restrictive than the Markov chaos defined in [8]. Using this Markov chaos, they derived the quantitative four moments theorem for convergence of the eigenfunctions towards Gaussian, Gamma and Beta distributions. Furthermore, the authors in [3] showed that the convergence of the elements of a Markov chaos to a Pearson distribution can still be bounded with just the first four moments by using the new concept of chaos grade.
For the purposes of this paper, we start by referring to the estimate given in Theorem 3.9, obtained by Bourguin et al. in [3]. Pearson diffusions are Itô diffusions given by the following stochastic differential equation (SDE):
$$dX_t = a(X_t)\,dt + \sqrt{2\theta\, b(X_t)}\,dB_t, \tag{2}$$
where $a(x) = -\theta(x-m)$ and $b(x) = b_2 x^2 + b_1 x + b_0$. Given the generator L defined on $L^2(E,\mu)$ by
$$Lf(x) = -\theta(x-m)f'(x) + \theta\, b(x) f''(x), \tag{3}$$
its invariant measure μ is a Pearson distribution and the set of eigenvalues of $-L$ is given by
$$\Lambda = \left\{ n\big(1-(n-1)b_2\big)\theta \ :\ n \in \mathbb{N}_0,\ b_2 < \frac{1}{2n-1} \right\}. \tag{4}$$
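The spectral set above is easy to tabulate. The helper below is our own illustration (the function name is hypothetical, not from [3]); it lists the eigenvalues $n(1-(n-1)b_2)\theta$ of $-L$ for as long as the condition $b_2 < 1/(2n-1)$ holds:

```python
def pearson_eigenvalues(theta, b2, n_max):
    """Eigenvalues n*(1-(n-1)*b2)*theta of -L for the Pearson generator,
    kept only while the spectral condition b2 < 1/(2n-1) holds."""
    out = []
    for n in range(n_max + 1):
        if n >= 1 and b2 >= 1.0 / (2 * n - 1):
            break  # beyond this n there are no further discrete eigenvalues
        out.append(n * (1 - (n - 1) * b2) * theta)
    return out

# Ornstein-Uhlenbeck case (b2 = 0): full spectrum {0, θ, 2θ, ...} of -L
print(pearson_eigenvalues(theta=1.0, b2=0.0, n_max=5))
# Heavy-tailed case b2 > 0: only finitely many eigenvalues survive
print(pearson_eigenvalues(theta=1.0, b2=0.25, n_max=10))
```

In the second call the loop stops at n = 3, since 0.25 ≥ 1/5: heavy-tailed Pearson diffusions carry only finitely many discrete eigenvalues.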
Theorem 2. 
(Bourguin et al. (2019)) Let ν be a Pearson distribution associated to the diffusion given by SDE (2). Let F be a chaotic eigenfunction of the generator L with eigenvalue λ, chaos grade u and moments up to order 4. Set $G = F + m$ and $\xi = u - 2(1-b_2)$. Then it holds that
$$E\Big[\big(\Gamma(G,-L^{-1}G)-b(G)\big)^2\Big] \le 2\Big(1-b_2-\frac{u}{4}\Big)E[U(G)] + (\xi)_+\,\frac{1-b_2}{2}\,E[Q^2(G)], \tag{5}$$
where $(\xi)_+ = \xi$ for $\xi > 0$ and $(\xi)_+ = 0$ for $\xi \le 0$, and the polynomials Q and U are given by
$$Q(x) = x^2 + \frac{2(b_1+m)}{2b_2-1}\,x + \frac{1}{b_2-1}\left(b_0 + \frac{m(b_1+m)}{2b_2-1}\right), \tag{6}$$
$$U(x) = (1-b_2)\,Q^2(x) - \frac{1}{12}\big(Q'(x)\big)^3(x-m). \tag{7}$$
The notations Γ and $L^{-1}$ in the above theorem, related to the Markov generator, are explained in Section 2.
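As a sanity check on the polynomials Q and U just defined, one can specialize to a Gaussian target ($b_2 = b_1 = 0$, $b_0 = \sigma^2$) and verify symbolically that $Q(x) = (x-m)^2 - \sigma^2$ and $3U(x) = (x-m)^4 - 6\sigma^2(x-m)^2 + 3\sigma^4$, which is exactly the expression that reappears in the proof of Corollary 1 below. A sketch of ours, assuming Python with sympy:

```python
import sympy as sp

x, m, s = sp.symbols('x m sigma')

def QU(b2, b1, b0, mean):
    # polynomials Q and U of Theorem 2, transcribed from their definitions
    Q = x**2 + 2*(b1 + mean)/(2*b2 - 1)*x + (b0 + mean*(b1 + mean)/(2*b2 - 1))/(b2 - 1)
    U = (1 - b2)*Q**2 - sp.Rational(1, 12)*sp.diff(Q, x)**3*(x - mean)
    return sp.expand(Q), sp.expand(U)

# Gaussian target: b2 = b1 = 0, b0 = sigma^2
Q, U = QU(0, 0, s**2, m)
print(sp.expand(Q - ((x - m)**2 - s**2)))                          # 0
print(sp.expand(3*U - ((x - m)**4 - 6*s**2*(x - m)**2 + 3*s**4)))  # 0
```

Both differences simplify to zero, confirming the Gaussian specialization used later.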
In this paper, we improve the estimate given in Theorem 2 by introducing the notion of the lower chaos grade in the set of eigenvalues of the generator L. For example, if the target distribution ν in Theorem 2 is the standard Gaussian measure, then the diffusion coefficients are given as $b_2 = b_1 = 0$ and $b_0 = 1$. Since a chaotic random variable $F = I_q(f)$, $f \in H^{\odot q}$, with $E[F^2] = 1$ has chaos grade $u = 2$, the second term in the bound (5) vanishes and the bound reads:
$$E\Big[\big(\Gamma(F,-L^{-1}F)-1\big)^2\Big] \le \frac{1}{3}\left(E[F^4]-3\right). \tag{8}$$
Note that $d_{Kol}(F,Z) \le \sqrt{E\big[(\Gamma(F,-L^{-1}F)-1)^2\big]}$. Hence the bound in (1) provides a better estimate for the four moments theorem in comparison with the bound (8) in the case $F = I_q(f)$. In this paper, we develop a new technique that provides such improved bounds.
Also, we give two bounds, called the four moments theorem and the fourth moment theorem respectively, for the normal approximation in the case where a random variable F comes from eigenfunctions of a Jacobi generator. One of the bounds follows from our main result, Theorem 3 below, and the other bound, obtained using the result in [7], shows that the fourth moment theorem holds even if the upper chaos grade is greater than two.
The rest of the paper is organized as follows: Section 2 reviews some basic notation and results on Markov diffusion generators. Our main result, in particular the bound in Theorem 3, is presented in Section 3. Finally, as an application of our main results, in Section 4 we consider the case where the random variable G in Theorem 2 comes from an eigenfunction of a generator associated to a Pearson distribution.

2. Preliminaries

In this section, we recall some basic facts about Markov diffusion generators. The reader is referred to [2] for a more detailed explanation. We begin with the definition of a Markov triple $(E, \mathcal{F}, \mu)$ in the sense of [2]. To the infinitesimal generator L of a Markov semigroup $P = (P_t)_{t\ge 0}$ with $L^2(\mu)$-domain $D(L)$, we associate a bilinear form Γ. Assume that we are given a vector space $A_0 \subset D(L)$ such that for every pair (F, G) of elements of $A_0$, the product FG is in $D(L)$ ($A_0$ is an algebra). On this algebra $A_0$, the bilinear map (carré du champ operator) Γ is defined by
$$\Gamma(F,G) = \frac{1}{2}\big(L(FG) - F\,LG - G\,LF\big) \tag{9}$$
for every $(F,G) \in A_0 \times A_0$. As the carré du champ operator Γ and the measure μ completely determine the symmetric Markov generator L, we work throughout this paper with a Markov triple $(E, \mathcal{F}, \mu)$, equipped with a probability measure μ on a state space $(E, \mathcal{F})$ and a symmetric bilinear map $\Gamma : A_0 \times A_0 \to A_0$ such that $\Gamma(F,F) \ge 0$.
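For a concrete instance of this definition, take the one-dimensional Ornstein-Uhlenbeck generator $Lf = f'' - xf'$ (treated in Section 4.1); the diffusion property then forces $\Gamma(f,g) = f'g'$. The following sympy sketch (our own illustration) checks this on two Hermite polynomials, together with the eigenfunction relation $LH_3 = -3H_3$:

```python
import sympy as sp

x = sp.symbols('x')

def L(f):
    # one-dimensional Ornstein-Uhlenbeck generator: Lf = f'' - x f'
    return sp.diff(f, x, 2) - x*sp.diff(f, x)

def gamma(f, g):
    # carré du champ: Γ(f,g) = (L(fg) - f·Lg - g·Lf)/2
    return sp.expand((L(f*g) - f*L(g) - g*L(f))/2)

f = x**3 - 3*x   # probabilists' Hermite H_3, an eigenfunction: L f = -3 f
g = x**2 - 1     # probabilists' Hermite H_2
print(gamma(f, g))                                           # equals f'(x) g'(x)
print(sp.expand(gamma(f, g) - sp.diff(f, x)*sp.diff(g, x)))  # 0
print(sp.expand(L(f) + 3*f))                                 # 0
```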
Next, we construct the domain $D(\mathcal{E})$ of the Dirichlet form $\mathcal{E}$ by completion of $A_0$, and then obtain, from this Dirichlet domain, the domain $D(L)$ of L. Recall the Dirichlet form $\mathcal{E}$:
$$\mathcal{E}(F,G) = E[\Gamma(F,G)] \qquad \text{for } (F,G) \in A_0 \times A_0. \tag{10}$$
If $A_0$ is endowed with the norm
$$\|F\|_{\mathcal{E}} = \left(\|F\|_2^2 + \mathcal{E}(F,F)\right)^{1/2},$$
the completion of $A_0$ with respect to this norm turns it into a Hilbert space embedded in $L^2(\mu)$. Once the Dirichlet domain $D(\mathcal{E})$ is constructed, the domain $D(L) \subset D(\mathcal{E})$ is defined as the set of all elements $F \in D(\mathcal{E})$ such that
$$|\mathcal{E}(F,G)| \le c_F \sqrt{E[G^2]}$$
for all $G \in D(\mathcal{E})$, where $c_F$ is a finite constant depending only on F. On these domains, a relation between L and Γ holds, namely the integration by parts formula
$$E[\Gamma(F,G)] = -E[F\,LG] = -E[G\,LF]. \tag{11}$$
By the integration by parts formula (11) and $\Gamma(F,F) \ge 0$, the operator $-L$ is nonnegative and symmetric, and therefore the spectrum of $-L$ is contained in $[0,\infty)$. We assume that $-L$ has a discrete spectrum $S = \{\lambda_k,\ k \ge 0\}$. Obviously, zero is always an eigenvalue, with constant eigenfunction satisfying $L(1) = 0$.
A Full Markov triple is a Standard Markov triple for which there is an extended algebra $A \supset A_0$, with no requirement of integrability for elements of $A$, satisfying the requirements given in Section 3.4.3 of [2]. In particular, the diffusion property holds: for any $C^\infty$ function $\Psi : \mathbb{R}^k \to \mathbb{R}$ and $F_1,\dots,F_k, G \in A$,
$$\Gamma\big(\Psi(F_1,\dots,F_k), G\big) = \sum_{i=1}^{k} \partial_i \Psi(F_1,\dots,F_k)\,\Gamma(F_i, G), \tag{12}$$
and
$$L\big(\Psi(F_1,\dots,F_k)\big) = \sum_{i=1}^{k} \partial_i \Psi(F_1,\dots,F_k)\,L F_i + \sum_{i,j=1}^{k} \partial_i \partial_j \Psi(F_1,\dots,F_k)\,\Gamma(F_i,F_j). \tag{13}$$
We also define the operator $L^{-1}$, called the pseudo-inverse of L, satisfying, for any $F \in D(L)$,
$$L L^{-1} F = L^{-1} L F = F - E[F]. \tag{14}$$
Obviously, this pseudo-inverse $L^{-1}$ is naturally constructed and defined on $D(L)$ by the self-adjointness of the operator L.

3. Main Results

We denote by $\Lambda \subset [0,\infty)$ the set of eigenvalues of $-L$. Then chaotic random variables are defined as follows:
Definition 1. 
An eigenfunction F with respect to an eigenvalue λ of the generator L is called chaotic if there exist $u > 1$ and $e \le 1$ such that $u\lambda, e\lambda \in \Lambda$, and
$$F^2 \in \bigoplus_{\substack{\kappa \in \Lambda\setminus\{0\} \\ e\lambda \le \kappa \le u\lambda}} \mathrm{Ker}(L + \kappa\, Id)\ \oplus\ \mathrm{Ker}(L). \tag{15}$$
The smallest u satisfying (15) is called the upper chaos grade of F, and the largest e satisfying (15) is called the lower chaos grade of F.
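Given the eigenvalue λ of F and the nonzero eigenvalues appearing in the expansion of $F^2$, the chaos grades are simply the extreme ratios κ/λ. A small helper (our own, hypothetical name), applied to the Hermite case of Section 4.1 where $H_q^2$ expands over the chaoses 2, 4, …, 2q:

```python
from fractions import Fraction

def chaos_grades(lam, eigenvalues_in_F2):
    """Upper/lower chaos grades of an eigenfunction with eigenvalue lam,
    given the nonzero eigenvalues appearing in the expansion of F^2."""
    u = max(Fraction(k, lam) for k in eigenvalues_in_F2)
    e = min(Fraction(k, lam) for k in eigenvalues_in_F2)
    return u, e

# Hermite H_q under the OU generator: H_q^2 expands over chaoses 2, 4, ..., 2q
q = 5
u, e = chaos_grades(q, [2*j for j in range(1, q + 1)])
print(u, e)   # 2 and 2/5
```

This reproduces the grades u = 2 and e = 2/q derived in Section 4.1.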
Now we improve the estimate given in Theorem 2 described in the introduction.
Theorem 3. 
Let ν be a Pearson distribution associated to the diffusion given by SDE (2). Let F be a chaotic eigenfunction of the generator L with eigenvalue λ, upper and lower chaos grades u and e, and moments up to order 4. Set $G = F + m$. Then we have
$$E\Big[\big(\Gamma(G,-L^{-1}G)-b(G)\big)^2\Big] \le 2\Big(1-b_2-\frac{u+e}{4}\Big)\frac{1-3b_2}{3}\,E[\tilde U(G)] + \Big(\frac{1-b_2}{2}-\frac{e}{4}\Big)\big(u-2(1-b_2)\big)\,E[Q^2(G)], \tag{16}$$
where Q is given by (6), and
$$\tilde U(x) = x^4 + \frac{3(1-b_2)}{1-3b_2}\big(Q^2(x) - x^4\big) - \frac{1}{4(1-3b_2)}\Big(\big(Q'(x)\big)^3(x-m) - 8x^4\Big). \tag{17}$$
Proof: From the proof of Theorem 3.9 in [3], we write
$$\Gamma(G,-L^{-1}G) - b(G) = \frac{1}{2\lambda}\big(L + 2(1-b_2)\lambda\, Id\big)Q(G), \tag{18}$$
where Q ( x ) is a quadratic polynomial given by (6). By the assumption,
$$Q(G) = \sum_{\substack{\kappa\in\Lambda \\ \kappa \le u\lambda}} J_\kappa\big(Q(G)\big). \tag{19}$$
Direct computations yield, together with (18), that
$$
\begin{aligned}
E\big[(\Gamma(G,-L^{-1}G)-b(G))^2\big]
&= \frac{1}{4\lambda^2}\,E\Big[\big(LQ(G)+2(1-b_2)\lambda\,Q(G)\big)\,\big(L+2(1-b_2)\lambda\,Id\big)Q(G)\Big]\\
&= \frac{1}{4\lambda^2}\Big\{E\Big[LQ(G)\,\big(L+2(1-b_2)\lambda\,Id\big)Q(G)\Big]
+2(1-b_2)\lambda\,E\Big[Q(G)\,\big(L+2(1-b_2)\lambda\,Id\big)Q(G)\Big]\Big\}\\
&= \frac{1}{4\lambda^2}\Big\{\sum_{\kappa\in\Lambda,\,\kappa\le u\lambda}(-\kappa)\big(2(1-b_2)\lambda-\kappa\big)E\big[J_\kappa(Q(G))^2\big]
+2(1-b_2)\lambda\sum_{\kappa\in\Lambda,\,\kappa\le u\lambda}\big(2(1-b_2)\lambda-\kappa\big)E\big[J_\kappa(Q(G))^2\big]\Big\}\\
&= \frac{1}{4\lambda^2}\Big\{\sum_{\kappa\in\Lambda,\,\kappa\le u\lambda}(-\kappa)(u\lambda-\kappa)E\big[J_\kappa(Q(G))^2\big]
+\big(2(1-b_2)-u\big)\lambda\sum_{\kappa\in\Lambda,\,\kappa\le u\lambda}(-\kappa)E\big[J_\kappa(Q(G))^2\big]\\
&\qquad+2(1-b_2)\lambda\sum_{\kappa\in\Lambda,\,\kappa\le u\lambda}(u\lambda-\kappa)E\big[J_\kappa(Q(G))^2\big]
+2(1-b_2)\big(2(1-b_2)-u\big)\lambda^2\sum_{\kappa\in\Lambda,\,\kappa\le u\lambda}E\big[J_\kappa(Q(G))^2\big]\Big\}\\
&= \frac{1}{4\lambda^2}\Big\{\sum_{\kappa\in\Lambda,\,\kappa\le u\lambda}(-\kappa)(u\lambda-\kappa)E\big[J_\kappa(Q(G))^2\big]
+\big(4(1-b_2)-u\big)\lambda\sum_{\kappa\in\Lambda,\,\kappa\le u\lambda}(u\lambda-\kappa)E\big[J_\kappa(Q(G))^2\big]\\
&\qquad-\big(2(1-b_2)-u\big)u\lambda^2\sum_{\kappa\in\Lambda,\,\kappa\le u\lambda}E\big[J_\kappa(Q(G))^2\big]
+2(1-b_2)\big(2(1-b_2)-u\big)\lambda^2\sum_{\kappa\in\Lambda,\,\kappa\le u\lambda}E\big[J_\kappa(Q(G))^2\big]\Big\}.
\end{aligned}
\tag{20}
$$
Since $\kappa \ge e\lambda$ for all $\kappa \in \Lambda\setminus\{0\}$, we have that
$$\sum_{\substack{\kappa\in\Lambda\\ \kappa\le u\lambda}}(-\kappa)(u\lambda-\kappa)E\big[J_\kappa(Q(G))^2\big]
= \sum_{\substack{\kappa\in\Lambda\setminus\{0\}\\ \kappa\le u\lambda}}(-\kappa)(u\lambda-\kappa)E\big[J_\kappa(Q(G))^2\big]
\le -e\lambda \sum_{\substack{\kappa\in\Lambda\\ \kappa\le u\lambda}}(u\lambda-\kappa)E\big[J_\kappa(Q(G))^2\big]. \tag{21}$$
Using (21) yields that
$$
\begin{aligned}
E\big[(\Gamma(G,-L^{-1}G)-b(G))^2\big]
&\le \frac{1}{4\lambda^2}\Big\{4\lambda\Big(1-b_2-\frac{u+e}{4}\Big)\sum_{\kappa\in\Lambda,\,\kappa\le u\lambda}(u\lambda-\kappa)E\big[J_\kappa(Q(G))^2\big]\\
&\qquad-\big(2(1-b_2)-u\big)u\lambda^2\sum_{\kappa\in\Lambda,\,\kappa\le u\lambda}E\big[J_\kappa(Q(G))^2\big]
+2(1-b_2)\big(2(1-b_2)-u\big)\lambda^2\sum_{\kappa\in\Lambda,\,\kappa\le u\lambda}E\big[J_\kappa(Q(G))^2\big]\Big\}\\
&= \frac{1}{4\lambda^2}\Big\{4\lambda\Big(1-b_2-\frac{u+e}{4}\Big)E\big[Q(G)\big(L+u\lambda\,Id\big)Q(G)\big]
+\big(u-2(1-b_2)\big)u\lambda^2\,E\big[Q^2(G)\big]\\
&\qquad-2(1-b_2)\big(u-2(1-b_2)\big)\lambda^2\,E\big[Q^2(G)\big]\Big\}\\
&= \frac{1}{4\lambda^2}\Big\{4\lambda\Big(1-b_2-\frac{u+e}{4}\Big)E\big[Q(G)\big(L+2(1-b_2)\lambda\,Id\big)Q(G)\big]
+4\Big(\frac{1-b_2}{2}-\frac{e}{4}\Big)\big(u-2(1-b_2)\big)\lambda^2\,E\big[Q^2(G)\big]\Big\}\\
&= 2\Big(1-b_2-\frac{u+e}{4}\Big)\frac{1-3b_2}{3}\,E\big[\tilde U(G)\big]
+\Big(\frac{1-b_2}{2}-\frac{e}{4}\Big)\big(u-2(1-b_2)\big)E\big[Q^2(G)\big].
\end{aligned}
\tag{22}
$$
Here, for the last equality in (22), we use the following equality, obtained from the proof of Theorem 3.9 in [3]:
$$E\big[Q(G)(L+u\lambda\, Id)Q(G)\big] = E\big[Q(G)\big(L+2(1-b_2)\lambda\, Id\big)Q(G)\big] + \big(u-2(1-b_2)\big)\lambda\, E[Q^2(G)] = 2\lambda\, E[U(G)] + \big(u-2(1-b_2)\big)\lambda\, E[Q^2(G)]. \qquad\square$$

4. Application to Three Polynomials

In this section, three examples will be given in order to illustrate the estimate (16) with explicit expressions. For this, we consider the case where a random variable F in Theorem 3 comes from eigenfunctions of a generator associated to a Pearson distribution. For simplicity, we only consider the one-dimensional case; analogous results in the finite- or infinite-dimensional case can be obtained in a similar way.

4.1. Ornstein-Uhlenbeck Generator

We consider the one-dimensional Ornstein-Uhlenbeck generator L, defined for any test function f by
$$Lf(x) = f''(x) - x f'(x) \qquad \text{for } x\in\mathbb{R},$$
acting on $L^2(\mathbb{R}, \mu)$, where
$$\mu(dx) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}\, dx.$$
Let us set $F = H_q(x)$, where $H_q$ denotes the Hermite polynomial of order q. Then we have $F \in \mathrm{Ker}(L + q\, Id)$.
Corollary 1. 
Let ν be a Gaussian distribution associated with the diffusion given by (2) with mean m and $b_0 = \sigma^2$. If $F = H_q(x)$, $q \ge 2$, and $G = F + m$, then we have
$$E\Big[\big(\Gamma(G,-L^{-1}G)-\sigma^2\big)^2\Big] \le \frac{q-1}{3q}\Big(E[F^4] - 6\sigma^2 E[F^2] + 3\sigma^4\Big). \tag{23}$$
Proof. 
By the well-known product formula, the square of F can be expressed as a linear combination of Hermite polynomials up to order 2q:
$$H_q^2(x) = \sum_{r=0}^{q} r!\,\binom{q}{r}^2 H_{2(q-r)}(x). \tag{24}$$
This product formula (24) gives that the upper chaos grade and the lower chaos grade of $H_q$ are $u = 2$ and $e = 2/q$ for $q \ge 2$. Hence Theorem 3 yields that
$$E\Big[\big(\Gamma(G,-L^{-1}G)-\sigma^2\big)^2\Big] \le \frac{2}{3}\Big(1-\frac{1}{2}-\frac{1}{2q}\Big)\, E[\tilde U(G)] = \frac{q-1}{3q}\, E[\tilde U(G)]. \tag{25}$$
When $b_2 = b_1 = 0$ and $b_0 = \sigma^2$, a direct computation yields that
$$U(x) = \frac{1}{3}\Big((x-m)^4 - 6\sigma^2(x-m)^2 + 3\sigma^4\Big),$$
so that, since $\tilde U = 3U$ when $b_2 = 0$,
$$E[\tilde U(G)] = E[F^4] - 6\sigma^2 E[F^2] + 3\sigma^4. \tag{26}$$
From (25) and (26), the proof of the result (23) is completed. □
Remark 1. 
When L is the infinite-dimensional Ornstein-Uhlenbeck generator, we have $L I_q(f) = -q I_q(f)$, $q = 0,1,\dots$. Hence the spectrum of L consists of zero and the negative integers, with the eigenfunctions represented by multiple stochastic integrals. The product formula for multiple stochastic integrals gives
$$I_q(f)^2 = \sum_{r=0}^{q} r!\,\binom{q}{r}^2 I_{2q-2r}(f \otimes_r f). \tag{27}$$
This formula shows that the upper chaos grade and lower chaos grade of $I_q(f)$ are still given by $u = 2$ and $e = 2/q$, as in the one-dimensional case. The upper bound in (1) can be obtained from Theorem 3. □
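The product formula (24) can be verified numerically with numpy's probabilists' Hermite basis (a check of ours, not part of the original proof): expanding $H_q^2$ in the HermiteE basis must reproduce the coefficients $r!\binom{q}{r}^2$ on $H_{2(q-r)}$.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import comb, factorial

q = 4
# coefficient vector of H_q in the probabilists' Hermite (HermiteE) basis
cq = np.zeros(q + 1)
cq[q] = 1.0

square = He.hermemul(cq, cq)          # H_q^2 expanded in the same basis

# product formula: H_q^2 = sum_r r! * C(q,r)^2 * H_{2(q-r)}
expected = np.zeros(2*q + 1)
for r in range(q + 1):
    expected[2*(q - r)] = factorial(r) * comb(q, r)**2

print(np.allclose(square, expected))  # True
```

Only the even-order chaoses 0, 2, …, 2q appear, which is exactly why u = 2 and e = 2/q.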

4.2. Jacobi Generator

We consider the one-dimensional Jacobi generator $L_{\alpha,\beta}$ defined on $L^2([0,1], \mu_{\alpha,\beta})$ by
$$L_{\alpha,\beta} f(x) = \big(\alpha - (\alpha+\beta)x\big) f'(x) + x(1-x) f''(x), \tag{28}$$
where
$$\mu_{\alpha,\beta}(dx) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, x^{\alpha-1}(1-x)^{\beta-1}\,\mathbf{1}_{[0,1]}(x)\, dx.$$
The spectrum Λ of $-L_{\alpha,\beta}$ is of the form
$$\Lambda = \big\{ n(n+\alpha+\beta-1) \ :\ n \in \mathbb{N}_0 \big\}. \tag{29}$$
Set $\lambda_n = n(n+\alpha+\beta-1)$, $n = 0,1,\dots$. Then we have
$$L^2([0,1],\mu_{\alpha,\beta}) = \bigoplus_{n=0}^{\infty} \mathrm{Ker}(L_{\alpha,\beta} + \lambda_n\, Id),$$
and the kernels are given by
$$\mathrm{Ker}(L_{\alpha,\beta} + \lambda_n\, Id) = \big\{ a\, P_n^{(\alpha-1,\beta-1)}(1-2x) \ :\ a \in \mathbb{R} \big\},$$
where $P_n^{(\alpha,\beta)}(x)$ denotes the nth Jacobi polynomial
$$P_n^{(\alpha,\beta)}(x) = \frac{(-1)^n}{2^n n!}\,(1-x)^{-\alpha}(1+x)^{-\beta}\,\frac{d^n}{dx^n}\Big((1-x)^{\alpha+n}(1+x)^{\beta+n}\Big).$$
Recall that $_pF_q$ denotes the generalized hypergeometric function with p numerator and q denominator parameters, given by
$$\,_pF_q\!\left(\begin{matrix} (a_p) \\ (b_q) \end{matrix}\ \Big|\ x\right) = \sum_{k=0}^{\infty} \frac{(a_1)_k (a_2)_k \cdots (a_p)_k}{(b_1)_k (b_2)_k \cdots (b_q)_k}\, \frac{x^k}{k!},$$
where the notation $(a_p)$ denotes the array of p parameters $a_1,\dots,a_p$ and
$$(\alpha)_n = \frac{\Gamma(\alpha+n)}{\Gamma(\alpha)}.$$
Then the Jacobi polynomials are given by
$$P_n^{(\alpha,\beta)}(x) = \frac{(\alpha+1)_n}{n!}\;{}_2F_1\!\left(\begin{matrix} -n,\ \alpha+\beta+n+1 \\ \alpha+1 \end{matrix}\ \Big|\ \frac{1-x}{2}\right). \tag{30}$$
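The hypergeometric representation above can be cross-checked against scipy's Jacobi polynomials (an independent numerical check of ours; `jacobi_via_2f1` is a hypothetical helper name):

```python
import numpy as np
from scipy.special import eval_jacobi, hyp2f1, poch
from math import factorial

def jacobi_via_2f1(n, a, b, x):
    # P_n^{(a,b)}(x) = (a+1)_n / n! * 2F1(-n, a+b+n+1; a+1; (1-x)/2)
    return poch(a + 1, n) / factorial(n) * hyp2f1(-n, a + b + n + 1, a + 1, (1 - x) / 2)

xs = np.linspace(-0.9, 0.9, 7)
for n, a, b in [(2, 0.5, 1.5), (4, 2.0, 3.0)]:
    print(np.allclose(jacobi_via_2f1(n, a, b, xs), eval_jacobi(n, a, b, xs)))  # True
```

The series terminates because of the negative integer parameter −n, so both sides are the same degree-n polynomial.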

4.2.1. Beta Approximation

In this section, we consider the case when the target distribution ν is a Beta distribution.
Corollary 2. 
Let ν be the Beta distribution associated to the diffusion given by (2) with mean m and coefficients
$$m = \frac{\alpha}{\alpha+\beta},\qquad b_2 = -\frac{1}{\alpha+\beta},\qquad b_1 = \frac{1}{\alpha+\beta}\qquad\text{and}\qquad b_0 = 0. \tag{31}$$
Let $F = P_n^{(\alpha-1,\beta-1)}(1-2x)$, $n \ge 2$, and set
$$G = F + \frac{\alpha}{\alpha+\beta} \qquad \text{for } \alpha,\beta > 0.$$
Then we have
$$
\begin{aligned}
E\Big[\Big(\Gamma(G,-L^{-1}G)-\frac{1}{\alpha+\beta}\,G(1-G)\Big)^2\Big]
&\le \frac{2}{3}\left(1+\frac{1}{\alpha+\beta}-\frac{2n(2n+\alpha+\beta-1)+\alpha+\beta}{4n(n+\alpha+\beta-1)}\right)\frac{\alpha+\beta+3}{\alpha+\beta}\, E\big[\tilde U(G)\big]\\
&\quad+\left(\frac{\alpha+\beta+1}{2(\alpha+\beta)}-\frac{\alpha+\beta}{4n(n+\alpha+\beta-1)}\right)\left(\frac{2(2n+\alpha+\beta-1)}{n+\alpha+\beta-1}-\frac{2(\alpha+\beta+1)}{\alpha+\beta}\right)E\big[Q^2(G)\big],
\end{aligned}
\tag{32}
$$
where the constants $b_2, b_1, b_0$ and m in $\tilde U(x)$ and $Q(x)$ are given by (31).
Proof. 
The square of a Jacobi polynomial $P_n^{(\alpha,\beta)}(x)$ can be expressed as a linear combination of Jacobi polynomials up to order 2n as follows:
$$\Big(P_n^{(\alpha-1,\beta-1)}(1-2x)\Big)^2 = \sum_{k=0}^{2n} c_{n,k}\, P_k^{(\alpha-1,\beta-1)}(1-2x), \tag{33}$$
where the linearization coefficients $c_{n,k}$ are explicitly given in the paper [5]. This product formula (33) shows that the upper chaos grade u and the lower chaos grade e of $P_n^{(\alpha-1,\beta-1)}(1-2x)$ are given by
$$u = \frac{\lambda_{2n}}{\lambda_n} = \frac{2(2n+\alpha+\beta-1)}{n+\alpha+\beta-1}, \tag{34}$$
$$e = \frac{\lambda_1}{\lambda_n} = \frac{\alpha+\beta}{n(n+\alpha+\beta-1)}. \tag{35}$$
Hence, from (34) and (35) together with $b_2 = -\frac{1}{\alpha+\beta}$, the upper bound (32) follows. □

4.2.2. Normal Approximation

In this section, we consider the case when the target distribution ν is a standard Gaussian measure. Then the diffusion coefficients are given as $b_2 = b_1 = 0$ and $b_0 = 1$. For simplicity, we deal with the second Jacobi polynomial $P_2^{(\alpha-1,\alpha-1)}(1-2x)$ for $\alpha > 0$, defined on $L^2([0,1],\mu_{\alpha,\alpha})$, i.e., the case n = 2 and α = β in (30). Let us set
$$F = \frac{P_2^{(\alpha-1,\alpha-1)}(1-2x)}{\big\| P_2^{(\alpha-1,\alpha-1)}(1-2\,\cdot) \big\|_{L^2([0,1],\mu_{\alpha,\alpha})}}. \tag{36}$$
Then it is obvious that $E[F] = 0$ and $E[F^2] = 1$. From (34) and (35), it follows that
$$u = \frac{\lambda_4}{\lambda_2} = \frac{2(2\alpha+3)}{2\alpha+1}, \tag{37}$$
$$e = \frac{\lambda_1}{\lambda_2} = \frac{\alpha}{2\alpha+1}. \tag{38}$$
This implies that the upper chaos grade satisfies $u > 2$ and the lower chaos grade satisfies $e < 1$. By Theorem 3, the bound is given as follows:
$$E\Big[\big(\Gamma(F,-L^{-1}F)-1\big)^2\Big] \le \frac{2}{3}\Big(1-\frac{u+e}{4}\Big)\big(E[F^4]-3\big) + \frac{(2-e)(u-2)}{4}\,\mathrm{Var}(F^2). \tag{39}$$
Even when the fourth cumulant of F in the first term of (39) is 0, we may not be able to guarantee that F has a standard Gaussian distribution, because of the second term in (39). This shows that the fourth moment theorem of Theorem 1 may not hold.
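The chaos grades $u = \lambda_4/\lambda_2$ and $e = \lambda_1/\lambda_2$ above are easy to evaluate exactly. The sketch below (our own illustration, with a hypothetical helper name) confirms u > 2 and e < 1 for several values of α, which is why the second term of the bound above need not vanish:

```python
from fractions import Fraction

def grades_second_jacobi(alpha):
    # λ_n = n(n + 2α - 1): Jacobi eigenvalues for α = β; F = P_2^{(α-1,α-1)}(1-2x)
    lam = lambda n: n * (n + 2*alpha - 1)
    u = lam(4) / lam(2)   # upper chaos grade, equals 2(2α+3)/(2α+1) > 2
    e = lam(1) / lam(2)   # lower chaos grade, equals α/(2α+1) < 1
    return u, e

for alpha in (Fraction(1, 2), Fraction(3, 2), Fraction(10)):
    u, e = grades_second_jacobi(alpha)
    print(alpha, u, e, u > 2, e < 1)
```

For example α = 3/2 gives u = 3 and e = 3/8, so the coefficient (2−e)(u−2)/4 of Var(F²) in the bound is strictly positive.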
To overcome this problem, a new technique has been developed in [7], showing that the fourth moment theorem (Theorem 4 below) holds even though the chaos grade is greater than two. Let F be a chaotic eigenfunction of L with respect to λ, with $E[F] = 0$ and $E[F^2] = 1$. We define a linear function $\phi(x) = mx + b$, where
$$m = \frac{1}{4\lambda}\sum_{e\lambda\le\kappa\le u\lambda}(2\lambda-\kappa)\,E\big[J_\kappa(F^2)^2\big],
\qquad
b = -\frac{1}{4\lambda^2}\sum_{e\lambda\le\kappa\le u\lambda}\kappa\,(2\lambda-\kappa)\,E\big[J_\kappa(F^2)^2\big]. \tag{40}$$
Here $J_\kappa(F^2)$ denotes the projection of $F^2$ onto $\mathrm{Ker}(L + \kappa\, Id)$.
Theorem 4. 
If $m \ne 0$, then we have
$$E\big[(\Gamma(F,-L^{-1}F)-1)^2\big] = \frac{2-c_{m,b}}{6}\left(E[F^4]-3\right), \tag{43}$$
where $c_{m,b}$ is a constant such that $\phi(c_{m,b}) = 0$.
Proof. 
Using the argument in the proof of Theorem 3 in [7] shows that, for any $x \in \mathbb{R}$,
$$E\big[(\Gamma(F,-L^{-1}F)-1)^2\big] = 2m - xm + \phi(x). \tag{41}$$
Since $m \ne 0$, there exists a constant $c_{m,b}$, depending on m and b, such that $\phi(c_{m,b}) = 0$. Also, the proof of Theorem 3 in [7] shows that
$$m = \frac{1}{6}\left(E[F^4] - 3E[F^2]^2\right). \tag{42}$$
Plugging $c_{m,b}$ into x in (41) yields, together with (42), that (43) holds. □
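The identity $m = \frac{1}{6}(E[F^4] - 3E[F^2]^2)$ from the proof above can be checked directly on the simplest nontrivial example, $F = H_2/\sqrt{2}$ under the Ornstein-Uhlenbeck generator of Section 4.1, where the projections $J_\kappa(F^2)$ are read off from the Hermite product formula. A numerical sketch of ours, assuming Python with numpy:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

# F = H_2/sqrt(2): second Ornstein-Uhlenbeck chaos, λ = 2, E[F^2] = 1
lam = 2
c = np.zeros(3)
c[2] = 1/np.sqrt(2)
sq = He.hermemul(c, c)   # expansion of F^2 over the Hermite basis H_0..H_4

# orthogonality E[H_j H_k] = k! δ_{jk}, hence E[J_k(F^2)^2] = sq[k]^2 * k!
EJ = [sq[k]**2 * factorial(k) for k in range(len(sq))]

m = sum((2*lam - k)*EJ[k] for k in range(1, len(EJ))) / (4*lam)
EF4 = sum(EJ)                 # E[F^4] = E[(F^2)^2] = 15 here
print(m, (EF4 - 3)/6)         # both equal 2.0
```

Both the spectral expression for m and the cumulant formula give the same value, m = κ₄(F)/6 = 2.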
We will use Theorem 4 to find under which conditions the fourth moment theorem holds for F given by (36), by removing the second term in (39). Define the linear function $\phi(x) = mx + b$, whose slope m and intercept b are
m = 16 α 2 ( α + 1 ) 4 ( α + 2 ) 2 { ( α + 1 ) ( 2 α 1 ) 2
6 × ( 24 ) 2 ( α + 2 ) 5 ( α + 3 ) 7 ( 2 α + 1 ) ( 2 α + 3 ) ( 2 α + 5 ) } . b = 16 α 2 ( α + 1 ) 4 ( α + 2 ) 2 { 12 × ( 24 ) 2 ( α + 2 ) 5 ( α + 3 ) 7 ( 2 α + 3 ) 2 ( 2 α + 5 )
( α + 1 ) ( 2 α 1 ) 2 ( 2 α + 1 ) } .
Theorem 5. 
Let F be the chaotic random variable given by (36). If $\alpha > 0$, one has that
$$E\big[(\Gamma(F,-L^{-1}F)-1)^2\big] = \frac{c_\alpha}{6}\left(E[F^4]-3\right). \tag{44}$$
Here $c_\alpha$ is the constant given by
c α = 4 ϑ ( α ) ( 2 α 1 ) 3 ( α + 1 ) ϑ ( α ) ( 2 α + 1 ) ( 2 α 1 ) 2 ( α + 1 ) ,
where
ϑ ( α ) = 6 × ( 24 ) 2 ( α + 2 ) 5 ( α + 3 ) 7 ( 2 α + 3 ) ( 2 α + 5 ) .
Proof. 
When n = 2 in (33), the linearization coefficients c n , k are given by
c 2 , 0 = 2 ( 2 α + 1 ) 2 [ ( α ) 2 ] 2 ,
c 2 , 2 = 8 [ ( α + 1 ) 1 ] 3 ( α ) 3 ( 2 α 1 ) 2 ( 2 α + 1 ) 1 ,
c 2 , 4 = 24 ( α + 2 ) 2 ( α ) 2 ( α ) 4 [ ( 2 α + 1 ) 2 ] 2 ,
and $c_{n,k} = 0$ if k is odd, where $(x)_n = x(x+1)\cdots(x+n-1)$ denotes the Pochhammer symbol. Note that the general form $c_{n,0}$ of (47) is also given by
c n , 0 = n ! ( n + 2 α 1 ) n [ ( α ) n ] 2 .
Since $P_0^{(\alpha-1,\alpha-1)}(1-2x) = 1$, we have, from (33), that
$$\int_0^1 \Big(P_2^{(\alpha-1,\alpha-1)}(1-2x)\Big)^2\, \nu(dx) = c_{2,0}. \tag{51}$$
By orthogonality, we have that
$$\int_0^1 \Big(P_4^{(\alpha-1,\alpha-1)}(1-2x)\Big)^2\, \nu(dx) = \sum_{k=0}^{8} c_{4,k}\int_0^1 P_k^{(\alpha-1,\alpha-1)}(1-2x)\, \nu(dx) = c_{4,0}. \tag{52}$$
Since $c_{n,k} = 0$ for $k = 1, 3$ and
$$F = \frac{P_2^{(\alpha-1,\alpha-1)}(1-2x)}{\sqrt{c_{2,0}}},$$
the intercept b of the linear function ϕ can be written, using (51) and (52), as
$$
\begin{aligned}
4\lambda_2^2\, b &= -\lambda_2(2\lambda_2-\lambda_2)\,E\big[J_2(F^2)^2\big] - \lambda_4(2\lambda_2-\lambda_4)\,E\big[J_4(F^2)^2\big]\\
&= -\lambda_2(2\lambda_2-\lambda_2)\,\frac{c_{2,2}^2}{c_{2,0}^2}\int_0^1 \Big(P_2^{(\alpha-1,\alpha-1)}(1-2x)\Big)^2\,\nu(dx) - \lambda_4(2\lambda_2-\lambda_4)\,\frac{c_{2,4}^2}{c_{2,0}^2}\int_0^1 \Big(P_4^{(\alpha-1,\alpha-1)}(1-2x)\Big)^2\,\nu(dx)\\
&= -\lambda_2(2\lambda_2-\lambda_2)\,\frac{c_{2,2}^2}{c_{2,0}} - \lambda_4(2\lambda_2-\lambda_4)\,\frac{c_{2,4}^2}{c_{2,0}^2}\,c_{4,0}.
\end{aligned}
\tag{53}
$$
Using (47)–(50), the right-hand side of (53) can be computed as
4 λ 2 2 b = 64 λ 2 2 α 2 ( α + 1 ) 5 ( α + 2 ) 2 ( 2 α 1 ) 2 ( 2 α + 1 ) + 128 × ( 24 ) 3 α 2 ( α + 1 ) 4 ( α + 2 ) 7 ( α + 3 ) 7 ( 2 α + 1 ) 2 × ( 2 α + 3 ) 2 ( 2 α + 5 ) .
Hence we have
b = 16 α 2 ( α + 1 ) 5 ( α + 2 ) 2 ( 2 α 1 ) 2 ( 2 α + 1 ) + 8 × ( 24 ) 3 α 2 ( α + 1 ) 4 ( α + 2 ) 7 ( α + 3 ) 7 ( 2 α + 3 ) 2 ( 2 α + 5 ) = 16 α 2 ( α + 1 ) 4 ( α + 2 ) 2 { 12 × ( 24 ) 2 ( α + 2 ) 5 ( α + 3 ) 7 ( 2 α + 3 ) 2 ( 2 α + 5 ) ( α + 1 ) ( 2 α 1 ) 2 ( 2 α + 1 ) } .
From (56), we see that b > 0 for α > 0 . Using the same arguments as for the case of b shows that
4 λ 2 m = λ 2 c 2 , 2 2 c 2 , 0 + ( 2 λ 2 λ 4 ) c 2 , 4 c 2 , 0 2 c 4 , 0 = 64 λ 2 α 2 ( α + 1 ) 5 ( α + 2 ) 2 ( 2 α 1 ) 2 ( 2 α + 1 ) 32 × ( 24 ) 3 α 2 ( α + 1 ) 4 ( α + 2 ) 7 ( α + 3 ) 7 ( 2 α + 1 ) 2 × ( 2 α + 3 ) ( 2 α + 5 ) .
So
m = 16 α 2 ( α + 1 ) 5 ( α + 2 ) 2 ( 2 α 1 ) 2 4 × ( 24 ) 3 α 2 ( α + 1 ) 4 ( α + 2 ) 7 ( α + 3 ) 7 ( 2 α + 1 ) × ( 2 α + 3 ) ( 2 α + 5 ) = 16 α 2 ( α + 1 ) 4 ( α + 2 ) 2 { ( α + 1 ) ( 2 α 1 ) 2 6 × ( 24 ) 2 ( α + 2 ) 5 ( α + 3 ) 7 ( 2 α + 1 ) ( 2 α + 3 ) ( 2 α + 5 ) } .
Obviously, the right-hand side of (57) shows that m < 0 for α > 0 . Now, we will find a point x such that ϕ ( x ) = 0 . From (55) and (57), the solution of ϕ ( x ) = 0 is given by
x = b m = 2 ϑ ( α ) ( 2 α + 3 ) ( 2 α 1 ) 2 ( α + 1 ) ( 2 α + 1 ) ϑ ( α ) ( 2 α + 1 ) ( 2 α 1 ) 2 ( α + 1 ) .
Hence, it follows from (58) that
c α = 2 + b m = 4 ϑ ( α ) ( 2 α 1 ) 3 ( α + 1 ) ϑ ( α ) ( 2 α + 1 ) ( 2 α 1 ) 2 ( α + 1 ) .
Remark 2. 
In Theorem 5, we assume that α = β (ultraspherical case). This assumption ensures that the factor $_9F_8$, the generalized hypergeometric function with 9 numerator and 8 denominator parameters given in the paper [19], vanishes, so that Rahman’s formula is considerably simplified. This assumption allows us to find the point x satisfying $\phi(x) = 0$ quickly and explicitly. □

4.3. Romanovski-Routh Generator

We consider the one-dimensional generator $L_{p,q}$, acting on $L^2(\mathbb{R}, \mu_{p,q})$, where
$$\mu_{p,q}(dx) = \frac{\Gamma\big(p+\frac{iq}{2}\big)\,\Gamma\big(p-\frac{iq}{2}\big)}{\pi\, 2^{2(1-p)}\,\Gamma(2p-1)}\,(x^2+1)^{-p}\exp\big(-q\arctan(x)\big)\, dx,$$
defined by
$$L_{p,q} f(x) = \frac{x^2+1}{2(p-1)}\, f''(x) - \Big(x + \frac{q}{2(p-1)}\Big) f'(x).$$
The spectrum Λ of $-L_{p,q}$ is of the form
$$\Lambda = \Big\{ n\Big(1 + \frac{n-1}{2(1-p)}\Big) \ :\ n \in \mathbb{N}_0 \Big\}.$$
Corollary 3. 
Let ν be the skew t-distribution with mean $-\frac{q}{2(p-1)}$ and diffusion coefficients given by
$$b_2 = \frac{1}{2(p-1)} \qquad\text{and}\qquad b_0 = \frac{1}{2(p-1)}.$$
Let $F = R_n^{(p,q)}(x)$, $p, q \in \mathbb{R}$, $n \ge 2$, and set
$$G = F - \frac{q}{2(p-1)} \qquad \text{for } p, q \in \mathbb{R}.$$
If $p > \frac{2n+1}{2}$, then we have
$$
\begin{aligned}
E\Big[\Big(\Gamma(G,-L^{-1}G)-\frac{G^2+1}{2(p-1)}\Big)^2\Big]
&\le 2\left(\frac{2p-3}{2(p-1)}-\frac{4n-4p+3}{4(n-2p+1)}\right)\frac{2p-5}{6(p-1)}\, E\big[\tilde U(G)\big]\\
&\quad+\left(\frac{2p-3}{4(p-1)}-\frac{1}{4(n-2p+1)}\right)\left(\frac{2(2n-2p+1)}{n-2p+1}-\frac{2p-3}{p-1}\right)E\big[Q^2(G)\big],
\end{aligned}
$$
where the constants $b_2, b_1, b_0$ and m in $\tilde U(x)$ and $Q(x)$ are given by
$$b_2 = \frac{1}{2(p-1)},\qquad b_1 = 0,\qquad b_0 = \frac{1}{2(p-1)}\qquad\text{and}\qquad m = -\frac{q}{2(p-1)}.$$
Proof. 
First note that the Romanovski-Routh polynomials $R_n^{(p,q)}(x)$ can be represented by complexified Jacobi polynomials:
$$R_n^{(p,q)}(x) = \frac{n!}{(2i)^n}\, P_n^{\left(-p+\frac{qi}{2},\ -p-\frac{qi}{2}\right)}(ix), \tag{63}$$
where $P_n^{(a,b)}(x)$ are the well-known Jacobi polynomials and $i = \sqrt{-1}$. By using (33) and (63), the square of a Romanovski-Routh polynomial $R_n^{(p,q)}(x)$ can be expressed as a linear combination of Romanovski-Routh polynomials up to order 2n as follows:
$$\Big(R_n^{(p,q)}(x)\Big)^2 = \frac{(n!)^2}{(2i)^{2n}}\Big(P_n^{\left(-p+\frac{qi}{2},\ -p-\frac{qi}{2}\right)}(ix)\Big)^2 = \frac{(n!)^2}{(-4)^n}\sum_{k=0}^{2n} c_{n,k}\, P_k^{\left(-p+\frac{qi}{2},\ -p-\frac{qi}{2}\right)}(ix) = \frac{(n!)^2}{(-4)^n}\sum_{k=0}^{2n} c_{n,k}\,\frac{(2i)^k}{k!}\, R_k^{(p,q)}(x), \tag{64}$$
where the linearization coefficients $c_{n,k}$ are explicitly given in the paper [5]. By Proposition 4.2 in [3], the random variable F is chaotic. This product formula (64) shows that the upper chaos grade u and the lower chaos grade e of $R_n^{(p,q)}$ are
$$u = \frac{\lambda_{2n}}{\lambda_n} = \frac{2n\Big(1+\frac{2n-1}{2(1-p)}\Big)}{n\Big(1+\frac{n-1}{2(1-p)}\Big)} = \frac{2(2n-2p+1)}{n-2p+1}, \tag{65}$$
$$e = \frac{\lambda_1}{\lambda_n} = \frac{1}{n\Big(1+\frac{n-1}{2(1-p)}\Big)} = \frac{2(1-p)}{n(n-2p+1)}. \tag{66}$$
Hence it follows, from (65) and (66) together with $b_2 = \frac{1}{2(p-1)}$, that
$$
\begin{aligned}
E\Big[\Big(\Gamma(G,-L^{-1}G)-\frac{G^2+1}{2(p-1)}\Big)^2\Big]
&\le 2\left(\frac{2p-3}{2(p-1)}-\frac{4n-4p+3}{4(n-2p+1)}\right)\frac{2p-5}{6(p-1)}\, E\big[\tilde U(G)\big]\\
&\quad+\left(\frac{2p-3}{4(p-1)}-\frac{1}{4(n-2p+1)}\right)\left(\frac{2(2n-2p+1)}{n-2p+1}-\frac{2p-3}{p-1}\right)E\big[Q^2(G)\big],
\end{aligned}
$$
where the constants b 2 , b 1 , b 0 and m in U ˜ ( x ) and Q ( x ) are given by
$$b_2 = \frac{1}{2(p-1)},\qquad b_1 = 0,\qquad b_0 = \frac{1}{2(p-1)}\qquad\text{and}\qquad m = -\frac{q}{2(p-1)}. \qquad\square$$

5. Conclusions and Future Works

The motivation of this study is that the bound in (1) provides a better estimate for the four moments theorem in comparison with the bound (8) in the case $F = I_q(f)$. We needed to develop a new method for obtaining an improved bound relative to the one given in [3]. For this, we find the largest nonzero element of the set of eigenvalues attached to the eigenfunctions appearing when the square of a random variable F, coming from a Markov triple structure, is expressed as a sum of eigenfunctions.
Future works will be carried out in two directions: (1) We will develop a new technique showing that a fourth moment theorem like Theorem 4 holds even when the target distribution is not Gaussian. (2) We will study how the second term of the bound (16) in Theorem 3 can be removed even though the chaos grade is greater than two.

Funding

This research was supported by Hallym University Research Fund (HRF-202302-007).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Azmoodeh, E.; Campese, S.; Poly, G. Fourth moment theorems for Markov diffusion generators. J. Funct. Anal. 2014, 266, 2341–2359. [Google Scholar] [CrossRef]
  2. Bakry, D.; Gentil, I.; Ledoux, M. Analysis and Geometry of Markov Diffusion Operators. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Science ]; 348. Springer: Cham. MR3155209, 2014. [Google Scholar]
  3. Bourguin, S.; Campese, S.; Leonenko, N.; Taqqu, M.S. Four moment theorems on Markov chaos. Ann. Probab. 2019, 47, 1417–1446. [Google Scholar] [CrossRef]
  4. Chen, L.H.Y.; Goldstein, L.; Shao, Q.-M. Normal Approximation by Stein’s Method; Probability and its Applications (New York); Springer: Heidelberg, 2011. [Google Scholar]
  5. Chaggara, H.; Koepf, W. On linearization coefficients of Jacobi polynomials. Appl. Math. Lett. 2010, 23, 609–614. [Google Scholar] [CrossRef]
  6. Kim, Y.T.; Park, H.S. An Edgeworth expansion for functionals of Gaussian fields and its applications. Stoch. Proc. Appl. 2018, 44, 312–320. [Google Scholar]
  7. Kim, Y.T.; Park, H.S. Normal approximation when a chaos grade is greater than two. Prob Stat Lett. 2022, 44, 312–320. [Google Scholar] [CrossRef]
  8. Ledoux, M. Chaos of a Markov operator and the fourth moment condition. Ann. Probab. 2012, 40, 2439–2459. [Google Scholar] [CrossRef]
  9. Nourdin, I. Lectures on Gaussian approximations with Malliavin calculus. In Séminaire de Probabilités XLV; Lecture Notes in Mathematics 2078; 2013. [Google Scholar] [CrossRef]
  10. Nourdin, I.; Peccati, G. Stein’s method on Wiener Chaos. Probab. Theory Relat. Fields 2009, 145, 75–118. [Google Scholar] [CrossRef]
  11. Nourdin, I.; Peccati, G. Stein’s method and exact Berry-Esseen asymptotics for functionals of Gaussian fields. Ann. Probab. 2009, 37, 2231–2261. [Google Scholar] [CrossRef]
  12. Nourdin, I.; Peccati, G. Stein’s method meets Malliavin calculus: a short survey with new estimates. In Recent Development in Stochastic Dynamics and Stochastic Analysis; Interdisciplinary Mathematical Sciences, Volume 8; World Sci. Publ.: Hackensack, 2010; pp. 207–236.
  13. Nourdin, I.; Peccati, G. Normal Approximations with Malliavin Calculus: From Stein’s Method to Universality; Cambridge Tracts in Mathematics, Volume 192; Cambridge University Press: Cambridge, 2012.
  14. Nourdin, I.; Peccati, G. The optimal fourth moment theorem. Proc. Amer. Math. Soc.. 2015, 143, 3123–3133. [Google Scholar] [CrossRef]
  15. Nualart, D. Malliavin calculus and related topics. Probability and its Applications, 2nd ed.; Springer: Berlin, 2006. [Google Scholar]
  16. Nualart, D. Malliavin Calculus and Its Applications; CBMS Regional Conference Series in Mathematics, Number 110; 2008.
  17. Nualart, D.; Ortiz-Latorre, S. Central limit theorems for multiple stochastic integrals and Malliavin calculus. Stoch. Proc. Appl. 2008, 118, 614–628. [Google Scholar] [CrossRef]
  18. Nualart, D.; Peccati, G. Central limit theorems for sequences of multiple stochastic integrals. Ann. Probab., 2005, 33, 177–193. [Google Scholar] [CrossRef]
  19. Rahman, M. A non-negative representation of the linearization coefficients of the product of Jacobi polynomials. Canad. J. Math. 1981, 33, 915–928. [Google Scholar] [CrossRef]
  20. Stein, C. A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. In Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability; Volume II: Probability Theory, 583–602. University of California Press: Berkeley, California, 1972. [Google Scholar]
  21. Stein, C. Approximate Computation of Expectations; IMS, Hayward, CA. MR882007, 1986.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.