
Jeffreys Divergence and Generalized Fisher Information Measures on Fokker-Planck Space-Time Random Field

Submitted: 24 August 2023. Posted: 24 August 2023.
Abstract
In this paper, we derive the Jeffreys divergence, the generalized Fisher divergence, and the corresponding De Bruijn identities for space-time random fields. First, we establish the relation between the Fisher information of a space-time random field at a given space-time point and the ratio of the Jeffreys divergence between the field at two distinct space-time positions to the squared coordinate difference. In addition, we derive identities between the partial derivatives of the Jeffreys divergence with respect to the space-time variables and the generalized Fisher divergence, i.e. De Bruijn identities, for two space-time random fields obtained from the same Fokker-Planck equations under different parameters. At the end of the paper, we present three examples of Fokker-Planck equations on space-time random fields, identify their density functions, and derive the corresponding Jeffreys divergence, generalized Fisher information, generalized Fisher divergence, and the accompanying De Bruijn identities.
Keywords: 
Subject: Computer Science and Mathematics - Applied Mathematics

1. Introduction

Information entropy and Fisher information are quantities that measure the information content of random objects, and entropy divergences are derived from information entropy to measure the difference between two probability distributions. Formally, we can construct straightforward definitions of entropy divergence and Fisher information for a space-time random field based on the classical definitions. The density function in these definitions can be obtained in many different ways. In this paper, the density function of a space-time random field is obtained from Fokker-Planck equations. The traditional Fokker-Planck equation is a partial differential equation describing the probability density function of a random process[1]; it describes how the density function evolves in time. However, Fokker-Planck equations for random fields, particularly space-time random fields, do not yet have a standard form. The classical equation needs to be generalized because the underlying variable changes from time to space-time.
In this paper, we mainly obtain the relation between the Jeffreys divergence and the generalized Fisher information for space-time random fields generated by Fokker-Planck equations. The Jeffreys divergence is a symmetric entropy divergence generalized from the Kullback-Leibler divergence (KL divergence). In information theory and statistics, the Jeffreys divergence is often used to measure the distance between predicted and true distributions, but it has the drawback that when the two distributions overlap little, the result is infinite. To avoid divergent values, we consider the relationship between the Jeffreys divergence and the generalized Fisher information for space-time random fields with small differences in their space-time parameters.
Moreover, the classical De Bruijn identity describes the relationship between the differential entropy and the Fisher information of a Gaussian channel[2], and it has been generalized to other settings[3,4,5,6,7]. Following these works and their ideas, we obtain De Bruijn identities for the Jeffreys divergence and the generalized Fisher information of two space-time random fields whose density functions satisfy Fokker-Planck equations.

1.1. Space-time random field

Random fields were first studied by Kolmogorov[8,9,10], and the theory was gradually developed by Yaglom[11,12,13] in the middle of the last century. A random field with $n \in \mathbb{N}_+$ variables can be expressed as
$$X(t_1, t_2, \dots, t_n) \tag{1}$$
where $(t_1, t_2, \dots, t_n) \in \mathbb{R}^n$. We call (1) a generalized random field, or a multiparameter stochastic process. In many practical applications we use the notion of a space-time random field. A space-time random field on a $d$-dimensional space is expressed as
$$X(t, x)$$
where $(t, x) \in \mathbb{R}_+ \times \mathbb{R}^d$ are the space-time variables. Space-time random fields have many applications in statistics, finance, signal processing, stochastic partial differential equations and other fields[14,15,16,17,18,19,20,21,22,23,24,25,26,27].

1.2. Kramers-Moyal expansion and Fokker-Planck equation

In the literature on stochastic processes, the Kramers-Moyal expansion is a Taylor series expansion of the master equation, named after Kramers and Moyal[28,29]. The Kramers-Moyal expansion is a partial differential equation of infinite order,
$$\frac{\partial p(u,t)}{\partial t} = \sum_{n=1}^{\infty} \frac{(-1)^n}{n!} \frac{\partial^n}{\partial u^n}\big[K_n(u,t)\, p(u,t)\big]$$
where $p(u,t)$ is the density function and
$$K_n(u,t) = \int_{\mathbb{R}} (u'-u)^n\, W(u'\,|\,u,t)\, du'$$
is the $n$-th order conditional moment. Here $W(u'\,|\,u,t)$ is the transition probability rate. The Fokker-Planck equation is obtained by keeping only the first two terms of the Kramers-Moyal expansion. In statistical mechanics, the Fokker-Planck equation usually describes the time evolution of the probability density function of the velocity of a particle under the influence of drag and random forces, as in the famous case of Brownian motion. This equation is also often used to find the density function associated with an Itô stochastic differential equation[1].
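To make this connection concrete, the following minimal sketch (an illustration under our own assumptions, not taken from the paper) simulates the Itô SDE $dX_t = a\,dt + \sqrt{b}\,dW_t$ with constant coefficients by the Euler-Maruyama scheme and compares the empirical mean and variance with the Gaussian Fokker-Planck solution $N(x_0 + at,\, bt)$:

```python
import numpy as np

# Minimal sketch (not from the paper): for constant drift a and diffusion b,
# the Fokker-Planck equation of dX = a dt + sqrt(b) dW, X(0) = x0, has the
# Gaussian solution N(x0 + a*t, b*t). We check the first two moments by
# Euler-Maruyama simulation.
rng = np.random.default_rng(0)
a, b, x0, T, n_steps, n_paths = 0.5, 2.0, 1.0, 1.0, 1000, 100_000
dt = T / n_steps

x = np.full(n_paths, x0)
for _ in range(n_steps):
    x += a * dt + np.sqrt(b * dt) * rng.standard_normal(n_paths)

print(x.mean(), x0 + a * T)   # ~1.5
print(x.var(), b * T)         # ~2.0
```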

1.3. Differential entropy and De Bruijn identity

The entropy of a continuous distribution was proposed by Shannon in 1948 and is known as differential entropy[30]:
$$h(X) = -\int_{\mathbb{R}^d} p(x)\log p(x)\, dx$$
where $h(\cdot)$ denotes the differential entropy and $p(\cdot)$ is the probability density function of $X$. However, differential entropy is not easy to calculate in closed form and exists only for a limited class of distributions. There have been related studies on the entropy of stochastic processes and continuous systems[31,32,33,34]. If we consider the classical one-dimensional Gaussian channel model
$$Y_t = X + \sqrt{t}\, G$$
where $X$ is the input signal, $G$ is standard Gaussian noise, $t \geq 0$ is the noise strength and $Y_t$ is the output, then the density of $Y_t$ satisfies the Fokker-Planck equation
$$\frac{\partial p(y,t)}{\partial t} = \frac{1}{2}\frac{\partial^2 p(y,t)}{\partial y^2}$$
Further, by computing the differential entropy of $Y_t$ and differentiating with respect to $t$, we get
$$\frac{d h_{Y_t}(t)}{dt} = \frac{1}{2} FI_{Y_t}(t) \tag{8}$$
where
$$FI_{Y_t}(t) = \int_{\mathbb{R}} \left(\frac{\partial \log p(y,t)}{\partial y}\right)^2 p(y,t)\, dy$$
is the Fisher information of $Y_t$. Equation (8) is the de Bruijn identity. It connects the differential entropy $h(\cdot)$ and the Fisher information $FI(\cdot)$, showing that they are different aspects of the concept of "information".
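A minimal numerical sketch of (8), assuming a standard Gaussian input (so everything is available in closed form):

```python
import numpy as np

# Sketch: numerical check of the de Bruijn identity dh/dt = FI/2 for
# Y_t = X + sqrt(t) * G with X ~ N(0, 1), G ~ N(0, 1) independent.
# Then Y_t ~ N(0, 1 + t), h(t) = 0.5 * log(2*pi*e*(1 + t)), FI(t) = 1/(1 + t).

def entropy(t):
    return 0.5 * np.log(2 * np.pi * np.e * (1.0 + t))

def fisher_information(t):
    return 1.0 / (1.0 + t)

t, dt = 1.0, 1e-6
dh_dt = (entropy(t + dt) - entropy(t - dt)) / (2 * dt)  # central difference
print(dh_dt, 0.5 * fisher_information(t))               # both ~ 0.25
```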

1.4. Entropy Divergence

In information theory and statistics, an entropy divergence is a statistical distance generated from information entropy to measure the difference between two probability distributions. There are various divergences generated by information entropy, such as the Kullback-Leibler divergence[35], the Jeffreys divergence[36], the Jensen-Shannon divergence[37], the Rényi divergence[38], etc. These measures have been applied in a variety of fields such as finance, economics, biology, signal processing, pattern recognition and machine learning[39,40,41,42,43,44,45,46,47,48,49]. In this paper, we mainly focus on the Jeffreys divergence of two distributions,
$$JD(P, Q) = \int_{\mathbb{R}} \big(p(x) - q(x)\big)\log\frac{p(x)}{q(x)}\, d\mu(x)$$
where $\mu$ is a reference measure on $\mathbb{R}$.
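For two univariate Gaussians the Jeffreys divergence has a simple closed form; the following sketch (parameter values are arbitrary test choices) compares it with direct numerical integration of the definition:

```python
import numpy as np

# Sketch: Jeffreys divergence between two univariate Gaussians, closed form
# JD = (s1^2 + d^2)/(2 s2^2) + (s2^2 + d^2)/(2 s1^2) - 1, d = m1 - m2,
# against direct numerical integration of (p - q) * log(p / q).
def gauss(x, m, s):
    return np.exp(-(x - m) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))

def jeffreys_closed_form(m1, s1, m2, s2):
    d2 = (m1 - m2) ** 2
    return (s1**2 + d2) / (2 * s2**2) + (s2**2 + d2) / (2 * s1**2) - 1.0

def jeffreys_numeric(m1, s1, m2, s2):
    x = np.linspace(-25.0, 25.0, 1_000_001)
    p, q = gauss(x, m1, s1), gauss(x, m2, s2)
    return np.sum((p - q) * np.log(p / q)) * (x[1] - x[0])

print(jeffreys_closed_form(0.0, 1.0, 0.5, 1.5))  # 0.5278 (approximately)
print(jeffreys_numeric(0.0, 1.0, 0.5, 1.5))      # matches to ~6 digits
```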

2. Notations, Definitions and Propositions

2.1. Notations and Assumptions

In this paper, we use the following notations and definitions:
  • Given a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, two real-valued space-time random fields are denoted $X(\omega; t, x)$, $Y(\omega; s, y)$, or simply $X(t,x)$, $Y(s,y)$, where $\omega \in \Omega$ and $(t,x), (s,y) \in \mathbb{R}_+ \times \mathbb{R}^d$, $d \in \mathbb{N}_+$, are the space-time variables.
  • The probability density functions of $P$ and $Q$ are denoted $p$ and $q$. For $u \in \mathbb{R}$, $p(u;t,x)$ is the density of $X$ at $(t,x)$ and $q(u;s,y)$ is the density of $Y$ at $(s,y)$.
  • We suppose that the density functions satisfy $p(u;t,x), q(u;s,y) \in C^2(\mathbb{R}) \times C^1(\mathbb{R}_+ \times \mathbb{R}^d)$, i.e. $p(u;t,x)$ and $q(u;s,y)$ are twice differentiable with respect to $u$ and once differentiable with respect to $(t,x)$ or $(s,y)$, respectively.
  • We write $\tilde{x}_k$ and $\tilde{y}_k$ for $d$-dimensional real vectors that differ only in their $k$-th coordinates, which are $x_k$ and $y_k$ respectively, $k = 1, 2, \dots, d$.

2.2. Definitions

To obtain the generalized De Bruijn identities between the Jeffreys divergence and the Fisher divergence, we need to introduce some new definitions and propositions.
The first important information quantity is the Kullback-Leibler divergence for random fields. In direct analogy with the classical Kullback-Leibler divergence, we obtain Definition 1.
Definition 1: The Kullback-Leibler divergence between two space-time random fields $X(t,x)$ and $Y(s,y)$, $(t,x),(s,y) \in \mathbb{R}_+ \times \mathbb{R}^d$, with density functions $p(u;t,x)$ and $q(u;s,y)$, is defined as
$$KL\big(P(t,x)\,\|\,Q(s,y)\big) = \int_{\mathbb{R}} p(u;t,x)\log\frac{p(u;t,x)}{q(u;s,y)}\,du$$
Like its classical counterpart, the Kullback-Leibler divergence on random fields is not symmetric, i.e.
$$KL\big(P(t,x)\,\|\,Q(s,y)\big) \neq KL\big(Q(s,y)\,\|\,P(t,x)\big)$$
Following the classical definition of the Jeffreys divergence of two random variables, we mainly consider the Jeffreys divergence for random fields in this paper.
Definition 2: The Jeffreys divergence between two space-time random fields $X(t,x)$ and $Y(s,y)$, $(t,x),(s,y) \in \mathbb{R}_+ \times \mathbb{R}^d$, with density functions $p(u;t,x)$ and $q(u;s,y)$, is defined as
$$JD\big(P(t,x), Q(s,y)\big) = KL\big(P(t,x)\,\|\,Q(s,y)\big) + KL\big(Q(s,y)\,\|\,P(t,x)\big)$$
Here we replace $\|$ by a comma in the notation to emphasize that this measure is symmetric.
Another significant measure of information is the Fisher information. In this paper, we consider the generalized Fisher information of a space-time random field.
Definition 3: The generalized Fisher information of a space-time random field $X(t,x)$, $(t,x) \in \mathbb{R}_+ \times \mathbb{R}^d$, with density function $p(u;t,x)$, defined by a nonnegative function $f(\cdot)$, takes the form
$$FI_f\big(P(t,x)\big) = \int_{\mathbb{R}} f(u)\left(\frac{\partial}{\partial u}\log p(u;t,x)\right)^2 p(u;t,x)\,du \tag{14}$$
In particular, for $f \equiv 1$, $FI_1(t,x)$ is the Fisher information in the usual sense. In addition to equation (14), we have other similar forms of generalized Fisher information,
$$FI_f^{(t)}\big(P(t,x)\big) = \int_{\mathbb{R}} f(u)\left(\frac{\partial}{\partial t}\log p(u;t,x)\right)^2 p(u;t,x)\,du \tag{15}$$
and
$$FI_f^{(x_k)}\big(P(t,x)\big) = \int_{\mathbb{R}} f(u)\left(\frac{\partial}{\partial x_k}\log p(u;t,x)\right)^2 p(u;t,x)\,du \tag{16}$$
for $k = 1, 2, \dots, d$. Clearly, (15) and (16) are generalized Fisher informations with respect to the space-time variables. Regarding the generalized Fisher information (14), we can state the following simple proposition.
Proposition 4: For an arbitrary nonnegative function $f(\cdot)$, assume that the generalized Fisher information of a random variable $X$,
$$FI_f(X) := \int_{\mathbb{R}} f(x)\left(\frac{d\log p_X(x)}{dx}\right)^2 p_X(x)\,dx$$
is well defined, where $p_X(x)$ denotes the probability density. Then we have the generalized Fisher information inequality
$$\frac{1}{FI_f(X+Y)} \geq \frac{1}{FI_f(X)} + \frac{1}{FI_f(Y)}$$
When $f \equiv 1$, $FI_1(X)$ is the Fisher information in the usual sense.
Proof: Denote $Z = X + Y$, and let $p_X$, $p_Y$, $p_Z$ be the corresponding densities, i.e.
$$p_Z(z) = \int_{\mathbb{R}} p_X(x)\,p_Y(z-x)\,dx$$
with derivative
$$p_Z'(z) = \int_{\mathbb{R}} p_X'(x)\,p_Y(z-x)\,dx$$
If $p_X$, $p_Y$ and $p_Z$ never vanish, then
$$\frac{p_Z'(z)}{p_Z(z)} = \int_{\mathbb{R}} \frac{p_X(x)\,p_Y(z-x)}{p_Z(z)}\,\frac{p_X'(x)}{p_X(x)}\,dx = E\left[\frac{p_X'(x)}{p_X(x)}\,\Big|\,z\right]$$
is the conditional expectation of $p_X'(x)/p_X(x)$ given $z$. Similarly, we can obtain
$$\frac{p_Z'(z)}{p_Z(z)} = E\left[\frac{p_Y'(y)}{p_Y(y)}\,\Big|\,z\right]$$
and, for $\mu, \lambda \in \mathbb{R}$, we find that
$$E\left[\mu\frac{p_X'(x)}{p_X(x)} + \lambda\frac{p_Y'(y)}{p_Y(y)}\,\Big|\,z\right] = (\mu+\lambda)\frac{p_Z'(z)}{p_Z(z)}$$
Hence, by the conditional Cauchy-Schwarz inequality,
$$\left((\mu+\lambda)\frac{p_Z'(z)}{p_Z(z)}\right)^2 = \left(E\left[\mu\frac{p_X'(x)}{p_X(x)} + \lambda\frac{p_Y'(y)}{p_Y(y)}\,\Big|\,z\right]\right)^2 \leq E\left[\left(\mu\frac{p_X'(x)}{p_X(x)} + \lambda\frac{p_Y'(y)}{p_Y(y)}\right)^2\,\Big|\,z\right]$$
with equality only if
$$\mu\frac{p_X'(x)}{p_X(x)} + \lambda\frac{p_Y'(y)}{p_Y(y)} = (\mu+\lambda)\frac{p_Z'(z)}{p_Z(z)}$$
with probability 1 whenever $z = x+y$. Multiplying by $f(z)$, we have
$$f(z)\left((\mu+\lambda)\frac{p_Z'(z)}{p_Z(z)}\right)^2 \leq f(z)\,E\left[\left(\mu\frac{p_X'(x)}{p_X(x)} + \lambda\frac{p_Y'(y)}{p_Y(y)}\right)^2\,\Big|\,z\right]$$
Averaging both sides over the distribution of $z$ gives
$$E\left[f(z)\left((\mu+\lambda)\frac{p_Z'(z)}{p_Z(z)}\right)^2\right] \leq \mu^2\,E\left[f(z)\,E\left[\left(\frac{p_X'(x)}{p_X(x)}\right)^2\Big|\,z\right]\right] + \lambda^2\,E\left[f(z)\,E\left[\left(\frac{p_Y'(y)}{p_Y(y)}\right)^2\Big|\,z\right]\right]$$
i.e.
$$(\mu+\lambda)^2\,FI_f(X+Y) \leq \mu^2\,FI_f(X) + \lambda^2\,FI_f(Y)$$
Setting $\mu = 1/FI_f(X)$ and $\lambda = 1/FI_f(Y)$, we obtain
$$\frac{1}{FI_f(X+Y)} \geq \frac{1}{FI_f(X)} + \frac{1}{FI_f(Y)}$$
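As a quick sanity check of Proposition 4 in the classical case $f \equiv 1$ (a sketch under the assumption of Gaussian inputs, for which $FI_1(N(m,\sigma^2)) = 1/\sigma^2$ and the inequality is known to hold with equality):

```python
import numpy as np

# Sketch: for f = 1 and independent Gaussians, FI_1(N(m, s^2)) = 1/s^2 and
# X + Y ~ N(m1 + m2, s1^2 + s2^2), so the inequality holds with equality:
# 1/FI(X+Y) = s1^2 + s2^2 = 1/FI(X) + 1/FI(Y).
s1_sq, s2_sq = 1.5, 2.5

fi = lambda var: 1.0 / var                # Fisher information of a Gaussian
lhs = 1.0 / fi(s1_sq + s2_sq)             # 1 / FI(X + Y)
rhs = 1.0 / fi(s1_sq) + 1.0 / fi(s2_sq)   # 1/FI(X) + 1/FI(Y)
print(lhs, rhs, lhs >= rhs)               # 4.0 4.0 True
```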
Building on Definition 3, we can formulate further generalized Fisher information quantities.
Definition 5: The generalized cross Fisher information of two space-time random fields $X(t,x)$ and $Y(s,y)$, $(t,x),(s,y) \in \mathbb{R}_+ \times \mathbb{R}^d$, with density functions $p(u;t,x)$ and $q(u;s,y)$, defined by a nonnegative function $f(\cdot)$, takes the form
$$CFI_f\big(P(t,x), Q(s,y)\big) = \int_{\mathbb{R}} f(u)\left(\frac{\partial}{\partial u}\log q(u;s,y)\right)^2 p(u;t,x)\,du \tag{30}$$
Similar to the concept of cross-entropy, it is easy to verify that (30) is not symmetric in $P$ and $Q$.
Definition 6: The generalized Fisher divergence of two space-time random fields $X(t,x)$ and $Y(s,y)$, $(t,x),(s,y) \in \mathbb{R}_+ \times \mathbb{R}^d$, with density functions $p(u;t,x)$ and $q(u;s,y)$, defined by a nonnegative function $f(\cdot)$, takes the form
$$FD_f\big(P(t,x)\,\|\,Q(s,y)\big) = \int_{\mathbb{R}} f(u)\left(\frac{\partial}{\partial u}\log p(u;t,x) - \frac{\partial}{\partial u}\log q(u;s,y)\right)^2 p(u;t,x)\,du \tag{31}$$
In particular, for $f \equiv 1$, $FD_1(P(t,x)\,\|\,Q(s,y))$ is the Fisher divergence in the usual sense.
Obviously, the generalized Fisher divergence of random fields is not a symmetric divergence. To obtain a symmetric formula, we generalize (31) further.
Definition 7: The generalized Fisher divergence of two space-time random fields $X(t,x)$ and $Y(s,y)$, $(t,x),(s,y) \in \mathbb{R}_+ \times \mathbb{R}^d$, with density functions $p(u;t,x)$ and $q(u;s,y)$, defined by nonnegative functions $f(\cdot)$ and $g(\cdot)$, takes the form
$$FD_{(f,g)}\big(P(t,x)\,\|\,Q(s,y)\big) = \int_{\mathbb{R}} \left(f(u;t,x)\frac{\partial}{\partial u}\log p(u;t,x) - g(u;s,y)\frac{\partial}{\partial u}\log q(u;s,y)\right)\left(\frac{\partial}{\partial u}\log p(u;t,x) - \frac{\partial}{\partial u}\log q(u;s,y)\right)\big(p(u;t,x) + q(u;s,y)\big)\,du$$
In particular, for $f \equiv g$, $FD_{(f,f)}(P(t,x)\,\|\,Q(s,y))$ is the generalized Fisher divergence for random fields defined by a single function. In general, $FD_{(f,g)}(P(t,x)\,\|\,Q(s,y))$ is asymmetric with respect to $P$ and $Q$, i.e.
$$FD_{(f,g)}\big(P(t,x)\,\|\,Q(s,y)\big) \neq FD_{(f,g)}\big(Q(s,y)\,\|\,P(t,x)\big)$$
If we suppose that $f$ and $g$ are functions determined by $P$ and $Q$ respectively, i.e.
$$f(u;t,x) = T_{p(t,x)}(u), \qquad g(u;s,y) = T_{q(s,y)}(u)$$
where $T$ is an operator, the generalized Fisher divergence $FD_{(f,g)}(P(t,x)\,\|\,Q(s,y))$ can be rewritten as
$$FD_{(f,g)}\big(P(t,x)\,\|\,Q(s,y)\big) = \int_{\mathbb{R}} \left(T_{p(t,x)}(u)\frac{\partial}{\partial u}\log p(u;t,x) - T_{q(s,y)}(u)\frac{\partial}{\partial u}\log q(u;s,y)\right)\left(\frac{\partial}{\partial u}\log p(u;t,x) - \frac{\partial}{\partial u}\log q(u;s,y)\right)\big(p(u;t,x) + q(u;s,y)\big)\,du \tag{35}$$
and we can easily verify that
$$FD_{(f,g)}\big(P(t,x)\,\|\,Q(s,y)\big) = FD_{(g,f)}\big(Q(s,y)\,\|\,P(t,x)\big)$$
In this case, we call (35) the symmetric Fisher divergence for random fields generated by the operator $T$ and denote it
$$sFD_T\big(P(t,x), Q(s,y)\big) = \int_{\mathbb{R}} \left(T_{p(t,x)}(u)\frac{\partial}{\partial u}\log p(u;t,x) - T_{q(s,y)}(u)\frac{\partial}{\partial u}\log q(u;s,y)\right)\left(\frac{\partial}{\partial u}\log p(u;t,x) - \frac{\partial}{\partial u}\log q(u;s,y)\right)\big(p(u;t,x) + q(u;s,y)\big)\,du \tag{37}$$
Noticing that
$$Aa - Bb = \frac12\big[2(Aa - Bb)\big] = \frac12\big[(Aa - Ab) + (Ab - Bb) + (Aa - Ba) + (Ba - Bb)\big] = \frac12\big[(A+B)(a-b) + (A-B)(a+b)\big]$$
for $A, B, a, b \in \mathbb{R}$, we can rewrite (37) as
$$sFD_T\big(P(t,x), Q(s,y)\big) = \frac12\int_{\mathbb{R}} \big(T_{p(t,x)}(u) + T_{q(s,y)}(u)\big)\left(\frac{\partial}{\partial u}\log p(u;t,x) - \frac{\partial}{\partial u}\log q(u;s,y)\right)^2\big(p(u;t,x) + q(u;s,y)\big)\,du + \frac12\int_{\mathbb{R}} \big(T_{p(t,x)}(u) - T_{q(s,y)}(u)\big)\left[\left(\frac{\partial}{\partial u}\log p(u;t,x)\right)^2 - \left(\frac{\partial}{\partial u}\log q(u;s,y)\right)^2\right]\big(p(u;t,x) + q(u;s,y)\big)\,du$$
$$= \frac12\Big[FD_{T_{p(t,x)}+T_{q(s,y)}}\big(P(t,x)\,\|\,Q(s,y)\big) + FD_{T_{p(t,x)}+T_{q(s,y)}}\big(Q(s,y)\,\|\,P(t,x)\big)\Big] + \frac12\Big[FI_{T_{p(t,x)}-T_{q(s,y)}}\big(P(t,x)\big) - FI_{T_{p(t,x)}-T_{q(s,y)}}\big(Q(s,y)\big)\Big] + \frac12\Big[CFI_{T_{p(t,x)}-T_{q(s,y)}}\big(Q(s,y), P(t,x)\big) - CFI_{T_{p(t,x)}-T_{q(s,y)}}\big(P(t,x), Q(s,y)\big)\Big]$$
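The decomposition above is a purely algebraic consequence of the $Aa - Bb$ identity; the following sketch verifies it numerically for two Gaussians, with the illustrative choice $T_p(u) = p(u)$ (an operator mapping a density to itself, which is our own assumption and not prescribed by the text):

```python
import numpy as np

# Sketch: numerical check that sFD splits into the FD, FI and CFI parts above.
# Illustrative assumption: T maps a density to itself, T_p(u) = p(u).
u = np.linspace(-12.0, 12.0, 400_001)
du = u[1] - u[0]

def gauss(m, s):
    return np.exp(-(u - m) ** 2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

p, q = gauss(0.0, 1.0), gauss(0.8, 1.4)
sp, sq = -(u - 0.0) / 1.0**2, -(u - 0.8) / 1.4**2   # scores d/du log p, log q
Tp, Tq = p, q                                       # T_p(u) = p(u), T_q(u) = q(u)

integ = lambda h: np.sum(h) * du
direct = integ((Tp * sp - Tq * sq) * (sp - sq) * (p + q))        # sFD_T
fd_part = 0.5 * integ((Tp + Tq) * (sp - sq) ** 2 * (p + q))      # FD terms
fi_part = 0.5 * integ((Tp - Tq) * (sp**2 - sq**2) * (p + q))     # FI + CFI terms
print(direct, fd_part + fi_part)   # agree to numerical precision
```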
Proposition 8 (Kramers-Moyal expansion)[28,29]: Suppose that the random process $X(t)$ has conditional moments of every order. Then the probability density function $p(u,t)$ satisfies the Kramers-Moyal expansion
$$\frac{\partial p(u,t)}{\partial t} = \sum_{n=1}^{\infty}\frac{(-1)^n}{n!}\frac{\partial^n}{\partial u^n}\big[K_n(u,t)\,p(u,t)\big]$$
where
$$K_n(u,t) = \int_{\mathbb{R}} (u'-u)^n\, W(u'\,|\,u,t)\,du'$$
is the $n$-th order conditional moment. Here $W(u'\,|\,u,t)$ is the transition probability rate.
Proposition 9 (Pawula theorem)[50,51]: If the limit of the conditional moments of the random process $X(t)$,
$$\lim_{\Delta t\to 0}\frac{1}{\Delta t}E\Big[\big(X(t+\Delta t) - X(t)\big)^n\,\Big|\,X(t) = x\Big]$$
exists for all $n \in \mathbb{N}_+$, and the limit equals $0$ for some even $n$, then the limits are $0$ for all $n \geq 3$.
The Pawula theorem states that there are only three possible cases for the Kramers-Moyal expansion:
1) The Kramers-Moyal expansion is truncated at $n = 1$, in which case the process is deterministic;
2) The Kramers-Moyal expansion stops at $n = 2$, in which case the resulting equation is the Fokker-Planck equation, which describes diffusion processes;
3) The Kramers-Moyal expansion contains all terms up to $n = \infty$.
In this paper, we focus on the case of the Fokker-Planck equation.

3. Main Results and Proofs

Lemma 1: Suppose $f$, $p$, $q$ are continuously differentiable functions on $\mathbb{R}$. Then
$$\int_{\mathbb{R}} f(u)\left(\frac{d\log p(u)}{du} - \frac{d\log q(u)}{du}\right)\big(p(u)+q(u)\big)\,du = \int_{\mathbb{R}} f(u)\big(p'(u)-q'(u)\big)\,du$$
whenever both sides are well defined.
Proof:
$$\int_{\mathbb{R}} f(u)\left(\frac{d\log p(u)}{du} - \frac{d\log q(u)}{du}\right)\big(p(u)+q(u)\big)\,du = \int_{\mathbb{R}} f(u)p(u)\frac{d\log p(u)}{du}\,du + \int_{\mathbb{R}} f(u)q(u)\frac{d\log p(u)}{du}\,du - \int_{\mathbb{R}} f(u)p(u)\frac{d\log q(u)}{du}\,du - \int_{\mathbb{R}} f(u)q(u)\frac{d\log q(u)}{du}\,du$$
$$= \int_{\mathbb{R}} f(u)\big(p'(u)-q'(u)\big)\,du + \int_{\mathbb{R}} f(u)\left(\frac{q(u)}{p(u)}p'(u) - \frac{p(u)}{q(u)}q'(u)\right)du$$
Notice that
$$\frac{d}{du}\frac{p(u)}{q(u)} = \frac{1}{q(u)}\left(p'(u) - \frac{p(u)}{q(u)}q'(u)\right), \qquad \frac{d}{du}\frac{q(u)}{p(u)} = \frac{1}{p(u)}\left(q'(u) - \frac{q(u)}{p(u)}p'(u)\right)$$
i.e.
$$\frac{p(u)}{q(u)}q'(u) = p'(u) - q(u)\frac{d}{du}\frac{p(u)}{q(u)}, \qquad \frac{q(u)}{p(u)}p'(u) = q'(u) - p(u)\frac{d}{du}\frac{q(u)}{p(u)}$$
so that the second integral can be rewritten as
$$\int_{\mathbb{R}} f(u)\left(\frac{q(u)}{p(u)}p'(u) - \frac{p(u)}{q(u)}q'(u)\right)du = \int_{\mathbb{R}} f(u)\left(q'(u) - p(u)\frac{d}{du}\frac{q(u)}{p(u)}\right)du - \int_{\mathbb{R}} f(u)\left(p'(u) - q(u)\frac{d}{du}\frac{p(u)}{q(u)}\right)du = -\int_{\mathbb{R}} f(u)\left(\frac{q(u)}{p(u)}p'(u) - \frac{p(u)}{q(u)}q'(u)\right)du$$
Hence this integral vanishes, and we get the result
$$\int_{\mathbb{R}} f(u)\left(\frac{d\log p(u)}{du} - \frac{d\log q(u)}{du}\right)\big(p(u)+q(u)\big)\,du = \int_{\mathbb{R}} f(u)\big(p'(u)-q'(u)\big)\,du$$
Lemma 2: Suppose that the Fokker-Planck equation for the density function $p(u,t)$ is
$$\frac{\partial p(u,t)}{\partial t} = \frac12\frac{\partial^2}{\partial u^2}\big[b(u,t)\,p(u,t)\big] - \frac{\partial}{\partial u}\big[a(u,t)\,p(u,t)\big]$$
and that a variable substitution $u = u(v)$, or $v = v(u)$, converts the equation into
$$\frac{\partial p(v,t)}{\partial t} = \frac{E(t)}{2}\frac{\partial^2 p(v,t)}{\partial v^2} - F(t)\frac{\partial p(v,t)}{\partial v} + G(t)\,p(v,t) \tag{50}$$
where $E(t)$, $F(t)$ and $G(t)$ are functions of $t$ and $E(t) > 0$. Then the density is
$$p(u,t) = \frac{e^{\int_0^t G(s)\,ds}}{\sqrt{2\pi\int_0^t E(s)\,ds}}\exp\left(-\frac{\big(v(u) - v(u_0) - \int_0^t F(s)\,ds\big)^2}{2\int_0^t E(s)\,ds}\right)$$
Proof: Using the Fourier transform, we obtain the solution of the Fokker-Planck equation (50) in $v$:
$$p(v,t) = \frac{e^{\int_0^t G(s)\,ds}}{\sqrt{2\pi\int_0^t E(s)\,ds}}\exp\left(-\frac{\big(v - v_0 - \int_0^t F(s)\,ds\big)^2}{2\int_0^t E(s)\,ds}\right)$$
where $v_0 = v(u_0)$. It is worth noting that while $p$ is a probability density function in $u$, it is not a probability density function in $v$. The transformation $u = u(v)$, or $v = v(u)$, is introduced only to solve the equation and does not mean that $v$ corresponds to a random process; therefore, the integral of $p$ with respect to $v$ need not be $1$. However, we obtain
$$p(u,t) = \frac{e^{\int_0^t G(s)\,ds}}{\sqrt{2\pi\int_0^t E(s)\,ds}}\exp\left(-\frac{\big(v(u) - v(u_0) - \int_0^t F(s)\,ds\big)^2}{2\int_0^t E(s)\,ds}\right)$$
and
$$\int_{I_u}\frac{e^{\int_0^t G(s)\,ds}}{\sqrt{2\pi\int_0^t E(s)\,ds}}\exp\left(-\frac{\big(v(u) - v(u_0) - \int_0^t F(s)\,ds\big)^2}{2\int_0^t E(s)\,ds}\right)du = 1$$
where $I_u$ is the support of $p(u,t)$ with respect to $u$.
Theorem 3: Suppose that the conditional moments of the space-time random field $X(t,x)$, $u \in \mathbb{R}$, $(t,x) \in \mathbb{R}_+ \times \mathbb{R}^d$, satisfy the Pawula theorem, so that the Kramers-Moyal expansions truncate at second order. Then the probability density function $p(u;t,x)$ satisfies the Fokker-Planck equations
$$\begin{cases}\dfrac{\partial}{\partial t}p(u;t,x) = \dfrac12\dfrac{\partial^2}{\partial u^2}\big[b_0(u;t,x)\,p(u;t,x)\big] - \dfrac{\partial}{\partial u}\big[a_0(u;t,x)\,p(u;t,x)\big] \\[2mm] \dfrac{\partial}{\partial x_k}p(u;t,x) = \dfrac12\dfrac{\partial^2}{\partial u^2}\big[b_k(u;t,x)\,p(u;t,x)\big] - \dfrac{\partial}{\partial u}\big[a_k(u;t,x)\,p(u;t,x)\big]\end{cases} \qquad k = 1,2,\dots,d \tag{55}$$
where
$$a_0(u;t,x) = \lim_{\Delta t\to0}\frac{M_1(u;t,\Delta t,x)}{\Delta t}, \quad b_0(u;t,x) = \lim_{\Delta t\to0}\frac{M_2(u;t,\Delta t,x)}{\Delta t}, \quad a_k(u;t,x) = \lim_{\Delta x_k\to0}\frac{M_1(u;t,x,\Delta x_k)}{\Delta x_k}, \quad b_k(u;t,x) = \lim_{\Delta x_k\to0}\frac{M_2(u;t,x,\Delta x_k)}{\Delta x_k} \tag{56}$$
for $k = 1,2,\dots,d$, and
$$M_n(u;t,\Delta t,x) = E\big[\big(X(t+\Delta t,x)-X(t,x)\big)^n\,\big|\,X(t,x)=u\big], \qquad M_n(u;t,x,\Delta x_k) = E\big[\big(X(t,x+\Delta x_k e_k)-X(t,x)\big)^n\,\big|\,X(t,x)=u\big] \tag{57}$$
are the $n$-th order conditional moments, $e_1, e_2, \dots, e_d$ being the standard orthonormal basis of $\mathbb{R}^d$.
Proof: For $\Delta t \geq 0$, the Kramers-Moyal expansion gives the difference of the density function in the time variable,
$$p(u;t+\Delta t,x) - p(u;t,x) = \sum_{n=1}^{+\infty}\frac{(-1)^n}{n!}\frac{\partial^n}{\partial u^n}\big[M_n(u;t,\Delta t,x)\,p(u;t,x)\big]$$
where $M_n(u;t,\Delta t,x)$ is the $n$-th order conditional moment in (57). Then the partial derivative of the density function with respect to $t$ is
$$\frac{\partial}{\partial t}p(u;t,x) = \lim_{\Delta t\to0}\frac{1}{\Delta t}\sum_{n=1}^{+\infty}\frac{(-1)^n}{n!}\frac{\partial^n}{\partial u^n}\big[M_n(u;t,\Delta t,x)\,p(u;t,x)\big]$$
By the Pawula theorem, if the Kramers-Moyal expansion stops after the second term, we obtain the Fokker-Planck equation in the time variable $t$,
$$\frac{\partial}{\partial t}p(u;t,x) = \frac12\frac{\partial^2}{\partial u^2}\big[b_0(u;t,x)\,p(u;t,x)\big] - \frac{\partial}{\partial u}\big[a_0(u;t,x)\,p(u;t,x)\big]$$
with $a_0$ and $b_0$ as in (56). Similarly, considering an increment $\Delta x_k$ of the space variable $x_k$, we obtain the Fokker-Planck equation in $x_k$,
$$\frac{\partial}{\partial x_k}p(u;t,x) = \frac12\frac{\partial^2}{\partial u^2}\big[b_k(u;t,x)\,p(u;t,x)\big] - \frac{\partial}{\partial u}\big[a_k(u;t,x)\,p(u;t,x)\big]$$
with $a_k$ and $b_k$ as in (56), where the increment is taken along the basis vector $e_k$, $k = 1,2,\dots,d$.
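As a concrete illustration of (56) and (57), anticipating the Brownian sheet of Section 4.1: for the Brownian sheet $B(t,x)$ with $E[B(t,x)B(s,y)] = (t\wedge s)\prod_k(x_k\wedge y_k)$, the increment $B(t+\Delta t,x) - B(t,x)$ is Gaussian with mean $0$ and variance $\mathrm{prod}(x)\,\Delta t$, and it is uncorrelated with, hence independent of, $B(t,x)$. Therefore
$$M_1(u;t,\Delta t,x) = 0, \qquad M_2(u;t,\Delta t,x) = \mathrm{prod}(x)\,\Delta t,$$
so that $a_0(u;t,x) = 0$ and $b_0(u;t,x) = \mathrm{prod}(x)$, which is exactly the drift-free heat-type Fokker-Planck equation obtained for the sheet in Section 4.1.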
Theorem 4: Suppose that $p(u;t,x)$ and $p(u;s,y)$ are rapidly decreasing density functions of the space-time random field $X(t,x)$, so that the boundary terms in the partial integrations vanish, $(t,x),(s,y) \in \mathbb{R}_+ \times \mathbb{R}^d$. If there are two infinitesimals $\varepsilon(t,x,s,y) > 0$ and $\eta(t,x,s,y) > 0$ satisfying
$$\lim_{|t-s|\to0}\varepsilon(t,x,s,y) = \lim_{|t-s|\to0}\eta(t,x,s,y) = 0, \qquad \lim_{|x_k-y_k|\to0}\varepsilon(t,x,s,y) = \lim_{|x_k-y_k|\to0}\eta(t,x,s,y) = 0, \qquad k = 1,2,\dots,d$$
where $x_k, y_k$ are the $k$-th coordinates of $x, y \in \mathbb{R}^d$, such that
$$1 - \varepsilon(t,x,s,y) < \frac{p(u;t,x)}{p(u;s,y)} < 1 + \eta(t,x,s,y)$$
for all meaningful $u \in \mathbb{R}$, then we have
$$\frac{1}{1+\eta(t,x,s,y)} + \lambda(t-s) < \frac{JD\big(P(t,x),P(s,x)\big)}{|t-s|^2\,FI_1^{(s)}\big(X(s,x)\big)} < \frac{1}{1-\varepsilon(t,x,s,y)} + \lambda(t-s)$$
$$\frac{1}{1+\eta(t,x,s,y)} + \mu_k(x_k-y_k) < \frac{JD\big(P(t,\tilde x_k),P(t,\tilde y_k)\big)}{|x_k-y_k|^2\,FI_1^{(y_k)}\big(X(t,\tilde y_k)\big)} < \frac{1}{1-\varepsilon(t,x,s,y)} + \mu_k(x_k-y_k) \tag{68}$$
where $FI_1^{(s)}$ and $FI_1^{(y_k)}$ are the generalized Fisher informations with respect to the space-time variables, and
$$\lim_{|t-s|\to0}\lambda(t-s) = 0, \qquad \lim_{|x_k-y_k|\to0}\mu_k(x_k-y_k) = 0$$
are infinitesimals in the space-time variables, $k = 1,2,\dots,d$.
Proof: First, consider $JD\big(P(t,x),P(s,x)\big)/|t-s|^2$. By the mean value theorem there is $0 < \gamma < 1$ such that
$$\log p(u;t,x) - \log p(u;s,x) = \frac{p(u;t,x) - p(u;s,x)}{\gamma\,p(u;t,x) + (1-\gamma)\,p(u;s,x)} = \frac{\frac{\partial}{\partial s}p(u;s,x)\,(t-s) + o(t-s)}{\gamma\,p(u;t,x) + (1-\gamma)\,p(u;s,x)}$$
Then we get
$$\frac{JD\big(P(t,x),P(s,x)\big)}{|t-s|^2} = \frac{1}{|t-s|^2}\int_{\mathbb{R}}\frac{\big(\frac{\partial}{\partial s}p(u;s,x)(t-s) + o(t-s)\big)^2}{\gamma\,p(u;t,x) + (1-\gamma)\,p(u;s,x)}\,du = \int_{\mathbb{R}}\left(\frac{\partial}{\partial s}\log p(u;s,x) + \frac{1}{p(u;s,x)}\frac{o(t-s)}{|t-s|}\right)^2 p(u;s,x)\cdot\frac{1}{\gamma\frac{p(u;t,x)}{p(u;s,x)} + (1-\gamma)}\,du$$
Recalling the assumptions, there are infinitesimals $\varepsilon(t,x,s,y), \eta(t,x,s,y) > 0$ with
$$\lim_{|t-s|\to0}\varepsilon(t,x,s,y) = \lim_{|t-s|\to0}\eta(t,x,s,y) = 0$$
such that $1-\varepsilon(t,x,s,y) < p(u;t,x)/p(u;s,y) < 1+\eta(t,x,s,y)$ for all meaningful $u \in \mathbb{R}$, whence
$$\frac{1}{1+\eta(t,x,s,y)} < \frac{1}{\gamma\big(1+\eta(t,x,s,y)\big) + (1-\gamma)} < \frac{1}{\gamma\frac{p(u;t,x)}{p(u;s,x)} + (1-\gamma)} < \frac{1}{\gamma\big(1-\varepsilon(t,x,s,y)\big) + (1-\gamma)} < \frac{1}{1-\varepsilon(t,x,s,y)}$$
Since the term $o(t-s)/|t-s|$ vanishes as $|t-s|\to0$, there exists an infinitesimal $\lambda(t-s)$ such that
$$\frac{1}{1+\eta(t,x,s,y)} + \lambda(t-s) < \frac{JD\big(P(t,x),P(s,x)\big)}{|t-s|^2\,FI_1^{(s)}\big(X(s,x)\big)} < \frac{1}{1-\varepsilon(t,x,s,y)} + \lambda(t-s)$$
Similarly, we obtain the bounds for the difference quotients of the Jeffreys divergence in the space coordinates,
$$\frac{1}{1+\eta(t,x,s,y)} + \mu_k(x_k-y_k) < \frac{JD\big(P(t,\tilde x_k),P(t,\tilde y_k)\big)}{|x_k-y_k|^2\,FI_1^{(y_k)}\big(X(t,\tilde y_k)\big)} < \frac{1}{1-\varepsilon(t,x,s,y)} + \mu_k(x_k-y_k)$$
for $k = 1,2,\dots,d$.
Theorem 5: Suppose that $p(u;t,x)$ and $q(u;t,x)$ are rapidly decreasing density functions of the space-time random fields $X(t,x)$ and $Y(t,x)$, so that the boundary terms in the partial integrations vanish, $(t,x) \in \mathbb{R}_+ \times \mathbb{R}^d$, and that they satisfy Fokker-Planck equations of the form (55) with coefficients $(a_k^{(1)}, b_k^{(1)})$ and $(a_k^{(2)}, b_k^{(2)})$, respectively. Then the Jeffreys divergence $JD\big(P(t,x),Q(t,x)\big)$ satisfies the generalized De Bruijn identities
$$\begin{cases}\dfrac{\partial}{\partial t}JD\big(P(t,x),Q(t,x)\big) = -\dfrac12 FD_{(b_0^{(1)},b_0^{(2)})}\big(P(t,x)\,\|\,Q(t,x)\big) - R_0\big(P(t,x)\,\|\,Q(t,x)\big) \\[2mm] \dfrac{\partial}{\partial x_k}JD\big(P(t,x),Q(t,x)\big) = -\dfrac12 FD_{(b_k^{(1)},b_k^{(2)})}\big(P(t,x)\,\|\,Q(t,x)\big) - R_k\big(P(t,x)\,\|\,Q(t,x)\big)\end{cases} \qquad k = 1,2,\dots,d \tag{77}$$
where $a_k^{(i)}, b_k^{(i)}$ are of the forms (56) and (57) for the two fields, and
$$R_k\big(P(t,x)\,\|\,Q(t,x)\big) = \int_{\mathbb{R}}\left[\frac12\frac{\partial}{\partial u}\big(b_k^{(1)} - b_k^{(2)}\big) - \big(a_k^{(1)} - a_k^{(2)}\big)\right]\left(\frac{\partial}{\partial u}\log p - \frac{\partial}{\partial u}\log q\right)(p+q)\,du, \qquad k = 0,1,\dots,d$$
where we omit the arguments $(u;t,x)$ in the integrals.
Proof: Notice that
$$JD\big(P(t,x),Q(t,x)\big) = KL\big(P(t,x)\,\|\,Q(t,x)\big) + KL\big(Q(t,x)\,\|\,P(t,x)\big) = \int_{\mathbb{R}}\left(p\log\frac{p}{q} + q\log\frac{q}{p}\right)du$$
where $p := p(u;t,x)$ and $q := q(u;t,x)$. Then, using $\int_{\mathbb{R}}\partial_t p\,du = \int_{\mathbb{R}}\partial_t q\,du = 0$,
$$\frac{\partial}{\partial t}JD\big(P(t,x),Q(t,x)\big) = \int_{\mathbb{R}}\left[\left(\log\frac{p}{q} - \frac{q}{p}\right)\frac{\partial p}{\partial t} + \left(\log\frac{q}{p} - \frac{p}{q}\right)\frac{\partial q}{\partial t}\right]du$$
Substituting the Fokker-Planck equations and integrating by parts once,
$$\frac{\partial}{\partial t}JD = -\int_{\mathbb{R}}\left[\frac12\frac{\partial}{\partial u}\big(b_0^{(1)}p\big) - a_0^{(1)}p\right]\frac{\partial}{\partial u}\left(\log\frac{p}{q} - \frac{q}{p}\right)du - \int_{\mathbb{R}}\left[\frac12\frac{\partial}{\partial u}\big(b_0^{(2)}q\big) - a_0^{(2)}q\right]\frac{\partial}{\partial u}\left(\log\frac{q}{p} - \frac{p}{q}\right)du$$
Since
$$\frac{\partial}{\partial u}\left(\log\frac{p}{q} - \frac{q}{p}\right) = \left(\frac{\partial}{\partial u}\log p - \frac{\partial}{\partial u}\log q\right)\frac{p+q}{p}, \qquad \frac{\partial}{\partial u}\left(\log\frac{q}{p} - \frac{p}{q}\right) = -\left(\frac{\partial}{\partial u}\log p - \frac{\partial}{\partial u}\log q\right)\frac{p+q}{q}$$
we obtain
$$\frac{\partial}{\partial t}JD = -\int_{\mathbb{R}}\left[\frac{\frac12\partial_u\big(b_0^{(1)}p\big) - a_0^{(1)}p}{p} - \frac{\frac12\partial_u\big(b_0^{(2)}q\big) - a_0^{(2)}q}{q}\right]\left(\frac{\partial}{\partial u}\log p - \frac{\partial}{\partial u}\log q\right)(p+q)\,du$$
Expanding $\partial_u(b_0^{(1)}p)/p = \partial_u b_0^{(1)} + b_0^{(1)}\partial_u\log p$ and likewise for $q$ gives
$$\frac{\partial}{\partial t}JD = -\int_{\mathbb{R}}\left(\frac12 b_0^{(1)}\frac{\partial}{\partial u}\log p - \frac12 b_0^{(2)}\frac{\partial}{\partial u}\log q\right)\left(\frac{\partial}{\partial u}\log p - \frac{\partial}{\partial u}\log q\right)(p+q)\,du - \int_{\mathbb{R}}\left[\frac12\frac{\partial}{\partial u}\big(b_0^{(1)} - b_0^{(2)}\big) - \big(a_0^{(1)} - a_0^{(2)}\big)\right]\left(\frac{\partial}{\partial u}\log p - \frac{\partial}{\partial u}\log q\right)(p+q)\,du$$
$$= -\frac12 FD_{(b_0^{(1)},b_0^{(2)})}\big(P(t,x)\,\|\,Q(t,x)\big) - R_0\big(P(t,x)\,\|\,Q(t,x)\big)$$
Similarly, for $k = 1,2,\dots,d$, the same computation with $\partial/\partial x_k$ in place of $\partial/\partial t$ yields the generalized De Bruijn identities in the space variables,
$$\frac{\partial}{\partial x_k}JD\big(P(t,x),Q(t,x)\big) = -\frac12 FD_{(b_k^{(1)},b_k^{(2)})}\big(P(t,x)\,\|\,Q(t,x)\big) - R_k\big(P(t,x)\,\|\,Q(t,x)\big)$$
which completes the proof.

4. Three Fokker-Planck Random Fields and Their Corresponding Information Measures

In this section, we present three types of Fokker-Planck equations and obtain their density functions and the corresponding information measures: the Jeffreys divergence, the generalized Fisher information and the Fisher divergence. Starting from these quantities, we illustrate the two results of Theorem 4 and Theorem 5. On the one hand, for the same Fokker-Planck space-time random field at different space-time points, we compute the quotient of the Jeffreys divergence by the squared space-time increment and compare it with the generalized Fisher information. On the other hand, for space-time random fields governed by Fokker-Planck equations of the same type but with different parameters, we obtain the De Bruijn identities relating the Jeffreys divergence and the generalized Fisher divergence at the same space-time position.

4.1. A Trivial Equation

If we let
$$a_0(u;t,x) = a_0(t,x), \quad b_0(u;t,x) = b_0(t,x) > 0, \quad a_k(u;t,x) = a_k(t,x), \quad b_k(u;t,x) = b_k(t,x) > 0, \qquad k = 1,2,\dots,d$$
be continuously differentiable functions independent of $u$, with initial density $p(u;0,x) = \delta\big(u - u_0(x)\big)$, then the Fokker-Planck equations are simple parabolic equations, and the solutions can be obtained by the Fourier transform:
$$p(u;t,x) = \frac{1}{\sqrt{2\pi\int_0^t b_0(s,x)\,ds}}\exp\left(-\frac{\big(u - u_0(x) - \int_0^t a_0(s,x)\,ds\big)^2}{2\int_0^t b_0(s,x)\,ds}\right), \qquad p(u;t,x) = \frac{1}{\sqrt{2\pi\int_0^{x_k} b_k(t,x)\,dx_k}}\exp\left(-\frac{\big(u - u_0(x) - \int_0^{x_k} a_k(t,x)\,dx_k\big)^2}{2\int_0^{x_k} b_k(t,x)\,dx_k}\right)$$
To write these results in a unified form, we suppose that
$$\int_0^t a_0(s,x)\,ds = \int_0^{x_k} a_k(t,x)\,dx_k, \qquad \int_0^t b_0(s,x)\,ds = \int_0^{x_k} b_k(t,x)\,dx_k$$
hold for $k = 1,2,\dots,d$. This means we need to find functions $\alpha(t,x)$ and $\beta(t,x) > 0$ subject to the total differentials
$$d\alpha(t,x) = a_0\,dt + a_1\,dx_1 + \dots + a_d\,dx_d, \qquad d\beta(t,x) = b_0\,dt + b_1\,dx_1 + \dots + b_d\,dx_d$$
i.e. $a_0, a_1, \dots, a_d$ and $b_0, b_1, \dots, b_d$ are the partial derivatives of $\alpha(t,x)$ and $\beta(t,x)$ with respect to the space-time variables, respectively. In this way, we get the probability density function
$$p(u;t,x) = \frac{1}{\sqrt{2\pi\beta(t,x)}}\exp\left(-\frac{\big(u - u_0(x) - \alpha(t,x)\big)^2}{2\beta(t,x)}\right) \tag{89}$$
In fact, there are many examples whose Fokker-Planck equations fit this form. Let $B(t,x)$ be a $(1+d,1)$ Brownian sheet, that is, a centered continuous Gaussian process indexed by $1+d$ real positive parameters and taking values in $\mathbb{R}$[52,53], whose covariance structure is given by
$$E\big[B(t,x)B(s,y)\big] = (t\wedge s)\prod_{k=1}^d (x_k\wedge y_k)$$
for $(t,x_1,\dots,x_d), (s,y_1,\dots,y_d) \in \mathbb{R}_+ \times \mathbb{R}_+^d$, where $\cdot\wedge\cdot$ denotes the minimum of two numbers. We can easily get
$$E\big[B^2(t,x)\big] = \mathrm{prod}(t,x)$$
where $\mathrm{prod}(t,x) = t\,x_1x_2\cdots x_d$ is the coordinate product of $(t,x)$, and the density function is
$$p^{(1)}(u;t,x) = \frac{1}{\sqrt{2\pi\,\mathrm{prod}(t,x)}}\,e^{-\frac{u^2}{2\,\mathrm{prod}(t,x)}} \tag{92}$$
Moreover, the Fokker-Planck equations are
$$\frac{\partial}{\partial t}p^{(1)}(u;t,x) = \frac{\mathrm{prod}(x)}{2}\frac{\partial^2}{\partial u^2}p^{(1)}(u;t,x), \qquad \frac{\partial}{\partial x_k}p^{(1)}(u;t,x) = \frac{\mathrm{prod}(t,x)}{2x_k}\frac{\partial^2}{\partial u^2}p^{(1)}(u;t,x), \qquad k = 1,2,\dots,d$$
with the initial condition $p(u;t,x) = \delta(u)$ when $\mathrm{prod}(t,x) = 0$, where $\mathrm{prod}(x) = x_1x_2\cdots x_d$.
Following the construction of the Brownian bridge from Brownian motion[53], we call
$$B^*(t,x) = B(t,x) - \mathrm{prod}(t,x)\,B(1,1,\dots,1)$$
the Brownian sheet bridge on the cube $(t,x) \in [0,1]\times[0,1]^d$, where $B(t,x)$ is the Brownian sheet. Obviously, $B^*(t,x)$ is Gaussian with $E[B^*(t,x)] = 0$ and
$$E\big[B^*(t,x)B^*(s,y)\big] = E\big[B(t,x)B(s,y)\big] - \mathrm{prod}(t,x)\,\mathrm{prod}(s,y)$$
so that
$$E\big[(B^*)^2(t,x)\big] = \mathrm{prod}(t,x)\big(1 - \mathrm{prod}(t,x)\big)$$
and the density function of $B^*(t,x)$ is
$$p^{(2)}(u;t,x) = \frac{1}{\sqrt{2\pi\,\mathrm{prod}(t,x)\big(1-\mathrm{prod}(t,x)\big)}}\exp\left(-\frac{u^2}{2\,\mathrm{prod}(t,x)\big(1-\mathrm{prod}(t,x)\big)}\right) \tag{97}$$
On the other hand, the Fokker-Planck equations are
$$\frac{\partial}{\partial t}p^{(2)}(u;t,x) = \frac{\mathrm{prod}(x)\big(1 - 2\,\mathrm{prod}(t,x)\big)}{2}\frac{\partial^2}{\partial u^2}p^{(2)}(u;t,x), \qquad \frac{\partial}{\partial x_k}p^{(2)}(u;t,x) = \frac{\mathrm{prod}(t,x)}{2x_k}\big(1 - 2\,\mathrm{prod}(t,x)\big)\frac{\partial^2}{\partial u^2}p^{(2)}(u;t,x), \qquad k = 1,2,\dots,d$$
with the initial condition $p(u;t,x) = \delta(u)$ when $\mathrm{prod}(t,x) = 0$, whose solution is (97).
Now we use the density functions (92) and (97) to obtain the respective Jeffreys divergences and the generalized De Bruijn identities. First, the Jeffreys divergence of (89) at two different space-time points is
$$JD\big(P(t,x),P(s,y)\big) = \frac{\big(\alpha(t,x)-\alpha(s,y)\big)^2 + \beta(s,y)}{2\beta(t,x)} + \frac{\big(\alpha(t,x)-\alpha(s,y)\big)^2 + \beta(t,x)}{2\beta(s,y)} - 1 \tag{99}$$
and, for two densities $P^{(1)}$ and $P^{(2)}$ of the form (89) with parameters $(\alpha_1,\beta_1)$ and $(\alpha_2,\beta_2)$, the Fisher divergence at the same space-time point is
$$FD_{(b_k^{(1)},b_k^{(2)})}\big(P^{(1)}(t,x)\,\|\,P^{(2)}(t,x)\big) = \frac{1}{\beta_1^2(t,x)\,\beta_2^2(t,x)}\Big\{\big(\alpha_1(t,x)-\alpha_2(t,x)\big)^2\big[b_k^{(2)}\beta_1^2(t,x) + b_k^{(1)}\beta_2^2(t,x)\big] + \big(\beta_1(t,x)-\beta_2(t,x)\big)\big(\beta_1(t,x)+\beta_2(t,x)\big)\big[b_k^{(2)}\beta_1(t,x) - b_k^{(1)}\beta_2(t,x)\big]\Big\} \tag{100}$$
for $k = 0,1,\dots,d$.
Substituting the density of the Brownian sheet into (99), the Jeffreys divergence of the Brownian sheet at different space-time points is
$$JD\big(P^{(1)}(t,x),P^{(1)}(s,y)\big) = \frac{\mathrm{prod}(s,y)}{2\,\mathrm{prod}(t,x)} + \frac{\mathrm{prod}(t,x)}{2\,\mathrm{prod}(s,y)} - 1$$
and the generalized Fisher informations with respect to the space-time variables are
$$FI_1^{(t)}\big(P^{(1)}(t,x)\big) = \frac{1}{2t^2}, \qquad FI_1^{(x_k)}\big(P^{(1)}(t,x)\big) = \frac{1}{2x_k^2}, \qquad k = 1,2,\dots,d$$
Then the quotients of the Jeffreys divergence by the squared space-time increments are
$$\frac{JD\big(P^{(1)}(t,x),P^{(1)}(s,x)\big)}{|t-s|^2} = \frac{1}{2st}, \qquad \frac{JD\big(P^{(1)}(t,\tilde x_k),P^{(1)}(t,\tilde y_k)\big)}{|x_k-y_k|^2} = \frac{1}{2x_ky_k}$$
and the relations between the quotients and the generalized Fisher information are
$$\frac{JD\big(P^{(1)}(t,x),P^{(1)}(s,x)\big)}{|t-s|^2\,FI_1^{(t)}\big(P^{(1)}(t,x)\big)} = \frac{t}{s}, \qquad \frac{JD\big(P^{(1)}(t,\tilde x_k),P^{(1)}(t,\tilde y_k)\big)}{|x_k-y_k|^2\,FI_1^{(x_k)}\big(P^{(1)}(t,x)\big)} = \frac{x_k}{y_k} \tag{104}$$
for $k = 1,2,\dots,d$. Letting the space-time point $(s,y)$ approach $(t,x)$, both ratios in (104) tend to $1$, in agreement with Theorem 4.
Similarly, the Jeffreys divergence of the Brownian sheet bridge at different space-time points is
$$JD\big(P^{(2)}(t,x),P^{(2)}(s,y)\big) = \frac{\mathrm{prod}(s,y)\big(1-\mathrm{prod}(s,y)\big)}{2\,\mathrm{prod}(t,x)\big(1-\mathrm{prod}(t,x)\big)} + \frac{\mathrm{prod}(t,x)\big(1-\mathrm{prod}(t,x)\big)}{2\,\mathrm{prod}(s,y)\big(1-\mathrm{prod}(s,y)\big)} - 1$$
and the generalized Fisher informations with respect to the space-time variables are
$$FI_1^{(t)}\big(P^{(2)}(t,x)\big) = \frac{\big(1-2\,\mathrm{prod}(t,x)\big)^2}{2t^2\big(1-\mathrm{prod}(t,x)\big)^2}, \qquad FI_1^{(x_k)}\big(P^{(2)}(t,x)\big) = \frac{\big(1-2\,\mathrm{prod}(t,x)\big)^2}{2x_k^2\big(1-\mathrm{prod}(t,x)\big)^2}$$
for $k = 1,2,\dots,d$. Further, the quotients of the Jeffreys divergence by the squared space-time increments are
$$\frac{JD\big(P^{(2)}(t,x),P^{(2)}(s,x)\big)}{|t-s|^2} = \frac{\big(1-\mathrm{prod}(x)(s+t)\big)^2}{2st\big(1-\mathrm{prod}(s,x)\big)\big(1-\mathrm{prod}(t,x)\big)}, \qquad \frac{JD\big(P^{(2)}(t,\tilde x_k),P^{(2)}(t,\tilde y_k)\big)}{|x_k-y_k|^2} = \frac{\Big(1-\frac{\mathrm{prod}(t,\tilde x_k)}{x_k}(x_k+y_k)\Big)^2}{2x_ky_k\big(1-\mathrm{prod}(t,\tilde x_k)\big)\big(1-\mathrm{prod}(t,\tilde y_k)\big)}$$
and the relations between the quotients and the generalized Fisher information are
$$\frac{JD\big(P^{(2)}(t,x),P^{(2)}(s,x)\big)}{|t-s|^2\,FI_1^{(t)}\big(P^{(2)}(t,x)\big)} = \frac{t\big(1-\mathrm{prod}(t,x)\big)\big(1-\mathrm{prod}(x)(s+t)\big)^2}{s\big(1-\mathrm{prod}(s,x)\big)\big(1-2\,\mathrm{prod}(t,x)\big)^2}, \qquad \frac{JD\big(P^{(2)}(t,\tilde x_k),P^{(2)}(t,\tilde y_k)\big)}{|x_k-y_k|^2\,FI_1^{(x_k)}\big(P^{(2)}(t,\tilde x_k)\big)} = \frac{x_k\big(1-\mathrm{prod}(t,\tilde x_k)\big)\Big(1-\frac{\mathrm{prod}(t,\tilde x_k)}{x_k}(x_k+y_k)\Big)^2}{y_k\big(1-\mathrm{prod}(t,\tilde y_k)\big)\big(1-2\,\mathrm{prod}(t,\tilde x_k)\big)^2} \tag{108}$$
for $k = 1,2,\dots,d$. Again, letting $(s,y) \to (t,x)$, both ratios in (108) tend to $1$, which also corroborates Theorem 4.
Next, we consider the Jeffreys divergence of (92) and (97) at the same space-time point. Because of the bounded domain of the Brownian sheet bridge density function, we consider only the space-time region $(t,x) \in [0,1]\times[0,1]^d$.
We can easily get the Jeffreys divergence
$$JD\big(P^{(1)}(t,x),P^{(2)}(t,x)\big) = \frac{1-\mathrm{prod}(t,x)}{2} + \frac{1}{2\big(1-\mathrm{prod}(t,x)\big)} - 1$$
and, from (100), the Fisher divergences
$$FD_{(b_0^{(1)},b_0^{(2)})}\big(P^{(1)}(t,x)\,\|\,P^{(2)}(t,x)\big) = \mathrm{prod}(x) - \frac{\mathrm{prod}(x)}{\big(1-\mathrm{prod}(t,x)\big)^2}, \qquad FD_{(b_k^{(1)},b_k^{(2)})}\big(P^{(1)}(t,x)\,\|\,P^{(2)}(t,x)\big) = \frac{\mathrm{prod}(t,x)}{x_k} - \frac{\mathrm{prod}(t,x)}{x_k\big(1-\mathrm{prod}(t,x)\big)^2}$$
with the remainder terms $R_0\big(P^{(1)}(t,x)\,\|\,P^{(2)}(t,x)\big) = R_k\big(P^{(1)}(t,x)\,\|\,P^{(2)}(t,x)\big) = 0$, $k = 1,2,\dots,d$, since the diffusion coefficients are independent of $u$ and the drifts vanish. Furthermore, we get the generalized De Bruijn identities
$$\frac{\partial}{\partial t}JD\big(P^{(1)}(t,x),P^{(2)}(t,x)\big) = -\frac12 FD_{(b_0^{(1)},b_0^{(2)})}\big(P^{(1)}(t,x)\,\|\,P^{(2)}(t,x)\big), \qquad \frac{\partial}{\partial x_k}JD\big(P^{(1)}(t,x),P^{(2)}(t,x)\big) = -\frac12 FD_{(b_k^{(1)},b_k^{(2)})}\big(P^{(1)}(t,x)\,\|\,P^{(2)}(t,x)\big), \qquad k = 1,2,\dots,d$$
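A small numerical sketch of the first identity, for $d = 1$ where $\mathrm{prod}(t,x) = tx$ (the grid point and step size below are our own choices):

```python
import numpy as np

# Sketch, d = 1: prod(t, x) = t * x. Check d/dt JD(P1, P2) = -FD/2 with
# JD = (1 - v)/2 + 1/(2*(1 - v)) - 1 and FD = prod(x) - prod(x)/(1 - v)^2,
# where v = prod(t, x); the remainder terms vanish here.
x = 0.6

def jd(t):
    v = t * x
    return (1 - v) / 2 + 1 / (2 * (1 - v)) - 1

def fd(t):
    v = t * x
    return x - x / (1 - v) ** 2

t, dt = 0.5, 1e-6
lhs = (jd(t + dt) - jd(t - dt)) / (2 * dt)  # d/dt JD, central difference
print(lhs, -0.5 * fd(t))                    # both ~ 0.3122
```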

4.2. A Nontrivial Equation

If we let
$$a_k(u;t,x) = 0, \qquad b_k(u;t,x) = \frac{C_1(t,x)\,C_2^2(t,x)}{4\,C_4(t,x)}\,u^2 > 0, \qquad k = 0,1,2,\dots,d$$
be continuously differentiable coefficient functions, and use the transformation
$$v(u;t,x) = \frac{1}{C_2(t,x)}\log\left(\frac{C_1(t,x)\,C_2^2(t,x)}{4\,C_3(t,x)\,C_4(t,x)}\,u^2\right)$$
then the Fokker-Planck equations can be rewritten as
$$\frac{\partial p}{\partial t} = \frac{C_1(t,x)}{2}\frac{\partial^2 p}{\partial v^2} + C_2(t,x)\frac{\partial p}{\partial v} + \frac{C_1(t,x)\,C_2^2(t,x)}{4}\,p$$
Using the Fourier transform, we get the solutions
$$p(v;t,x) = \frac{e^{\frac14\int_0^t C_1(s,x)C_2^2(s,x)\,ds}}{\sqrt{2\pi\int_0^t C_1(s,x)\,ds}}\exp\left(-\frac{\big(v - v_0(x) + \int_0^t C_2(s,x)\,ds\big)^2}{2\int_0^t C_1(s,x)\,ds}\right), \qquad p(v;t,x) = \frac{e^{\frac14\int_0^{x_k} C_1(t,x)C_2^2(t,x)\,dx_k}}{\sqrt{2\pi\int_0^{x_k} C_1(t,x)\,dx_k}}\exp\left(-\frac{\big(v - v_0(x) + \int_0^{x_k} C_2(t,x)\,dx_k\big)^2}{2\int_0^{x_k} C_1(t,x)\,dx_k}\right)$$
for $k = 1,2,\dots,d$, where $v_0(x) = v(u_0(x))$ is the transform of the initial value. As in Section 4.1, we then need to suppose that
$$\int_0^t C_1(s,x)\,ds = \int_0^{x_k} C_1(t,x)\,dx_k, \qquad \int_0^t C_2(s,x)\,ds = \int_0^{x_k} C_2(t,x)\,dx_k, \qquad \int_0^t C_1(s,x)C_2^2(s,x)\,ds = \int_0^{x_k} C_1(t,x)C_2^2(t,x)\,dx_k$$
for $k = 1,2,\dots,d$; substituting $v = v(u;t,x)$ into $p(v;t,x)$ then gives a unified density function formula.
As an example, let
$$a_k(u;t,x) \equiv 0, \qquad b_k(u;t,x) = b_k(t,x)\,u^2 > 0, \qquad k = 0,1,2,\dots,d$$
and $u_0(x) = 1$. The Fokker-Planck equations are
$$\frac{\partial p^{(3)}}{\partial t} = \frac{b_0(t,x)}{2}u^2\frac{\partial^2 p^{(3)}}{\partial u^2} + 2b_0(t,x)\,u\frac{\partial p^{(3)}}{\partial u} + b_0(t,x)\,p^{(3)}, \qquad \frac{\partial p^{(3)}}{\partial x_k} = \frac{b_k(t,x)}{2}u^2\frac{\partial^2 p^{(3)}}{\partial u^2} + 2b_k(t,x)\,u\frac{\partial p^{(3)}}{\partial u} + b_k(t,x)\,p^{(3)}, \qquad k = 1,2,\dots,d$$
With $v = \log u$, i.e. $u = e^v$, we can obtain
$$\frac{\partial p^{(3)}}{\partial t} = \frac{b_0(t,x)}{2}\frac{\partial^2 p^{(3)}}{\partial v^2} + \frac{3b_0(t,x)}{2}\frac{\partial p^{(3)}}{\partial v} + b_0(t,x)\,p^{(3)}, \qquad \frac{\partial p^{(3)}}{\partial x_k} = \frac{b_k(t,x)}{2}\frac{\partial^2 p^{(3)}}{\partial v^2} + \frac{3b_k(t,x)}{2}\frac{\partial p^{(3)}}{\partial v} + b_k(t,x)\,p^{(3)}, \qquad k = 1,2,\dots,d$$
with the solutions
$$p^{(3)}(v;t,x) = \frac{e^{\int_0^t b_0(s,x)\,ds}}{\sqrt{2\pi\int_0^t b_0(s,x)\,ds}}\exp\left(-\frac{\big(v + \frac32\int_0^t b_0(s,x)\,ds\big)^2}{2\int_0^t b_0(s,x)\,ds}\right), \qquad p^{(3)}(v;t,x) = \frac{e^{\int_0^{x_k} b_k(t,x)\,dx_k}}{\sqrt{2\pi\int_0^{x_k} b_k(t,x)\,dx_k}}\exp\left(-\frac{\big(v + \frac32\int_0^{x_k} b_k(t,x)\,dx_k\big)^2}{2\int_0^{x_k} b_k(t,x)\,dx_k}\right)$$
that is,
$$p^{(3)}(u;t,x) = \frac{e^{\int_0^t b_0(s,x)\,ds}}{\sqrt{2\pi\int_0^t b_0(s,x)\,ds}}\exp\left(-\frac{\big(\log u + \frac32\int_0^t b_0(s,x)\,ds\big)^2}{2\int_0^t b_0(s,x)\,ds}\right), \qquad p^{(3)}(u;t,x) = \frac{e^{\int_0^{x_k} b_k(t,x)\,dx_k}}{\sqrt{2\pi\int_0^{x_k} b_k(t,x)\,dx_k}}\exp\left(-\frac{\big(\log u + \frac32\int_0^{x_k} b_k(t,x)\,dx_k\big)^2}{2\int_0^{x_k} b_k(t,x)\,dx_k}\right)$$
for $k = 1,2,\dots,d$. Supposing again that there is a function $\beta(t,x)$ subject to the total differential
$$d\beta(t,x) = b_0\,dt + b_1\,dx_1 + \dots + b_d\,dx_d$$
the density function becomes
$$p^{(3)}(u;t,x) = \frac{e^{\beta(t,x)}}{\sqrt{2\pi\beta(t,x)}}\exp\left(-\frac{\big(\log u + \frac32\beta(t,x)\big)^2}{2\beta(t,x)}\right) \tag{123}$$
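After the substitution $v = \log u$, the density (123) is a lognormal density; a short sketch (the value $\beta = 0.7$ is an arbitrary test choice) checks that it integrates to one and that its mean equals the initial value $u_0 = 1$:

```python
import numpy as np

# Sketch: check that
# p3(u) = e^beta / sqrt(2*pi*beta) * exp(-(log u + 1.5*beta)^2 / (2*beta))
# is a probability density on (0, inf); it is the lognormal LN(-beta/2, beta),
# so its total mass is 1 and its mean is exp(-beta/2 + beta/2) = 1 = u0.
beta = 0.7  # arbitrary test value of beta(t, x)
u = np.linspace(1e-9, 60.0, 2_000_001)
du = u[1] - u[0]
p3 = (np.exp(beta) / np.sqrt(2 * np.pi * beta)
      * np.exp(-(np.log(u) + 1.5 * beta) ** 2 / (2 * beta)))
print(np.sum(p3) * du)       # ~ 1.0  (normalization)
print(np.sum(u * p3) * du)   # ~ 1.0  (mean equals the initial value u0 = 1)
```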
Following the idea of Section 4.1, writing $\beta_3(t,x)$ for $\beta(t,x)$ in (123) gives a density function $p^{(3)}(u;t,x)$; a second choice $\beta_4(t,x)$ gives $p^{(4)}(u;t,x)$ and is used below. For $p^{(3)}$, the Jeffreys divergence at two space-time points is
$$JD\big(P^{(3)}(t,x),P^{(3)}(s,y)\big) = \frac{\beta_3(t,x) + \beta_3(s,y) + 4}{8\beta_3(t,x)\beta_3(s,y)}\big(\beta_3(t,x) - \beta_3(s,y)\big)^2$$
and the generalized Fisher informations are
$$FI_1^{(t)}\big(P^{(3)}(t,x)\big) = \frac{\beta_3(t,x) + 2}{4\beta_3^2(t,x)}\big(b_0^{(3)}(t,x)\big)^2, \qquad FI_1^{(x_k)}\big(P^{(3)}(t,x)\big) = \frac{\beta_3(t,x) + 2}{4\beta_3^2(t,x)}\big(b_k^{(3)}(t,x)\big)^2$$
Then the quotients are
$$\frac{JD\big(P^{(3)}(t,x),P^{(3)}(s,x)\big)}{|t-s|^2} = \frac{\beta_3(t,x) + \beta_3(s,x) + 4}{8\beta_3(t,x)\beta_3(s,x)}\left(\frac{\beta_3(t,x) - \beta_3(s,x)}{t-s}\right)^2, \qquad \frac{JD\big(P^{(3)}(t,\tilde x_k),P^{(3)}(t,\tilde y_k)\big)}{|x_k-y_k|^2} = \frac{\beta_3(t,\tilde x_k) + \beta_3(t,\tilde y_k) + 4}{8\beta_3(t,\tilde x_k)\beta_3(t,\tilde y_k)}\left(\frac{\beta_3(t,\tilde x_k) - \beta_3(t,\tilde y_k)}{x_k-y_k}\right)^2$$
and we easily get the relations
$$\frac{JD\big(P^{(3)}(t,x),P^{(3)}(s,x)\big)}{|t-s|^2\,FI_1^{(t)}\big(P^{(3)}(t,x)\big)} = \frac{\beta_3(t,x)}{\beta_3(s,x)}\cdot\frac{\beta_3(t,x) + \beta_3(s,x) + 4}{2\big(\beta_3(t,x) + 2\big)}\left(\frac{\beta_3(t,x) - \beta_3(s,x)}{b_0^{(3)}(t,x)\,(t-s)}\right)^2, \qquad \frac{JD\big(P^{(3)}(t,\tilde x_k),P^{(3)}(t,\tilde y_k)\big)}{|x_k-y_k|^2\,FI_1^{(x_k)}\big(P^{(3)}(t,\tilde x_k)\big)} = \frac{\beta_3(t,\tilde x_k)}{\beta_3(t,\tilde y_k)}\cdot\frac{\beta_3(t,\tilde x_k) + \beta_3(t,\tilde y_k) + 4}{2\big(\beta_3(t,\tilde x_k) + 2\big)}\left(\frac{\beta_3(t,\tilde x_k) - \beta_3(t,\tilde y_k)}{b_k^{(3)}(t,x)\,(x_k-y_k)}\right)^2 \tag{127}$$
for $k = 1,2,\dots,d$. Letting $(s,y) \to (t,x)$, the difference quotient $(\beta_3(t,x)-\beta_3(s,x))/(t-s)$ tends to $b_0^{(3)}(t,x)$ and every factor in (127) tends to $1$, which again corroborates Theorem 4.
Furthermore, for the two densities $p^{(3)}(u;t,x)$ and $p^{(4)}(u;t,x)$ obtained from $\beta_3(t,x)$ and $\beta_4(t,x)$ in (123), the generalized Fisher divergence at the same space-time point is
$$FD_{(b_k^{(3)},b_k^{(4)})}\big(P^{(3)}(t,x)\,\|\,P^{(4)}(t,x)\big) = \frac{\beta_3 - \beta_4}{4\beta_3^2\beta_4^2}\Big[\big(b_k^{(4)}\beta_3 - b_k^{(3)}\beta_4\big)\big(\beta_3^2 + \beta_4^2 + 4\beta_3 + 4\beta_4\big) - 3\beta_3\beta_4\big(\beta_3 + \beta_4\big)\big(b_k^{(4)} - b_k^{(3)}\big)\Big]$$
where we abbreviate $\beta_i = \beta_i(t,x)$ and $b_k^{(i)} = b_k^{(i)}(t,x)$, with the remainder terms
$$R_k\big(P^{(3)}(t,x)\,\|\,P^{(4)}(t,x)\big) = -\big(b_k^{(3)} - b_k^{(4)}\big)\frac{\big(\beta_3 - \beta_4\big)\big(\beta_3 + \beta_4\big)}{2\beta_3\beta_4}, \qquad k = 0,1,2,\dots,d$$
Then the generalized De Bruijn identities are
$$\frac{\partial}{\partial t}JD\big(P^{(3)}(t,x),P^{(4)}(t,x)\big) = -\frac12 FD_{(b_0^{(3)},b_0^{(4)})}\big(P^{(3)}(t,x)\,\|\,P^{(4)}(t,x)\big) - R_0\big(P^{(3)}(t,x)\,\|\,P^{(4)}(t,x)\big)$$
$$\frac{\partial}{\partial x_k}JD\big(P^{(3)}(t,x),P^{(4)}(t,x)\big) = -\frac12 FD_{(b_k^{(3)},b_k^{(4)})}\big(P^{(3)}(t,x)\,\|\,P^{(4)}(t,x)\big) - R_k\big(P^{(3)}(t,x)\,\|\,P^{(4)}(t,x)\big), \qquad k = 1,2,\dots,d$$

4.3. An Interesting Equation

If we let
$$a_k(u) = -\frac{3D_1^2D_3}{8}\big(u - D_6\big), \qquad b_k(u) = D_1^2D_5^2\left[1 - \frac{D_3}{4D_5^2}\big(u - D_6\big)^2\right] > 0, \qquad k = 0,1,2,\dots,d$$
be continuously differentiable functions, and use the transformation
$$u = \frac{2D_5}{\sqrt{D_3}}\sin\left(\frac{\sqrt{D_3}}{2}v + D_4\right) + D_6$$
where $D_1$ to $D_6$ are nonnegative functions independent of $u$ (we omit the arguments $(t,x)$), then we can rewrite the Fokker-Planck equations as
$$\frac{\partial p}{\partial t} = \frac{D_1^2}{2}\frac{\partial^2 p}{\partial v^2} + \frac{D_1^2D_3}{8}\,p, \qquad \frac{\partial p}{\partial x_k} = \frac{D_1^2}{2}\frac{\partial^2 p}{\partial v^2} + \frac{D_1^2D_3}{8}\,p, \qquad k = 1,2,\dots,d$$
with the solution (for the time equation)
$$p(v;t,x) = \frac{e^{\frac18\int_0^t D_1^2D_3\,ds}}{\sqrt{2\pi\int_0^t D_1^2\,ds}}\exp\left(-\frac{(v - v_0)^2}{2\int_0^t D_1^2\,ds}\right)$$
Recalling Lemma 2, we require
$$\int_{I_u} p(u;t,x)\,du = \frac{D_5\,e^{\frac18\int_0^t D_1^2D_3\,ds}}{\sqrt{2\pi\int_0^t D_1^2\,ds}}\int_{\mathbb{R}}\cos\left(\frac{\sqrt{D_3}}{2}\tilde u + D_4\right)\exp\left(-\frac{(\tilde u - \tilde u_0)^2}{2\int_0^t D_1^2\,ds}\right)d\tilde u = 1 \tag{134}$$
and, using the identity
$$\frac{1}{\sqrt{2\pi H}}\int_{\mathbb{R}}\cos(\omega x + \varphi)\,e^{-\frac{(x-G)^2}{2H}}\,dx = e^{-\frac{\omega^2 H}{2}}\cos(\omega G + \varphi)$$
the integral (134) can be written as
$$D_5\,\exp\left(\frac18\int_0^t D_1^2D_3\,ds - \frac{D_3}{8}\int_0^t D_1^2\,ds\right)\cos\left(\frac{\sqrt{D_3}}{2}\tilde u_0 + D_4\right) = 1$$
where $\tilde u_0 = \frac{2}{\sqrt{D_3}}\arcsin\left(\frac{\sqrt{D_3}}{2D_5}\big(u_0 - D_6\big)\right)$.
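The cosine-Gaussian integral used above is just the real part of the Gaussian characteristic function; a quick numerical sketch (with arbitrarily chosen values of $\omega$, $\varphi$, $G$, $H$):

```python
import numpy as np

# Sketch: verify (1/sqrt(2*pi*H)) * integral of cos(w*x + phi) * exp(-(x-G)^2/(2H))
# equals exp(-w^2 * H / 2) * cos(w*G + phi), i.e. the real part of the
# characteristic function of N(G, H).
w, phi, G, H = 1.3, 0.4, -0.7, 2.0   # arbitrary test parameters
x = np.linspace(G - 40 * np.sqrt(H), G + 40 * np.sqrt(H), 4_000_001)
dx = x[1] - x[0]
lhs = (np.sum(np.cos(w * x + phi) * np.exp(-(x - G) ** 2 / (2 * H))) * dx
       / np.sqrt(2 * np.pi * H))
rhs = np.exp(-w ** 2 * H / 2) * np.cos(w * G + phi)
print(lhs, rhs)   # agree to high precision
```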
Here we give a simple example. Let $u \in [-1,1]$,
$$a_k(u;t,x) = -\frac32 b_k(t,x)\,u, \qquad b_k(u;t,x) = b_k(t,x)\big(1 - u^2\big), \qquad k = 0,1,2,\dots,d$$
where $b_k(t,x) > 0$. The Fokker-Planck equations take the form
$$\frac{\partial p}{\partial t} = \frac{b_0(t,x)}{2}\frac{\partial^2}{\partial u^2}\big[(1-u^2)p\big] + \frac{3b_0(t,x)}{2}\frac{\partial}{\partial u}(up), \qquad \frac{\partial p}{\partial x_k} = \frac{b_k(t,x)}{2}\frac{\partial^2}{\partial u^2}\big[(1-u^2)p\big] + \frac{3b_k(t,x)}{2}\frac{\partial}{\partial u}(up), \qquad k = 1,2,\dots,d$$
with initial value $u_0(x) = 0$.
Using the substitution $u = \sin v$, we can rewrite the Fokker-Planck equations as
$$\frac{\partial p}{\partial t} = \frac{b_0(t,x)}{2}\frac{\partial^2 p}{\partial v^2} + \frac{b_0(t,x)}{2}\,p, \qquad \frac{\partial p}{\partial x_k} = \frac{b_k(t,x)}{2}\frac{\partial^2 p}{\partial v^2} + \frac{b_k(t,x)}{2}\,p, \qquad k = 1,2,\dots,d$$
with the solutions
$$p(v;t,x) = \frac{e^{\frac12\int_0^t b_0(s,x)\,ds}}{\sqrt{2\pi\int_0^t b_0(s,x)\,ds}}\,e^{-\frac{v^2}{2\int_0^t b_0(s,x)\,ds}}, \qquad p(v;t,x) = \frac{e^{\frac12\int_0^{x_k} b_k(t,x)\,dx_k}}{\sqrt{2\pi\int_0^{x_k} b_k(t,x)\,dx_k}}\,e^{-\frac{v^2}{2\int_0^{x_k} b_k(t,x)\,dx_k}}$$
under the variable substitution $u = \sin v$.
Similarly, we suppose that there is a function $\beta(t,x)$ subject to the total differential
$$d\beta(t,x) = b_0\,dt + b_1\,dx_1 + \dots + b_d\,dx_d$$
so that the density is
$$p(v;t,x) = \frac{e^{\frac{\beta(t,x)}{2}}}{\sqrt{2\pi\beta(t,x)}}\,e^{-\frac{v^2}{2\beta(t,x)}}, \qquad u = \sin v \tag{142}$$
From the density function (142), choosing $\beta_5(t,x)$ and $\beta_6(t,x)$ for $\beta(t,x)$ gives density functions $p^{(5)}(u;t,x)$ and $p^{(6)}(u;t,x)$. For $p^{(5)}$, the Jeffreys divergence and the generalized Fisher informations are
$$JD\big(P^{(5)}(t,x),P^{(5)}(s,y)\big) = \frac{1 - \beta_5(t,x) - \beta_5(s,y)}{2\beta_5(t,x)\beta_5(s,y)}\big(\beta_5(t,x) - \beta_5(s,y)\big)^2$$
and
$$FI_1^{(t)}\big(P^{(5)}(t,x)\big) = \frac{1 - 2\beta_5(t,x)}{2\beta_5^2(t,x)}\big(b_0^{(5)}(t,x)\big)^2, \qquad FI_1^{(x_k)}\big(P^{(5)}(t,x)\big) = \frac{1 - 2\beta_5(t,x)}{2\beta_5^2(t,x)}\big(b_k^{(5)}(t,x)\big)^2$$
Then the quotients are
$$\frac{JD\big(P^{(5)}(t,x),P^{(5)}(s,x)\big)}{|t-s|^2} = \frac{1 - \beta_5(t,x) - \beta_5(s,x)}{2\beta_5(t,x)\beta_5(s,x)}\left(\frac{\beta_5(t,x) - \beta_5(s,x)}{t-s}\right)^2, \qquad \frac{JD\big(P^{(5)}(t,\tilde x_k),P^{(5)}(t,\tilde y_k)\big)}{|x_k-y_k|^2} = \frac{1 - \beta_5(t,\tilde x_k) - \beta_5(t,\tilde y_k)}{2\beta_5(t,\tilde x_k)\beta_5(t,\tilde y_k)}\left(\frac{\beta_5(t,\tilde x_k) - \beta_5(t,\tilde y_k)}{x_k-y_k}\right)^2$$
Obviously, we easily get
$$\frac{JD\big(P^{(5)}(t,x),P^{(5)}(s,x)\big)}{|t-s|^2\,FI_1^{(t)}\big(P^{(5)}(t,x)\big)} = \frac{1 - \beta_5(t,x) - \beta_5(s,x)}{1 - 2\beta_5(t,x)}\cdot\frac{\beta_5(t,x)}{\beta_5(s,x)}\left(\frac{\beta_5(t,x) - \beta_5(s,x)}{b_0^{(5)}(t,x)\,(t-s)}\right)^2, \qquad \frac{JD\big(P^{(5)}(t,\tilde x_k),P^{(5)}(t,\tilde y_k)\big)}{|x_k-y_k|^2\,FI_1^{(x_k)}\big(P^{(5)}(t,\tilde x_k)\big)} = \frac{1 - \beta_5(t,\tilde x_k) - \beta_5(t,\tilde y_k)}{1 - 2\beta_5(t,\tilde x_k)}\cdot\frac{\beta_5(t,\tilde x_k)}{\beta_5(t,\tilde y_k)}\left(\frac{\beta_5(t,\tilde x_k) - \beta_5(t,\tilde y_k)}{b_k^{(5)}(t,x)\,(x_k-y_k)}\right)^2 \tag{146}$$
for $k = 1,2,\dots,d$. Letting $(s,y) \to (t,x)$, each factor in (146) tends to $1$, which corroborates Theorem 4.
Furthermore, for the two densities $p^{(5)}(u;t,x)$ and $p^{(6)}(u;t,x)$ obtained from $\beta_5(t,x)$ and $\beta_6(t,x)$ in (142), the generalized Fisher divergence at the same space-time point is
$$FD_{(b_k^{(5)},b_k^{(6)})}\big(P^{(5)}(t,x)\,\|\,P^{(6)}(t,x)\big) = \frac{\big(\beta_5 - \beta_6\big)\big(b_k^{(6)}\beta_5 - b_k^{(5)}\beta_6\big)}{\beta_5^2\beta_6^2}\Big[\beta_5\big(1 - \beta_5\big) + \beta_6\big(1 - \beta_6\big)\Big]$$
where $\beta_i = \beta_i(t,x)$ and $b_k^{(i)} = b_k^{(i)}(t,x)$, with the remainder terms
$$R_k\big(P^{(5)}(t,x)\,\|\,P^{(6)}(t,x)\big) = \big(b_k^{(5)} - b_k^{(6)}\big)\frac{\big(\beta_5 - \beta_6\big)\big(\beta_5 + \beta_6\big)}{2\beta_5\beta_6}, \qquad k = 0,1,2,\dots,d$$
Then the generalized De Bruijn identities are
$$\frac{\partial}{\partial t}JD\big(P^{(5)}(t,x),P^{(6)}(t,x)\big) = -\frac12 FD_{(b_0^{(5)},b_0^{(6)})}\big(P^{(5)}(t,x)\,\|\,P^{(6)}(t,x)\big) - R_0\big(P^{(5)}(t,x)\,\|\,P^{(6)}(t,x)\big)$$
$$\frac{\partial}{\partial x_k}JD\big(P^{(5)}(t,x),P^{(6)}(t,x)\big) = -\frac12 FD_{(b_k^{(5)},b_k^{(6)})}\big(P^{(5)}(t,x)\,\|\,P^{(6)}(t,x)\big) - R_k\big(P^{(5)}(t,x)\,\|\,P^{(6)}(t,x)\big), \qquad k = 1,2,\dots,d$$

5. Conclusions

In this paper, we generalize the classical definitions of entropy divergence and Fisher information and obtain these measures for space-time random fields. In addition, we derive the Fokker-Planck equations (55) for space-time random fields and obtain their density functions. Moreover, we study the Jeffreys divergence of a space-time random field at different space-time positions and show that the ratio of the Jeffreys divergence to the squared space-time coordinate difference approximates the generalized Fisher information (68). Further, using the Jeffreys divergence of two space-time random fields governed by Fokker-Planck equations of the same type but with different parameters, we obtain the generalized De Bruijn identities (77) relating the Jeffreys divergence of space-time random fields to the generalized Fisher divergence. Finally, we give three examples of Fokker-Planck equations together with their solutions, calculate the corresponding Jeffreys divergence, generalized Fisher information and Fisher divergence, and obtain the De Bruijn identities. These results encourage further research into entropy divergences of space-time random fields, advancing the study of information entropy, Fisher information, and De Bruijn identities.

Acknowledgments

The author would like to thank Prof. Pingyi Fan for providing relevant references and helpful discussions on topics related to this work.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
KL Kullback-Leibler divergence
FI Fisher information
CFI Cross Fisher information
FD Fisher divergence
sFD symmetric Fisher divergence
JD Jeffreys divergence

References

  1. Risken, H. The Fokker-Planck equation: methods of solution and applications; Springer Heidelberg: Berlin, 1984. [Google Scholar]
  2. Stam, A. J. Some inequalities satisfied by the quantities of information of Fisher and Shannon. Inf. Control 1959, 2, 101–112. [Google Scholar] [CrossRef]
  3. Barron, A. R. Entropy and the central limit theorem. Ann. Probab. 1986, 14, 336–342. [Google Scholar] [CrossRef]
  4. Johnson, O. Information theory and the central limit theorem; Imperial college Press: London, U.K., 2004. [Google Scholar]
  5. Guo, D. Relative entropy and score function: New information estimation relationships through arbitrary additive perturbation. Proc. IEEE Int. Symp. Inf. Theory, Seoul, South Korea, Jun./Jul. 2009; pp. 814–818.
  6. Toranzo, I. V.; Zozor, S.; Brossier, J-M. Generalization of the De Bruijn Identity to General ϕ-Entropies and ϕ-Fisher Informations. IEEE Trans. Inform. Theory 2018, 64, 6743–6758. [Google Scholar] [CrossRef]
  7. Kharazmi, O.; Balakrishnan, N. Cumulative residual and relative cumulative residual Fisher information and their properties. IEEE Trans. Inform. Theory 2021, 67, 6306–6312. [Google Scholar] [CrossRef]
  8. Kolmogorov, A. N. The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers. Dokl. Akad. Nauk SSSR 1941, 30, 299–303. [Google Scholar]
  9. Kolmogorov, A. N. On the degeneration of isotropic turbulence in an incompressible viscous fluid. Dokl. Akad. Nauk SSSR 1941, 31, 538–542. [Google Scholar]
  10. Kolmogorov, A. N. Dissipation of energy in isotropic turbulence. Dokl. Akad. Nauk SSSR 1941, 32, 19–21. [Google Scholar]
  11. Yaglom, A. M. Some classes of random fields in n-dimensional space, related to stationary random processes. Theory Probab. Its Appl. 1957, 2, 273–320. [Google Scholar] [CrossRef]
  12. Yaglom, A. M. Correlation theory of stationary and related random functions. Volume I: basic results; Springer-Verlag: New York, 1987. [Google Scholar]
  13. Yaglom, A. M. Correlation theory of stationary and related random functions. Volume II: supplementary notes and references; Springer-Verlag: Berlin, 1987. [Google Scholar]
  14. Bowditch, A.; Sun, R. The two-dimensional continuum random field Ising model. Ann. Probab. 2022, 50, 419–454. [Google Scholar] [CrossRef]
  15. Bailleul, I.; Catellier, R.; Delarue, F. Propagation of chaos for mean field rough differential equations. Ann. Probab. 2021, 49, 944–996. [Google Scholar] [CrossRef]
  16. Wu, L.; Samorodnitsky, G. Regularly varying random fields. Stoch. Process Their. Appl. 2020, 130, 4470–4492. [Google Scholar] [CrossRef]
  17. Koch, E.; Dombry, C.; Robert, C. Y. A central limit theorem for functions of stationary max-stable random fields on Rd. Stoch. Process Their. Appl. 2020, 129, 3406–3430. [Google Scholar] [CrossRef]
  18. Ye, Z. On entropy and ε-entropy of random fields. Ph.D.dissertation, Cornell University, 1989. [Google Scholar]
  19. Ye, Z.; Berger, T. A new method to estimate the critical distortion of random fields. IEEE Trans. Inform. Theory 1992, 38, 152–157. [Google Scholar] [CrossRef]
  20. Ye, Z.; Berger, T. Information Measures for Discrete Random Fields; Science Press: Beijing/New York, 1998. [Google Scholar]
  21. Ye, Z.; Yang, W. Random Field: Network Information Theory and Game Theory; Science Press: Beijing, 2023. (in Chinese) [Google Scholar]
  22. Ma, C. Stationary random fields in space and time with rational spectral densities. IEEE Trans. Inform. Theory 2007, 53, 1019–1029. [Google Scholar] [CrossRef]
  23. Hairer, M. A theory of regularity structures. Invent. Math. 2014, 198, 269–504. [Google Scholar] [CrossRef]
  24. Hairer, M. Solving the KPZ equation. Ann. Math. 2013, 178, 559–664. [Google Scholar] [CrossRef]
  25. Kremp, H.; Perkowski, N. Multidimensional SDE with distributional drift and Lévy noise. Bernoulli 2022, 28, 1757–1783. [Google Scholar] [CrossRef]
  26. Beeson, R.; Namachchivaya, N. S.; Perkowski, N. Approximation of the filter equation for multiple timescale, correlated, nonlinear systems. SIAM J. Math. Anal. 2022, 54(3), 3054–3090. [Google Scholar] [CrossRef]
  27. Song, Z.; Zhang, J. A note for estimation about average differential entropy of continuous bounded space-time random field. Chinese J. Electron 2022, 31, 793–803. [Google Scholar] [CrossRef]
  28. Kramers, H. A. Brownian motion in a field of force and the diffusion model of chemical reactions. Physica 1940, 7, 284–304. [Google Scholar] [CrossRef]
  29. Moyal, J. E. Stochastic processes and statistical physics. J R Stat Soc Series B Stat Methodol 1949, 11, 150–210. [Google Scholar] [CrossRef]
  30. Shannon, C. E. A mathematical theory of communication. Bell System Technical Journal 1948, 27, 379–423, 623–656. [Google Scholar] [CrossRef]
  31. Neeser, F. D.; Massey, J. L. Proper complex random processes with applications to information theory. IEEE Trans. Inform. Theory 1991, 39, 1293–1302. [Google Scholar] [CrossRef]
  32. Ihara, S. Information theory-for continuous systems; World Scientific: Singapore, 1993. [Google Scholar]
  33. Gray, R. M. Entropy and information theory; Springer: Boston, 2011. [Google Scholar]
  34. Bach, F. Information Theory With Kernel Methods. IEEE Trans. Inform. Theory 2023, 69, 752–775. [Google Scholar] [CrossRef]
  35. Kullback, S.; Leibler, R. A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar] [CrossRef]
  36. Jeffreys, H. An invariant form for the prior probability in estimation problems. Proc. R. Soc. Lond. A 1946, 186, 453–461. [Google Scholar]
  37. Fuglede, B.; Topsøe, F. Jensen-Shannon divergence and Hilbert space embedding. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Chicago, IL, USA, 27 June-2 July 2004; p. 31. [Google Scholar]
  38. Rényi, A. On measures of entropy and information. Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, 1961, 1, 547–561. [Google Scholar]
  39. She, R.; Fan, P.; Liu, X.-Y.; Wang, X. Interpretable Generative Adversarial Networks With Exponential Function. IEEE Trans. Signal Process. 2021, 69, 3854–3867. [Google Scholar] [CrossRef]
  40. Liu, S.; She, R.; Zhu, Z.; Fan, P. Storage Space Allocation Strategy for Digital Data with Message Importance. Entropy 2020, 22, 591. [Google Scholar] [CrossRef]
  41. She, R.; Liu, S.; Fan, P. Attention to the Variation of Probabilistic Events: Information Processing with Message Importance Measure. Entropy 2019, 21, 439. [Google Scholar] [CrossRef]
  42. Wan, S.; Lu, J.; Fan, P.; Letaief, K.B. Information Theory in Formation Control: An Error Analysis to Multi-Robot Formation. Entropy 2018, 20, 618. [Google Scholar] [CrossRef]
  43. She, R.; Liu, S.; Fan, P. Recognizing Information Feature Variation: Message Importance Transfer Measure and Its Applications in Big Data. Entropy 2018, 20, 401. [Google Scholar] [CrossRef] [PubMed]
  44. Nielsen, F. An Elementary Introduction to Information Geometry. Entropy 2020, 22, 1100. [Google Scholar] [CrossRef] [PubMed]
  45. Nielsen, F. On the Jensen–Shannon Symmetrization of Distances Relying on Abstract Means. Entropy 2019, 21, 485. [Google Scholar] [CrossRef] [PubMed]
  46. Nielsen, F.; Nock, R. Generalizing skew Jensen divergences and Bregman divergences with comparative convexity. IEEE Signal Process. Lett. 2017, 24, 1123–1127. [Google Scholar] [CrossRef]
  47. Furuichi, S.; Minculete, N. Refined Young Inequality and Its Application to Divergences. Entropy 2021, 23, 514. [Google Scholar] [CrossRef]
  48. Pinele, J.; Strapasson, J.E.; Costa, S.I. The Fisher-Rao Distance between Multivariate Normal Distributions: Special Cases, Bounds and Applications. Entropy 2020, 22, 404. [Google Scholar] [CrossRef]
  49. Reverter, F.; Oller, J.M. Computing the Rao distance for Gamma distributions. J. Comput. Appl. Math. 2003, 157, 155–167. [Google Scholar] [CrossRef]
  50. Pawula, R. F. Generalizations and extensions of the Fokker-Planck-Kolmogorov equations. IEEE Trans. Inform. Theory 1967, 13, 33–41. [Google Scholar] [CrossRef]
  51. Pawula, R. F. Approximation of the linear Boltzmann equation by the Fokker-Planck equation. Phys. Rev. 1967, 162, 186–188. [Google Scholar] [CrossRef]
  52. Khoshnevisan, D.; Shi, Z. Brownian sheet and capacity. Ann. Probab. 1999, 27, 1135–1159. [Google Scholar] [CrossRef]
  53. Revuz, D.; Yor, M. Continuous Martingales and Brownian Motion, 2nd ed.; Springer-Verlag: New York, 1999. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.