Preprint
Article

On Properties of Karamata Slowly Varying Functions with Remainder and Their Applications

A peer-reviewed article of this preprint also exists.

Submitted:

30 August 2024

Posted:

02 September 2024

Abstract
In this paper, we study the asymptotic properties of slowly varying functions of one real variable in the sense of Karamata. We establish analogs of the fundamental theorems on uniform convergence and integral representation for slowly varying functions with remainder, depending on the type of remainder. We also prove several important theorems on the asymptotic representation of integrals of Karamata functions. Under certain conditions, we observe a "narrowing" of the classes of slowly varying functions with respect to the type of remainder. At the end of the paper, we discuss possible applications of slowly varying functions in the theory of stochastic branching systems. In particular, under the condition of finiteness of a moment of the type $x\ln x$ for the particle transformation intensity, it is established that the property of slow variation with remainder is implicitly present in the asymptotic structure of a non-critical Markov branching random system.
Keywords: 
Subject: Computer Science and Mathematics - Probability and Statistics

1. Introduction

The concept of regular variation, initiated by the famous Serbian mathematician Jovan Karamata in the early 1930s, is a special one-sided local asymptotic property of functions of a real variable. It arose from the desire to extend the class of functions with power-law asymptotic behavior near some point to a class of functions behaving like a power function multiplied by a factor that changes "more slowly" than the power function. The first book in the mathematical literature specifically devoted to the systematic study of Karamata functions was published by the Australian mathematician E. Seneta [22] in 1976. Detailed material on the application of the theory of regularly varying functions in various areas of mathematics can be found in the monographs [1] and [3]. The possibilities of applying regularly varying functions in the theory of Markov branching random systems were first discussed in the work of V. Zolotarev [24].
A real-valued, positive, and measurable function $L(x)$ is said to be slowly varying (SV) at infinity (in the sense of Karamata) if $L(\lambda x)/L(x) \to 1$ as $x \to \infty$ for arbitrary $\lambda \in \mathbb{R}^{+}$, where $\mathbb{R}^{+}$ denotes the positive semiaxis of real numbers. In what follows, we use the symbol $\mathcal{SV}$ to denote the class of SV-functions at infinity. It is easy to see that if $L(x) \in \mathcal{SV}$, then $L(1/x)$ is slowly varying at zero. Thus, one can define the notion of slow variation at any finite point by shifting the origin to that point. In this regard, we will limit ourselves to considering functions from the class $\mathcal{SV}$. The well-known fundamental theorem on integral representation states that any function $L(x) \in \mathcal{SV}$ can be represented in the form
$$L(x) = M(x)\exp\left\{\int_{b}^{x}\frac{\varepsilon(u)}{u}\,du\right\}$$
for some $b \in \mathbb{R}^{+}$, where $M(x)$ is a bounded measurable function defined on the set $[b, \infty)$, so that $|M(x)| \le M \in \mathbb{R}^{+}$. The function $\varepsilon(x)$ is continuous and infinitesimal as $x \to \infty$; it is called the index of variation of $L(x)$. In the special case when $M(x) \equiv \mathrm{const}$, the function $L(x)$ is called normalized. An important property of normalized functions from the class $\mathcal{SV}$ is that, if such a function is differentiable, then
$$\frac{xL'(x)}{L(x)} = \varepsilon(x) \to 0 \quad \text{as } x \to \infty;$$
see ([3] p.15).
A function $R(x)$ is called regularly varying (RV) at infinity with index of regular variation $\rho$ if it can be represented as $R(x) = x^{\rho}L(x)$ for some $L(x) \in \mathcal{SV}$. It follows from the definition of an RV-function that
$$\frac{R(\lambda x)}{R(x)} \to \lambda^{\rho} \quad \text{as } x \to \infty$$
for arbitrary $\lambda \in \mathbb{R}^{+}$; see [22]. Denote by $\mathcal{R}_{\rho}$ the class of RV-functions at infinity with index $\rho$. Then it is obvious that $\mathcal{SV} = \mathcal{R}_{0}$.
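As a quick numerical illustration of these definitions (our own addition, with test functions and sample points chosen arbitrarily), the following Python sketch checks the defining ratios for the slowly varying function $L(x) = \ln x$ and the regularly varying function $R(x) = x^{\rho}\ln x$ with $\rho = 2$.

```python
import math

def L(x):
    # a classical slowly varying function: L(x) = ln(x)
    return math.log(x)

def R(x, rho=2.0):
    # regularly varying with index rho: R(x) = x**rho * L(x)
    return x ** rho * L(x)

lam = 3.0
for x in [1e2, 1e4, 1e6, 1e8]:
    sv_ratio = L(lam * x) / L(x)   # should approach 1
    rv_ratio = R(lam * x) / R(x)   # should approach lam**rho = 9
    print(f"x = {x:.0e}:  L(lam*x)/L(x) = {sv_ratio:.6f},  "
          f"R(lam*x)/R(x) = {rv_ratio:.6f}")
```

The slow convergence of $L(\lambda x)/L(x)$ to $1$ in such a run is exactly the point of the next paragraphs: the rate at which this ratio approaches $1$ is what the remainder $\omega_{\lambda}(x)$ measures.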
In this paper we are interested in one important characteristic of SV-functions, namely the degree of their change within the class $\mathcal{SV}$. By definition, the function $\omega_{\lambda}(x) := L(\lambda x)/L(x) - 1$ is infinitesimal at infinity. Research in recent years has shown that the use of Karamata functions with a prescribed degree of smallness of $\omega_{\lambda}(x)$ allows us to improve known theorems of probability theory and, in particular, to obtain facts about deep properties of stochastic branching systems of various types. In this regard, we refer to the works [10,11,12,13,14], in which essentially unimprovable asymptotic estimates of the survival probabilities of a population of individuals in stochastic branching systems with a reproduction law having a finite moment of order $1 + \nu$, $\nu \in (0, 1)$, were found.
Write
$$\frac{L(\lambda x)}{L(x)} - 1 = \omega_{\lambda}(x), \tag{1.1}$$
where $\omega_{\lambda}(x) \to 0$ as $x \to \infty$. Relation (1.1) shows that the asymptotic properties of a function $L(x) \in \mathcal{SV}$ depend on the rate of decrease of $\omega_{\lambda}(x)$.
In what follows we use the Landau symbols $o$, $O$, $O^{*}$ to compare two functions $f(\cdot)$ and $h(\cdot)$ at a point $x_{0}$ (finite or infinite):
$$f(x) = o\bigl(h(x)\bigr) \quad\text{if}\quad \frac{f(x)}{h(x)} \to 0,$$
$$f(x) = O\bigl(h(x)\bigr) \quad\text{if}\quad \left|\frac{f(x)}{h(x)}\right| \le A,$$
$$f(x) = O^{*}\bigl(h(x)\bigr) \quad\text{if}\quad \frac{f(x)}{h(x)} \to A,$$
as $x \to x_{0}$, where $A$ is a finite positive constant. Also, $f(x) \sim h(x)$ means $f(x)/h(x) \to 1$.
In the monograph ([3] pp.76–77) the functions $L(x) \in \mathcal{SV}$ with a remainder term of the form $\omega_{\lambda}(x) = o\bigl(1/\varphi(x)\bigr)$, where $\varphi(x)$ is an infinitely large function, are considered. It is also proved there that if the function $L(x)$ is continuously differentiable, the function $\tau(x) := \exp\{\varphi(x)\}$ is non-decreasing and $\tau(x) > 1$, then the characteristic representation
$$\frac{xL'(x)}{L(x)} = o\left(\frac{1}{\ln\tau(x)}\right) \quad \text{as } x \to \infty$$
entails the following relation:
$$\omega_{\lambda}(x)\ln\tau(x) \to 0 \quad \text{as } x \to \infty.$$
Further suppose that $r(x)$ is defined for $x \in \mathbb{R}^{+}$, with $r(x) \in \mathbb{R}^{+}$ and $r(x) \to 0$ as $x \to \infty$. A function $L(x) \in \mathcal{SV}$ is called an SV-function with remainder if for all $\lambda \in \mathbb{R}^{+}$ it satisfies one of the following conditions as $x \to \infty$ ([3] p.185):
$[\mathrm{SR}1]$ $\omega_{\lambda}(x) = O\bigl(r(x)\bigr)$;
$[\mathrm{SR}2]$ $\omega_{\lambda}(x) \sim \mathrm{const}(\lambda)\,r(x)$;
$[\mathrm{SR}3]$ $\omega_{\lambda}(x) = o\bigl(r(x)\bigr)$.
Introducing the notation $\psi(x) := \ln L(x)$, the above conditions can be replaced by the following ones:
$[\mathrm{SR}1]$ $\psi(\lambda x) - \psi(x) = O\bigl(r(x)\bigr)$;
$[\mathrm{SR}2]$ $\psi(\lambda x) - \psi(x) \sim \mathrm{const}(\lambda)\,r(x)$;
$[\mathrm{SR}3]$ $\psi(\lambda x) - \psi(x) = o\bigl(r(x)\bigr)$.
The class of SV-functions with remainder $\omega_{\lambda}(x)$ defined by the conditions $[\mathrm{SR}1]$–$[\mathrm{SR}3]$ will be denoted throughout by $\mathcal{SV}(\omega)$.
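For a concrete illustration (our own example, not taken from the cited sources), consider $L(x) = C\bigl(1 + x^{-\sigma}\bigr)$ with $C, \sigma \in \mathbb{R}^{+}$. A direct computation gives $\omega_{\lambda}(x) = (\lambda^{-\sigma} - 1)\,x^{-\sigma}\bigl(1 + o(1)\bigr)$, so condition $[\mathrm{SR}2]$ holds with $r(x) = x^{-\sigma}$. The Python sketch below verifies numerically that $x^{\sigma}\omega_{\lambda}(x)$ approaches the constant $\lambda^{-\sigma} - 1$.

```python
C, sigma, lam = 2.0, 0.5, 3.0

def L(x):
    # an SV-function with remainder: L(x) = C * (1 + x**(-sigma))
    return C * (1.0 + x ** (-sigma))

for x in [1e2, 1e4, 1e6, 1e8]:
    omega = L(lam * x) / L(x) - 1.0        # remainder omega_lambda(x)
    print(f"x = {x:.0e}:  x**sigma * omega = {x ** sigma * omega:+.6f}  "
          f"(limit = {lam ** (-sigma) - 1.0:+.6f})")
```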
In the next section, we study the asymptotic properties of functions from the class $\mathcal{SV}(\omega)$. We establish analogs of the fundamental theorems on uniform convergence and integral representation, as well as some important asymptotic formulas for improper integrals of RV-functions. In the third section, we briefly discuss the presence of the slow variation property in the asymptotic structure of continuous-time Markov branching random systems.

2. Main Theorems

We begin by presenting the following theorems, which are important generalizations of the fundamental theorems from ([22] Ch.1, §2) on uniform convergence and on the integral representation for functions $L(x) \in \mathcal{SV}(\omega)$.
From now on, we use the Landau symbols only for asymptotics as $x \to \infty$, unless otherwise stated.
Theorem 1.
Let $L(x) \in \mathcal{SV}(\omega)$. Then for any $a, b \in \mathbb{R}^{+}$ the conditions $[\mathrm{SR}1]$–$[\mathrm{SR}3]$ are satisfied uniformly for all $\lambda \in [a, b]$:
$$\sup\bigl\{|\omega_{\lambda}(x)| : \lambda \in [a, b]\bigr\} = \begin{cases} O\bigl(r(x)\bigr) & \text{for } [\mathrm{SR}1], \\ O^{*}\bigl(r(x)\bigr) & \text{for } [\mathrm{SR}2], \\ o\bigl(r(x)\bigr) & \text{for } [\mathrm{SR}3]. \end{cases}$$
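As an informal numerical check of Theorem 1 (an illustration we add here, with an arbitrarily chosen test function), the sketch below evaluates $\sup\{|\omega_{\lambda}(x)| : \lambda \in [1, 5]\}$ on a grid of $\lambda$ for $L(x) = 2(1 + x^{-1/2})$ and compares it with $r(x) = x^{-1/2}$; the ratio stabilizes, in line with the uniform estimate $O^{*}(r(x))$ for condition $[\mathrm{SR}2]$.

```python
sigma = 0.5

def L(x):
    # test SV-function with remainder r(x) = x**(-sigma)
    return 2.0 * (1.0 + x ** (-sigma))

lambdas = [1.0 + 4.0 * k / 100 for k in range(101)]   # grid over [1, 5]

for x in [1e2, 1e4, 1e6]:
    sup_omega = max(abs(L(lam * x) / L(x) - 1.0) for lam in lambdas)
    print(f"x = {x:.0e}:  sup|omega| = {sup_omega:.3e},  "
          f"r(x) = {x ** (-sigma):.3e},  ratio = {sup_omega / x ** (-sigma):.4f}")
```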
Theorem 2.
The function $L(x)$ defined for $x \in [a, \infty) \subset \mathbb{R}^{+}$ belongs to the class $\mathcal{SV}(\omega)$ if and only if for some $b \in [a, \infty)$ the following integral representation holds:
$$\ln L(x) = \eta(x) + \int_{b}^{x}\frac{\varepsilon(u)}{u}\,du, \tag{2.1}$$
where $\eta(x)$ is a measurable function on the set $[b, \infty)$ such that $\eta(x) \to C_{\eta}$, $|C_{\eta}| < \infty$, and $\varepsilon(x)$ is a continuous function on $[b, \infty)$ such that $\varepsilon(x) \to 0$ as $x \to \infty$. Moreover,
$$\eta(x) - C_{\eta},\ \varepsilon(x) = \begin{cases} O\bigl(r(x)\bigr) & \text{for } [\mathrm{SR}1], \\ O^{*}\bigl(r(x)\bigr) & \text{for } [\mathrm{SR}2], \\ o\bigl(r(x)\bigr) & \text{for } [\mathrm{SR}3]. \end{cases} \tag{2.2}$$
The proofs of the last two theorems follow the scheme of the proofs of the similar theorems given in [22] for the case $\omega_{\lambda}(x) = O\bigl(1/\varphi(x)\bigr)$ with infinitely large $\varphi(x)$ as $x \to \infty$; therefore, we do not dwell on the details. The advantage of these results over the similar theorems for functions from the class $\mathcal{SV}$ is that the first theorem estimates the degree of uniform smallness of the remainder $\omega_{\lambda}(x)$, while the second estimates, in particular, the index of variation $\varepsilon(x)$. Thus, the behavior of the functions $\eta(x)$ and $\varepsilon(x)$ depends on the degree of smallness of the remainder $\omega_{\lambda}(x)$ as $x \to \infty$.
Next, we prove the following theorem on the asymptotic behavior of functions from the class $\mathcal{SV}(\omega)$.
Theorem 3.
Each function $L(x) \in \mathcal{SV}(\omega)$ satisfying the condition $[\mathrm{SR}1]$ with $r(x) = O\bigl(L(x)/x^{\sigma}\bigr)$, where $\sigma \in \mathbb{R}^{+}$, has a finite limit $C_{L} := \lim_{x \to \infty} L(x)$, and
$$\eta(x) = C_{\eta} + O^{*}\bigl(1/x^{\sigma}\bigr) \quad\text{and}\quad \varepsilon(x) = O^{*}\bigl(1/x^{\sigma}\bigr).$$
Moreover, the following asymptotic representation is valid:
$$L(x) = C_{L} + O^{*}\bigl(1/x^{\sigma}\bigr). \tag{2.3}$$
Conversely, if (2.3) holds for a function $L(x)$ defined on some interval $[a, \infty) \subset \mathbb{R}^{+}$, then $L(x) \in \mathcal{SV}(\omega)$ with remainder term $\omega_{\lambda}(x) = O^{*}\bigl(1/x^{\sigma}\bigr)$.
Proof. 
Under the condition of the theorem, the corresponding statement in (2.2) implies that $\varepsilon(x) = O\bigl(L(x)/x^{\sigma}\bigr)$. Since the number $\sigma \in \mathbb{R}^{+}$ is positive, the properties of SV-functions imply that the improper integral $\int_{b}^{\infty}\varepsilon(u)/u\,du$ converges. Again, from (2.2) we find $\eta(x) = C_{\eta} + O\bigl(L(x)/x^{\sigma}\bigr)$ as $x \to \infty$. Collecting these facts in (2.1), we have
$$\ln L(\infty) = C_{\eta} + \int_{b}^{\infty}\frac{\varepsilon(u)}{u}\,du,$$
which means that $\lim_{x \to \infty} L(x) =: C_{L} < \infty$. It follows that the remainder term
$$\omega_{\lambda}(x) = O^{*}\bigl(1/x^{\sigma}\bigr) \quad \text{as } x \to \infty.$$
Now we prove the formula (2.3). To do this, we write (2.1) in the form
$$L(x) = \exp\Bigl\{C_{\eta} + O^{*}\bigl(1/x^{\sigma}\bigr)\Bigr\}\,L_{0}(x), \tag{2.4}$$
where $L_{0}(x)$ is the normalized SV-function associated with $L(x)$. Next we have
$$L_{0}(\infty) = \exp\left\{\int_{b}^{\infty}\frac{\varepsilon(u)}{u}\,du\right\} =: C_{0} < \infty,$$
which follows from the fact that $L(\infty) =: C_{L} < \infty$. Now we obtain
$$C_{0} - L_{0}(x) = C_{0}\left(1 - \frac{L_{0}(x)}{C_{0}}\right) = C_{0}\left(1 - \exp\left\{-\int_{x}^{\infty}\frac{\varepsilon(u)}{u}\,du\right\}\right).$$
The integral in the last formula tends to zero as the tail of a convergent integral. Then, using the fact that $1 - e^{-u} \sim u$ as $u \to 0$, we obtain the following equalities:
$$C_{0} - L_{0}(x) = O^{*}\left(\int_{x}^{\infty}\frac{du}{u^{1+\sigma}}\right) = O^{*}\bigl(1/x^{\sigma}\bigr).$$
Combining the last relation with formula (2.4) gives
$$L(x) = \exp\Bigl\{C_{\eta} + O^{*}\bigl(1/x^{\sigma}\bigr)\Bigr\}\Bigl(C_{0} + O^{*}\bigl(1/x^{\sigma}\bigr)\Bigr) = C_{0}\,e^{C_{\eta}}\Bigl(1 + O^{*}\bigl(1/x^{\sigma}\bigr)\Bigr).$$
From here, denoting $C_{L} := C_{0}\,e^{C_{\eta}}$, we arrive at (2.3).
The second part of the theorem follows from formula (2.3). Indeed,
$$\frac{L(\lambda x)}{L(x)} = \frac{C_{L} + O^{*}\bigl(1/x^{\sigma}\bigr)}{C_{L} + O^{*}\bigl(1/x^{\sigma}\bigr)} = 1 + O^{*}\bigl(1/x^{\sigma}\bigr),$$
which means that $L(x) \in \mathcal{SV}(\omega)$ with remainder $\omega_{\lambda}(x) = O^{*}\bigl(1/x^{\sigma}\bigr)$.
The theorem is proved completely. □
Remark 1.
The assertion of Theorem 3 entails the following conclusion. If the function $r(x)$ has the order of decrease $r(x) = O\bigl(L(x)/x^{\sigma}\bigr)$ for some $\sigma \in \mathbb{R}^{+}$, then the class of functions with remainders determined by the condition $[\mathrm{SR}1]$ coincides with the class of functions satisfying the condition $[\mathrm{SR}2]$. In this case, the function $x^{\sigma}\omega_{\lambda}(x)$ has a finite limit, which can be written out explicitly, as $x \to \infty$.
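The example below (our own numerical illustration, with test values chosen for the demonstration) checks the converse part of Theorem 3 and Remark 1: for $L(x) = C_{L} + c/x^{\sigma}$ the remainder satisfies $\omega_{\lambda}(x) = O^{*}(1/x^{\sigma})$, and $x^{\sigma}\omega_{\lambda}(x)$ tends to the finite limit $c(\lambda^{-\sigma} - 1)/C_{L}$.

```python
C_L, c, sigma, lam = 5.0, 1.0, 0.75, 2.0

def L(x):
    # L(x) = C_L + c / x**sigma, the form appearing in (2.3)
    return C_L + c * x ** (-sigma)

limit = c * (lam ** (-sigma) - 1.0) / C_L
for x in [1e2, 1e4, 1e6]:
    omega = L(lam * x) / L(x) - 1.0
    print(f"x = {x:.0e}:  x**sigma * omega = {x ** sigma * omega:+.6f}  "
          f"(limit {limit:+.6f})")
```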
We now turn to functions from the class $\mathcal{R}_{\rho}$ with remainder. The following theorem describes an important asymptotic property of such functions.
Theorem 4.
Let $L(x) \in \mathcal{SV}(\omega)$ be a function with remainder term $\omega_{\lambda}(x) = O^{*}\bigl(L(x)/x^{\sigma}\bigr)$ for some $\sigma \in \mathbb{R}^{+}$. If it is differentiable, then for the functions $R(x) = x^{\rho}L(x)$ from the class $\mathcal{R}_{\rho}$ the following asymptotic relation holds:
$$\frac{xR'(x)}{R(x)} = \rho + O^{*}\bigl(1/x^{\sigma}\bigr). \tag{2.5}$$
Proof. 
Due to the differentiability of the function $L(x)$ we have
$$R'(x) = \rho\,\frac{R(x)}{x} + x^{\rho}L'(x). \tag{2.6}$$
In view of (2.1), we write the integral representation
$$R(x) = x^{\rho}\exp\left\{\eta(x) + \int_{b}^{x}\frac{\varepsilon(u)}{u}\,du\right\}, \tag{2.7}$$
where $\eta(x) = C_{\eta} + O^{*}\bigl(1/x^{\sigma}\bigr)$ and $\varepsilon(x) = O^{*}\bigl(1/x^{\sigma}\bigr)$, as proved in Theorem 3. At the same time, from the same representation (2.1) we easily obtain the following relation:
$$L'(x) = L(x)\left(O^{*}\bigl(1/x^{\sigma+1}\bigr) + \frac{\varepsilon(x)}{x}\right) = \frac{L(x)}{x}\,O^{*}\bigl(1/x^{\sigma}\bigr). \tag{2.8}$$
Now, combining the relations (2.6)–(2.8), we obtain
$$R'(x) = \frac{R(x)}{x}\Bigl(\rho + O^{*}\bigl(1/x^{\sigma}\bigr)\Bigr),$$
which is equivalent to relation (2.5). □
The following statement is an analog of Karamata's theorem on the asymptotics of integrals for functions from the class $\mathcal{SV}(\omega)$, in which an estimate of the next term in the value of the integral is established.
Theorem 5.
Let $L(x) \in \mathcal{SV}(\omega)$ be a locally bounded function on the set $[a, \infty) \subset \mathbb{R}^{+}$ with remainder $\omega_{\lambda}(x) = O^{*}\bigl(L(x)/x^{\sigma}\bigr)$ for some $\sigma \in \mathbb{R}^{+}$. Then for all $\alpha > -1$ the following relation holds:
$$\int_{a}^{x}t^{\alpha}L(t)\,dt = \frac{1}{\alpha+1}\,x^{\alpha+1}L(x)\Bigl(1 + O\bigl(1/x^{\beta}\bigr)\Bigr), \tag{2.9}$$
where $\beta = \min\{\sigma,\, \alpha+1\}$.
Proof. 
Denote
$$I(x) := \int_{a}^{x}t^{\alpha}L(t)\,dt.$$
Changing the variable $t = ux$, we have
$$I(x) = x^{\alpha+1}\int_{a/x}^{1}u^{\alpha}L(ux)\,du = x^{\alpha+1}L(x)\int_{a/x}^{1}u^{\alpha}\,du - x^{\alpha+1}L(x)\int_{a/x}^{1}\lambda^{\alpha}\left(1 - \frac{L(\lambda x)}{L(x)}\right)d\lambda.$$
Obviously, $\int_{a/x}^{1}u^{\alpha}\,du = \dfrac{x^{\alpha+1} - a^{\alpha+1}}{(1+\alpha)\,x^{\alpha+1}}$. To estimate the second integral, we easily verify that the function $L(x)$ satisfies the assertions of Theorem 3 with the modulus of the remainder $|\omega_{\lambda}(x)| = \bigl|1 - L(\lambda x)/L(x)\bigr|$ and
$$\frac{L(\lambda x)}{L(x)} - 1 = O^{*}\bigl(1/x^{\sigma}\bigr)$$
for all $\lambda \in (0, 1]$. Then we obtain the following equality:
$$I(x) = \frac{1}{\alpha+1}\,x^{\alpha+1}L(x)\bigl(1 + \tau(x)\bigr), \tag{2.10}$$
where
$$\tau(x) = O\left(\frac{1}{x^{\alpha+1}}\right) + O\left(\frac{1}{x^{\sigma}}\right).$$
To complete the proof, it suffices to bound the rate of decrease of $\tau(x)$ by the order $O\bigl(1/x^{\beta}\bigr)$, where $\beta = \min\{\sigma,\, \alpha+1\}$. Then the representation (2.10) can be written as (2.9).
The theorem is proved. □
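A quick numerical sanity check of Theorem 5 (our own illustration; the integrand and parameters are chosen arbitrarily): for $L(t) = 2 + t^{-1/2}$, $\alpha = 1/2$ and $a = 1$, the ratio of $\int_{a}^{x}t^{\alpha}L(t)\,dt$ to $x^{\alpha+1}L(x)/(\alpha+1)$ should approach $1$ at rate $O(1/x^{\beta})$ with $\beta = \min\{\sigma, \alpha+1\} = 1/2$ (here $\sigma = 1/2$).

```python
sigma, alpha, a = 0.5, 0.5, 1.0

def L(t):
    # test function from SV(omega): L(t) = 2 + t**(-sigma)
    return 2.0 + t ** (-sigma)

def integral(x, n=200_000):
    # composite trapezoidal rule for int_a^x t**alpha * L(t) dt
    h = (x - a) / n
    s = 0.5 * (a ** alpha * L(a) + x ** alpha * L(x))
    for k in range(1, n):
        t = a + k * h
        s += t ** alpha * L(t)
    return s * h

beta = min(sigma, alpha + 1.0)
for x in [1e2, 1e3, 1e4]:
    karamata = x ** (alpha + 1.0) * L(x) / (alpha + 1.0)
    rel_err = integral(x) / karamata - 1.0
    print(f"x = {x:.0e}:  ratio - 1 = {rel_err:+.3e},  "
          f"x**beta * (ratio - 1) = {x ** beta * rel_err:+.4f}")
```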
Now consider the integral $\int_{a}^{x}t^{\alpha}L(t)\,dt$ for $\alpha \in (-\infty, -1)$. Take some normalized function $L_{0}(x) \in \mathcal{SV}(\omega)$ defined on $[a, \infty) \subset \mathbb{R}^{+}$. It is known that such functions admit the integral representation $L_{0}(x) = \exp\bigl\{\int_{b}^{x}\varepsilon(u)/u\,du\bigr\}$ for some $b \in \mathbb{R}^{+}$. We introduce the integral
$$J(x) := \int_{a}^{x}t^{\alpha}L_{0}(t)\,dt$$
and, using integration by parts, write it in the following form:
$$J(x) = -\frac{1}{|\alpha+1|}\Bigl[L_{0}(t)\,t^{\alpha+1}\Bigr]_{a}^{x} + \frac{1}{|\alpha+1|}\int_{a}^{x}t^{\alpha+1}\,dL_{0}(t). \tag{2.11}$$
It is obvious that $dL_{0}(t) = L_{0}(t)\,\varepsilon(t)\,dt/t$. Taking this equality into account, from relation (2.11) we write out
$$\int_{a}^{x}t^{\alpha}L_{0}(t)\left(1 - \frac{1}{|\alpha+1|}\,\varepsilon(t)\right)dt = -\frac{1}{|\alpha+1|}\Bigl[L_{0}(t)\,t^{\alpha+1}\Bigr]_{a}^{x}. \tag{2.12}$$
Now define the function
$$L(x) := L_{0}(x)\left(1 - \frac{1}{|\alpha+1|}\,\varepsilon(x)\right).$$
Then
$$\frac{L_{0}(x)}{L(x)} = 1 + O^{*}\bigl(\varepsilon(x)\bigr) \tag{2.13}$$
and $L(x) \in \mathcal{SV}$. Thus, relations (2.12) and (2.13) lead us to the following conclusion: for every normalized function $L_{0}(x)$ there exists a function $L(x)$ from the class $\mathcal{SV}$ with the property (2.13) such that the following relation holds:
$$\int_{a}^{x}t^{\alpha}L(t)\,dt = \frac{1}{|\alpha+1|}\left(\frac{1}{a^{|\alpha+1|}}\,L_{0}(a) - \frac{1}{x^{|\alpha+1|}}\,L_{0}(x)\right). \tag{2.14}$$
On the other hand, as Theorem 3 states, for any function $\ell(x) \in \mathcal{SV}$ there exists a function $v(x)$ such that $v(x) \to v \in \mathbb{R}^{+}$ as $x \to \infty$ and the following integral representation holds:
$$\ell(x) = v(x)\exp\left\{\int_{b}^{x}\frac{\varepsilon(u)}{u}\,du\right\}.$$
Hence there exists a normalized function $\ell_{0}(x) \in \mathcal{SV}$ such that $\ell(x)/\ell_{0}(x) \to v$ as $x \to \infty$. Therefore, formula (2.14) is true for all functions in the class $\mathcal{SV}$.
We have thus proved the following theorem.
Theorem 6.
For any function $L(x) \in \mathcal{SV}$ defined on the set $[a, \infty) \subset \mathbb{R}^{+}$, the integral
$$I(x) := \int_{a}^{x}t^{\alpha}L(t)\,dt$$
converges as $x \to \infty$ for all $\alpha \in (-\infty, -1)$, and there exists a normalized function $L_{0}(x) \in \mathcal{SV}$ with the property (2.13) such that the value of the integral $I(x)$ is given by formula (2.14).
The next statement follows from Theorem 6.
Corollary 1.
Let the function $L(x) \in \mathcal{SV}(\omega)$ from Theorem 6 satisfy the condition $[\mathrm{SR}1]$ with remainder $\omega_{\lambda}(x) = O\bigl(L(x)/x^{\sigma}\bigr)$ for $\sigma \in \mathbb{R}^{+}$. Then for the tail of the integral $\int_{a}^{\infty}t^{\alpha}L(t)\,dt$ the following asymptotic estimate is true:
$$\int_{x}^{\infty}t^{\alpha}L(t)\,dt = \frac{1}{|\alpha+1|}\,\frac{1}{x^{|\alpha+1|}}\,L(x)\Bigl(1 + O^{*}\bigl(1/x^{\sigma}\bigr)\Bigr).$$
The proof is based on the proof scheme of Theorem 5. □
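To illustrate Corollary 1 numerically (our own sketch, with test parameters chosen for the demonstration): for $L(t) = 2 + t^{-1/2}$ and $\alpha = -3$, the tail $\int_{x}^{\infty}t^{\alpha}L(t)\,dt$ equals $x^{-2} + 0.4\,x^{-2.5}$ in closed form, and its relative deviation from $x^{-|\alpha+1|}L(x)/|\alpha+1|$ decays like $x^{-\sigma}$ with $\sigma = 1/2$.

```python
sigma, alpha = 0.5, -3.0

def L(t):
    # test SV-function with remainder: L(t) = 2 + t**(-sigma)
    return 2.0 + t ** (-sigma)

def tail_exact(x):
    # closed form of int_x^inf t**alpha * (2 + t**(-sigma)) dt for alpha = -3, sigma = 1/2
    return x ** (-2.0) + 0.4 * x ** (-2.5)

for x in [1e2, 1e4, 1e6]:
    approx = L(x) * x ** (alpha + 1.0) / abs(alpha + 1.0)
    rel_err = tail_exact(x) / approx - 1.0
    print(f"x = {x:.0e}:  relative error = {rel_err:+.3e},  "
          f"x**sigma * rel_err = {x ** sigma * rel_err:+.4f}")
```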

3. On Applications of SV-Functions

In conclusion, we state one important application of SV-functions in the theory of branching random systems.
Let $\mathbb{N}$ be the set of natural numbers and $\mathbb{N}_{0} = \{0\} \cup \mathbb{N}$. Denote by $Z(t)$ the population size at time $t \in \mathcal{T} := [0, +\infty)$ in a homogeneous continuous-time Markov branching system with branching intensities $\{a_{j},\ j \in \mathbb{N}_{0}\}$. The family of random variables $\{Z(t),\ t \in \mathcal{T}\}$ forms a homogeneous continuous-time decomposable Markov chain with state space $\mathcal{S}_{0} = \{0\} \cup \mathcal{S}$, where $0$ is the only absorbing state and $\mathcal{S} \subseteq \mathbb{N}$ is the class of all communicating states. We introduce the conditional probability $\mathsf{P}_{i}\{*\} := \mathsf{P}\{*\mid Z(0) = i\}$, given that at the initial moment there are $i \in \mathcal{S}$ particles in the system. The transition probabilities of this chain, $p_{ij}(t) = \mathsf{P}_{i}\{Z(t) = j\}$ for any $i, j \in \mathcal{S}_{0}$, are determined by the $i$-fold convolution of the probabilities $p_{j}(t) := p_{1j}(t)$. The probability generating function (GF)
$$F(t; s) := \sum_{j \in \mathcal{S}_{0}} p_{j}(t)\,s^{j}$$
admits the following local representation:
$$F(\tau; s) = s + f(s)\cdot\tau + o(\tau) \quad\text{as } \tau \downarrow 0$$
for all $s \in [0, 1)$, where
$$f(s) = \sum_{j \in \mathcal{S}_{0}} a_{j}\,s^{j}.$$
The parameter
$$m := \sum_{j \in \mathcal{S}} j\,a_{j} = f'(1)$$
denotes the average intensity of the transformation law of a single particle, which essentially regulates the asymptotic behavior of the trajectories of the system $Z(t)$. The probability of degeneration $q$ of the Markov branching system, defined as the smallest positive root of the equation $f(s) = 0$ on the set $[0, 1]$, is equal to $1$ when $m \le 0$ and less than $1$ when $m > 0$. In this regard, three types of the system $Z(t)$ are distinguished in accordance with its asymptotic behavior: it is called subcritical, critical, and supercritical if $m < 0$, $m = 0$, and $m > 0$, respectively. A detailed description of Markov branching random systems and classical results on the structural and asymptotic properties of these systems are contained in the monographs [1,2,6,7,17,23].
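The following Python sketch (a hypothetical numerical example added here for illustration; the intensities $a_{j}$ are chosen arbitrarily, subject to $a_{1} < 0$ and $\sum_{j} a_{j} = 0$) builds the infinitesimal GF $f(s) = \sum_{j} a_{j}s^{j}$, computes $m = f'(1)$, and finds the probability of degeneration $q$ as the smallest root of $f(s) = 0$ on $[0, 1]$ by bisection.

```python
# Hypothetical branching intensities a_j (j = 0, 1, 2, 3): a_1 < 0 and sum a_j = 0,
# as required for the infinitesimal GF of a continuous-time Markov branching system.
a = {0: 0.2, 1: -1.0, 2: 0.5, 3: 0.3}

def f(s):
    # infinitesimal GF: f(s) = sum_j a_j * s**j
    return sum(a_j * s ** j for j, a_j in a.items())

m = sum(j * a_j for j, a_j in a.items())      # m = f'(1), criticality parameter

# For this supercritical example (m > 0), f(0) = a_0 > 0 while f(s) < 0 just below 1,
# and f changes sign exactly once on (0, 1); bisection locates q there.
lo, hi = 0.0, 1.0 - 1e-12
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if f(mid) > 0.0:
        lo = mid
    else:
        hi = mid
q = 0.5 * (lo + hi)

print(f"m = f'(1) = {m:.3f}  (supercritical, since m > 0)")
print(f"probability of degeneration q = {q:.6f}")   # approx. 0.230 for these a_j
```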
In the papers [5,12,19,20,21] deeper properties of some models of branching systems are studied using elements of the theory of Karamata SV-functions. The main advantage of using SV-functions is that in this context one can bypass the conditions of finiteness of integer-order factorial moments of the particle transformation intensity law. For example, if in the critical case we assume that the function $f(s)$ admits for all $s \in [0, 1)$ the representation
$$f(s) = (1-s)^{1+\nu}\,\mathcal{M}\!\left(\frac{1}{1-s}\right)$$
for $\nu \in [0, 1)$, where $\mathcal{M}(*)$ is slowly varying at infinity, then $f''(1-) = \infty$. The arguments in the works [5,9,10,11,12,13,14,15,16,18], based on this condition, contributed to the refinement of several classical theorems established under the condition of finite variance of the particle transformation intensity law in critical branching systems.
In what follows we consider a non-critical branching system with probability of degeneration $q > 0$ in the case when $p_{0}(t) + p_{1}(t) > 0$ for any $t \in \mathcal{T}$, which corresponds to branching random systems of the Schröder type [18]. By analogy with the class $\mathcal{SV}$, we denote by $\mathcal{SV}_{0}$ the class of SV-functions at zero.
In the paper [12] the following result was established.
Lemma 1.
Let $\beta := \exp\{f'(q)\}$ and $R_{q}(t; s) := 1 - F(t; qs)/q$. There exist functions $\ell(*)$ and $L(*)$ from the class $\mathcal{SV}_{0}$ such that for all $s \in [0, 1)$ the following representation holds:
$$R_{q}(t; s) = (1 - s)\,L_{\beta}(t; 1-s)\,\beta^{t} \tag{3.1}$$
for any $t \in \mathcal{T}$, where
$$L_{\beta}(t; x) = \ell(x)\cdot L\bigl(x\,\ell(x)\,\beta^{t}\bigr)$$
for all $x \in (0, 1]$. Moreover, $\ell(1) = 1$ and $\lim_{x \downarrow 0}\ell(x)L(x) = 1$.
We note now some important consequences of this lemma. Setting $s = 0$ in relation (3.1), we find the probability that the population of the system $\{Z(t),\ t \in \mathcal{T}\}$ is still positive at the moment $t$ but eventually degenerates:
$$\mathsf{P}\{t < H < \infty\} = q\,R_{q}(t; 0) = q\,L_{\beta}(t; 1)\,\beta^{t} = q\,L\bigl(\beta^{t}\bigr)\beta^{t},$$
where the variable $H := \inf\{t \in \mathcal{T} : Z(t) = 0\}$ denotes the moment of degeneration of the branching system initiated by a single founder. If we additionally require that the condition
$$\sum_{j \in \mathcal{S}} a_{j}\,j\ln j < \infty \qquad [x\log x]$$
is satisfied, then $L(u) \to 1/\mu$ as $u \downarrow 0$, where $\mu$ is the mathematical expectation of the invariant distribution of the non-critical Markov branching system. Therefore, we have the following more precise information about the desired probability:
$$\beta^{-t}\cdot\mathsf{P}\{t < H < \infty\} \to \frac{q}{\mu} \quad\text{as } t \to \infty.$$
Since in the subcritical case $q = 1$ and $m = \ln\beta$, we have $\mathsf{P}\{H < \infty\} = 1$, and the probability of survival of the system at the current moment is $\mathsf{P}\{Z(t) > 0\} = R_{1}(t; 0)$. Therefore, it is easy to calculate that the mathematical expectation of the number of particles on the positive trajectories of the subcritical system is a function slowly varying at zero, i.e.
$$\mathsf{E}\bigl[Z(t)\,\big|\,Z(t) > 0\bigr] = \frac{\mathsf{E}Z(t)}{\mathsf{P}\{Z(t) > 0\}} = L_{m}\bigl(e^{mt}\bigr),$$
where $L_{m}(*)\,L(*) = 1$. If we again require the condition $[x\log x]$ to be satisfied, we obtain that $L_{m}(u) \to \mu$ as $u \downarrow 0$. Then the conditional mathematical expectation $\mathsf{E}\bigl[Z(t)\,\big|\,Z(t) > 0\bigr]$ slowly stabilizes with increasing $t$, approaching $\mu$. For a known value of $\mu$, one can write the famous Kolmogorov constant $\mathcal{K}$ from the theory of subcritical Markov branching systems in the form $\mathcal{K} = 1/\mu$. Note that the explicit form of the Kolmogorov constant in the case of branching systems with discrete time and finite variance of the particle transformation law was calculated for the first time in the recent work [8].
According to the definition, for any $\lambda \in \mathbb{R}^{+}$ we can write
$$\frac{L_{m}(\lambda u)}{L_{m}(u)} = 1 + \omega_{\lambda}(u),$$
where $\omega_{\lambda}(u)$ is infinitesimal as $u \downarrow 0$. Then, according to Theorem 3,
$$L_{m}(u) = \mu + \rho(u),$$
where $\rho(u) \to 0$ as $u \downarrow 0$. Indeed, it is not difficult to establish an analog of Theorem 3 for the class of SV-functions at zero. Consequently, by the definition of the class $\mathcal{SV}$, the function $L_{m}(u)$ belongs to the class $\mathcal{SV}_{0}$ with a remainder term $\omega_{\lambda}(u)$ that satisfies one of the following conditions as $u \downarrow 0$:
$[\mathrm{SR}_{0}1]$ $\omega_{\lambda}(u) = O\bigl(\rho(u)\bigr)$;
$[\mathrm{SR}_{0}2]$ $\omega_{\lambda}(u) \sim \mathrm{const}(\lambda)\,\rho(u)$;
$[\mathrm{SR}_{0}3]$ $\omega_{\lambda}(u) = o\bigl(\rho(u)\bigr)$.
In this case, we obtain the following asymptotics for the conditional mathematical expectation of the population size:
$$\mathsf{E}\bigl[Z(t)\,\big|\,Z(t) > 0\bigr] = \frac{1}{\mathcal{K}} + \tau(t),$$
where $\tau(t) \to 0$ as $t \to \infty$. At the same time, we note that if we additionally assume the finiteness of the second infinitesimal moment $f''(1)$, then, following the arguments of the paper [8], we can find an explicit expression for the constant $\mathcal{K}$ and determine the rate of decrease of the remainder $\tau(t)$.
The above Lemma 1 and its corollaries indicate that the property of regular variation is implicitly present in the asymptotic structure of a non-critical Markov branching system. Similar situations are observed in many other mathematical structures; see [3].

4. Conclusion

The main characteristic properties of slowly varying functions in the sense of Karamata are contained in the fundamental theorems: the uniform convergence theorem and the integral representation theorem; see ([22] Ch.1). In this paper, we have studied the class of slowly varying functions with remainder. The concept of slow variation with remainder reveals deeper properties of Karamata functions. The main motivation for this paper is the need for the concept of slow variation with remainder in the theory of branching random systems. Namely, in the work of Imomov and Tukhtaev [9], Karamata functions with remainder were used for the first time, which made it possible to estimate the tails of asymptotic expansions in limit theorems of the theory of discrete branching systems with immigration; see also [10,11,13,14,15]. In this connection, it is important to establish analogs of a number of the main theorems of the theory of Karamata functions for the case of slow variation with remainder.
In total, the article proves six theorems, which are new in the sense that they improve classical results by giving explicit estimates of the tails in asymptotic formulas. We expect these theorems to contribute to the establishment of new results and to the improvement of existing theorems in areas of research where methods of asymptotic analysis are used in combination with the concept of slow variation.

References

  1. Asmussen, S.; Hering, H. Branching Processes. Birkhäuser: Boston, 1983.
  2. Athreya, K. B.; Ney, P. E. Branching Processes. Springer: New York, 1972.
  3. Bingham, N. H.; Goldie, C. M.; Teugels, J. L. Regular Variation. Cambridge University Press: Cambridge, 1987.
  4. Chen, X.; He, H. Lower deviation and moderate deviation probabilities for maximum of a branching random walk. arXiv:1807.08263, 2018.
  5. Formanov, Sh. K.; Azimov, Zh. B. Markov branching processes with regularly varying generating function and immigration of a special form. Theory Probab. Math. Statist. 2002, 65, 181–188.
  6. Haccou, P.; Jagers, P.; Vatutin, V. Branching Processes: Variation, Growth, and Extinction of Populations. Cambridge University Press: Cambridge, 2007.
  7. Harris, T. E. The Theory of Branching Processes. Springer: Berlin, 1963.
  8. Imomov, A. A.; Murtazaev, M. On the Kolmogorov constant explicit form in the theory of discrete-time stochastic branching systems. J. Appl. Probab. To appear.
  9. Imomov, A. A.; Tukhtaev, E. E. On asymptotic structure of critical Galton-Watson branching processes allowing immigration with infinite variance. Stochastic Models 2023, 39, 118–140.
  10. Imomov, A. A.; Meyliev, A. Kh. On the application of slowly varying functions with remainder in the theory of Markov branching processes with mean one and infinite variance. Ukr. Math. J. 2022, 78, 1225–1237.
  11. Imomov, A. A.; Meyliev, A. Kh. On asymptotic structure of continuous-time Markov branching processes allowing immigration without higher-order moments. Ufimsk. Mat. Zh. 2021, 13, 137–147.
  12. Imomov, A. A.; Meyliev, A. Kh. On the asymptotic structure of non-critical Markov stochastic branching processes with continuous time. Vestn. Tomsk. Gos. Univ. Mat. Mekh. 2021, 22–36.
  13. Imomov, A. A. On a limit structure of the Galton-Watson branching processes with regularly varying generating functions. Probab. Math. Statist. 2019, 39, 61–73.
  14. Imomov, A. A.; Tukhtaev, E. E. On application of slowly varying functions with remainder in the theory of Galton-Watson branching process. J. Sib. Fed. Univ. Math. Phys. 2019, 12, 51–57.
  15. Imomov, A. A. On conditioned limit structure of the Markov branching process without finite second moment. Malays. J. Math. Sci. 2017, 11, 393–422.
  16. Imomov, A. A. On long-term behavior of continuous-time Markov branching processes allowing immigration. J. Sib. Fed. Univ. Math. Phys. 2014, 7, 443–454.
  17. Jagers, P. Branching Processes with Biological Applications. John Wiley & Sons: London, 1975.
  18. Pakes, A. G. Critical Markov branching process limit theorems allowing infinite variance. Adv. Appl. Probab. 2010, 42, 460–488.
  19. Pakes, A. G. Revisiting conditional limit theorems for the mortal simple branching process. Bernoulli 1999, 5, 969–998.
  20. Pakes, A. G. Limit theorems for the simple branching process allowing immigration, I. The case of finite offspring mean. Adv. Appl. Probab. 1979, 11, 31–62.
  21. Seneta, E. Regularly varying functions in the theory of simple branching processes. Adv. Appl. Probab. 1974, 6, 408–420.
  22. Seneta, E. Regularly Varying Functions. Springer: Berlin, 1976; Russian translation: Nauka, Moscow, 1985.
  23. Sevastyanov, B. A. Branching Processes. Nauka: Moscow, 1971. (In Russian).
  24. Zolotarev, V. M. More exact statements of several theorems in the theory of branching processes. Theory Probab. Appl. 1957, 2(2), 245–253; Teor. Veroyatnost. i Primenen. 1957, 2(2), 256–266.