Preprint
Article

Randomly Stopped Sums, Minima and Maxima for Heavy-tailed and Light-Tailed Distributions

Submitted: 26 April 2024
Posted: 28 April 2024


A peer-reviewed article of this preprint also exists.

Abstract
This paper investigates the randomly stopped sums, minima and maxima of heavy- and light-tailed random variables. Conditions on the primary random variables, which are independent but in general not identically distributed, and on the counting random variable are given under which the randomly stopped sum, minimum and maximum are heavy- or light-tailed. The results generalize several existing results in the literature. Examples illustrating the results are provided.
Keywords: 
Subject: 
Computer Science and Mathematics  -   Probability and Statistics

1. Introduction

This paper is devoted to the randomly stopped sums, minima and maxima of heavy- and light-tailed random variables (r.v.s). Such objects appear when the number of random variables under consideration is unknown and is described by some random integer. In particular, randomly stopped sums appear in such fields as insurance and financial mathematics, survival analysis, risk theory, computer and communication networks, etc. The theory of randomly stopped sums of heavy-tailed r.v.s has been developing for more than 50 years and covers mainly the case of independent identically distributed (i.i.d.) r.v.s. In this paper we consider the case where the underlying r.v.s are independent but not necessarily identically distributed.
Specifically, suppose that $X_1, X_2, \ldots$ are r.v.s defined on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$. Define the sequence of partial sums $\{S_n,\ n \geq 0\}$ by
$$S_0 := 0, \qquad S_n := X_1 + \cdots + X_n, \quad n \geq 1. \tag{1}$$
The main subject of the paper lies in the study of randomly stopped sums
$$S_\nu := X_1 + \cdots + X_\nu,$$
where $n$ in (1) is replaced by a random variable $\nu$ taking values in $\mathbb{N}_0 := \{0, 1, 2, \ldots\}$. Throughout the paper, we assume that $\nu$ is not degenerate at zero, i.e. $\mathbb{P}(\nu > 0) > 0$. We will call such a $\nu$ a counting random variable.
Further, we assume that the r.v.s $X_1, X_2, \ldots$ are independent and that the counting r.v. $\nu$ is independent of the sequence $\{X_1, X_2, \ldots\}$. In general, the r.v.s $X_1, X_2, \ldots$ need not be identically distributed; each $X_k$ has a distribution function (d.f.) $F_{X_k}(x) = \mathbb{P}(X_k \leq x)$. Consider the d.f.
$$F_{S_\nu}(x) = \mathbb{P}(S_\nu \leq x) = \sum_{n=0}^{\infty} \mathbb{P}(S_n \leq x)\,\mathbb{P}(\nu = n).$$
The main task of the paper is to give conditions guaranteeing that $F_{S_\nu}$ is heavy- or light-tailed, provided that some of the d.f.s $F_{X_k}$ or $F_\nu$ are heavy- or light-tailed.
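To make the mixture representation above concrete, the following short Python sketch (illustrative only, not part of the original paper) compares a Monte Carlo estimate of $F_{S_\nu}(x)$ with the series $\sum_{n\geq0}\mathbb{P}(S_n\leq x)\,\mathbb{P}(\nu=n)$ for one assumed choice of ingredients: i.i.d. Exp(1) summands, so that $S_n\sim\mathrm{Gamma}(n,1)$, and a Poisson(2) counting variable.

```python
import numpy as np
from scipy.stats import gamma, poisson

rng = np.random.default_rng(0)
n_sim, x, mean_nu = 200_000, 5.0, 2.0

# Simulate S_nu: nu ~ Poisson(2), X_k i.i.d. Exp(1), so S_n ~ Gamma(n, 1).
nu = rng.poisson(mean_nu, size=n_sim)
s_nu = np.where(nu > 0, rng.gamma(np.maximum(nu, 1), 1.0), 0.0)

# Mixture formula: F_{S_nu}(x) = sum_n P(S_n <= x) P(nu = n), with P(S_0 <= x) = 1 for x >= 0.
n_vals = np.arange(0, 60)
cdf_terms = np.where(n_vals == 0, 1.0, gamma.cdf(x, a=np.maximum(n_vals, 1)))
mixture = np.sum(cdf_terms * poisson.pmf(n_vals, mean_nu))

print(f"Monte Carlo estimate of F_S_nu({x}): {np.mean(s_nu <= x):.4f}")
print(f"Mixture-series value of F_S_nu({x}): {mixture:.4f}")
```

Both numbers agree up to Monte Carlo error, which is exactly the content of the displayed representation.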
Other objects of the paper are the randomly stopped minima and maxima. By the randomly stopped minimum of sums we mean the minimum of the partial sums
$$S_{(\nu)} = \begin{cases} \min\{S_1, \ldots, S_\nu\}, & \nu \geq 1,\\ 0, & \nu = 0,\end{cases}$$
and by the randomly stopped maximum of sums we mean the maximum of the partial sums
$$S^{(\nu)} = \max\{0, S_1, \ldots, S_\nu\}.$$
In addition, we provide some results for the randomly stopped minimum
$$X_{(\nu)} = \begin{cases} \min\{X_1, \ldots, X_\nu\}, & \nu \geq 1,\\ 0, & \nu = 0,\end{cases}$$
and the randomly stopped maximum
$$X^{(\nu)} = \max\{0, X_1, \ldots, X_\nu\}.$$
Similarly, we are interested in conditions under which $F_{X_{(\nu)}}$, $F_{X^{(\nu)}}$, $F_{S_{(\nu)}}$ and $F_{S^{(\nu)}}$ are heavy-tailed or light-tailed. For various distribution classes, similar questions were studied in [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30]. We also mention the paper [31], where two independent heavy-tailed r.v.s whose minimum is not heavy-tailed were constructed.
The structure of the paper is as follows. In Section 2 we introduce heavy- and light-tailed distributions and formulate two auxiliary lemmas. The main results are formulated in Section 3. Some examples of non-standard heavy-tailed and light-tailed distributions are presented in Section 4; the tail heaviness of the distributions presented there is determined on the basis of the statements formulated in Section 3. The proofs of the main results are given in Section 5. The final Section 6 discusses the obtained results in a broader context and highlights directions for future research.

2. Heavy-tailed and light-tailed distributions

For any distribution $F$, define its Laplace-Stieltjes transform as
$$\widehat{F}(\lambda) := \int_{-\infty}^{\infty} e^{\lambda x}\, dF(x), \quad \lambda \in \mathbb{R}.$$
A distribution $F$ is said to be heavy-tailed, denoted $F \in \mathcal{H}$, if
$$\widehat{F}(\lambda) = \infty \ \text{ for any } \lambda > 0.$$
Otherwise, $F$ is said to be light-tailed; in this case we write $F \in \mathcal{H}^c$. Common examples of heavy-tailed distributions are the Pareto, log-normal, Burr, Student's $t$ and Weibull (with shape parameter $\tau \in (0,1)$) distributions. For a detailed exposition of heavy-tailed distributions and their properties we refer to the monographs [32,33,34,35,36,37,38].
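As a quick numerical illustration of this definition (a sketch with assumed example distributions, not material from the paper), the truncated exponential moment $\int_0^M e^{\lambda x}\,dF(x)$ of a Pareto-type law grows without bound as the truncation level $M$ increases, for every $\lambda>0$, while for the exponential law it stabilizes whenever $\lambda$ is smaller than the rate:

```python
import numpy as np

def truncated_moment(pdf, lam, M, n=400_000):
    """Numerically approximate the truncated moment  int_0^M e^{lam*x} f(x) dx."""
    x = np.linspace(0.0, M, n)
    return np.trapz(np.exp(lam * x) * pdf(x), x)

pareto_pdf = lambda x: 3.0 / (1.0 + x) ** 4   # tail (1+x)^{-3}: heavy-tailed
exp_pdf = lambda x: np.exp(-x)                # Exp(1): light-tailed

lam = 0.5
for M in (10, 30, 60, 100):
    print(f"M={M:4d}  Pareto: {truncated_moment(pareto_pdf, lam, M):12.4e}"
          f"   Exp(1): {truncated_moment(exp_pdf, lam, M):10.6f}")
# The Pareto column blows up as M grows (the transform is infinite for every lam > 0),
# while the Exp(1) column settles near 1/(1 - lam) = 2.
```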
We now formulate two lemmas that will be used in the proofs of several main propositions. Although the results of the lemmas are well known and can be found, e.g., in [35,37,38], we provide the proofs for the sake of convenience. The first lemma gives equivalent conditions for a distribution $F$ to be heavy- or light-tailed.
Lemma 1. 
Suppose that $F$ is a d.f. of a real-valued r.v. The following statements are equivalent:
(i) $F$ is heavy-tailed;
(ii) $\limsup_{x\to\infty} e^{\lambda x}\,\overline{F}(x)=\infty$ for any $\lambda>0$;
(iii) $\limsup_{x\to\infty} x^{-1}\log\overline{F}(x)=0$.
Similarly, the following statements are equivalent:
(i') $F$ is light-tailed;
(ii') $\limsup_{x\to\infty} e^{\lambda x}\,\overline{F}(x)<\infty$ for some $\lambda>0$;
(iii') $\limsup_{x\to\infty} x^{-1}\log\overline{F}(x)<0$.
Proof. 
We prove only the first part of the lemma.
(i) ⇒ (iii). Suppose that $\widehat{F}(\lambda)=\infty$ for any $\lambda>0$. Assume, on the contrary, that
$$\limsup_{x\to\infty}\frac{\log\overline{F}(x)}{x}<0.$$
Then there exist constants $c>0$ and $x_c>0$ such that $x^{-1}\log\overline{F}(x)\leq-c$ for $x\geq x_c$ or, equivalently,
$$\overline{F}(x)\leq e^{-cx},\quad x\geq x_c.\tag{2}$$
For any $\delta\in(0,c)$, using (2) and the alternative expectation formula (see [39], for instance), we obtain
$$\int_{[0,\infty)}e^{\delta u}\,dF(u)=1+\delta\int_{0}^{\infty}e^{\delta u}\,\overline{F}(u)\,du
=1+\int_{1}^{e^{\delta x_c}}\overline{F}\big(\delta^{-1}\log u\big)\,du+\int_{e^{\delta x_c}}^{\infty}\overline{F}\big(\delta^{-1}\log u\big)\,du
\leq e^{\delta x_c}+\int_{e^{\delta x_c}}^{\infty}e^{-c\delta^{-1}\log u}\,du
=e^{\delta x_c}+\int_{e^{\delta x_c}}^{\infty}u^{-c/\delta}\,du.$$
Since $c/\delta>1$, the last integral is finite, hence
$$\widehat{F}(\delta)\leq F(0)+\int_{[0,\infty)}e^{\delta u}\,dF(u)<\infty,$$
leading to a contradiction.
(iii) ⇒ (ii). From the condition
$$\limsup_{x\to\infty}x^{-1}\log\overline{F}(x)=0$$
we get that there exists an infinitely increasing sequence $\{x_n\}$ such that
$$\lim_{n\to\infty}x_n^{-1}\log\overline{F}(x_n)=0.$$
For any given $\lambda>0$, this implies that there exists $n_\lambda\geq1$ such that
$$x_n^{-1}\log\overline{F}(x_n)\geq-\lambda/2$$
for all $n\geq n_\lambda$. Equivalently,
$$e^{\lambda x_n}\,\overline{F}(x_n)\geq e^{\lambda x_n/2},\quad n\geq n_\lambda.$$
Hence, $e^{\lambda x_n}\,\overline{F}(x_n)$ tends to infinity as $n\to\infty$, and thus
$$\limsup_{x\to\infty}e^{\lambda x}\,\overline{F}(x)\geq\lim_{n\to\infty}e^{\lambda x_n}\,\overline{F}(x_n)=\infty.$$
Since this holds for any $\lambda>0$, we obtain (ii).
(ii) ⇒ (i). Let
$$\limsup_{x\to\infty}e^{\lambda x}\,\overline{F}(x)=\infty$$
for any $\lambda>0$. For any $x\in\mathbb{R}$ write
$$\int_{-\infty}^{\infty}e^{\lambda u}\,dF(u)\geq\int_{(x,\infty)}e^{\lambda u}\,dF(u)\geq e^{\lambda x}\,\overline{F}(x).$$
Thus,
$$\widehat{F}(\lambda)\geq\limsup_{x\to\infty}e^{\lambda x}\,\overline{F}(x)=\infty\ \text{ for any }\lambda>0.$$
Lemma 1 is proved. □
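The criterion in statement (iii) is easy to check numerically for standard tails. The sketch below (illustrative; the tail functions are assumed examples, not taken from the paper) evaluates $x^{-1}\log\overline{F}(x)$ for two heavy tails and two light tails:

```python
import numpy as np

# log of the tail function, log(bar F(x)), written directly to avoid underflow
log_tails = {
    "Pareto(3)":    lambda x: -3.0 * np.log1p(x),   # heavy-tailed
    "Weibull(0.5)": lambda x: -np.sqrt(x),          # heavy-tailed
    "Exp(1)":       lambda x: -x,                   # light-tailed
    "Weibull(2)":   lambda x: -x ** 2,              # light-tailed
}

for x in (1e1, 1e3, 1e5):
    rates = {name: lt(x) / x for name, lt in log_tails.items()}
    print(x, {name: round(val, 5) for name, val in rates.items()})
# For the two heavy tails the ratio tends to 0, in line with (iii); for the two
# light tails it stays negative and bounded away from 0, in line with (iii').
```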
The next lemma implies that $\mathcal{H}$ and $\mathcal{H}^c$ are closed with respect to weak tail equivalence.
Lemma 2. 
Let $F$ and $G$ be two distributions of real-valued r.v.s.
(i) If $F\in\mathcal{H}$ and
$$\liminf_{x\to\infty}\frac{\overline{G}(x)}{\overline{F}(x)}>0,\tag{3}$$
then $G\in\mathcal{H}$.
(ii) If $F\in\mathcal{H}^c$, and $\overline{G}(x)\leq\widetilde{c}\,\overline{F}(x)$ for some $\widetilde{c}>0$ and all large $x$ $(x>x_{\widetilde{c}})$, then $G\in\mathcal{H}^c$.
Proof. 
Consider part (i). By condition (3) we get that
$$\overline{G}(x)\geq\widehat{c}\,\overline{F}(x)$$
for some $\widehat{c}>0$ and all sufficiently large $x$ $(x>x_{\widehat{c}})$. Therefore,
$$\limsup_{x\to\infty}e^{\lambda x}\,\overline{G}(x)\geq\widehat{c}\,\limsup_{x\to\infty}e^{\lambda x}\,\overline{F}(x)=\infty$$
for any positive $\lambda$, implying $G\in\mathcal{H}$ by Lemma 1 (ii).
The proof of part (ii) can be constructed in a similar way, using Lemma 1 (ii') and showing that
$$\limsup_{x\to\infty}e^{\lambda x}\,\overline{G}(x)<\infty$$
for some $\lambda>0$. Lemma 2 is proved. □

3. Main results

In this section we formulate the main results of the paper. We start with randomly stopped sums. We note that the d.f. $F_{S_\nu}$ can become heavy-tailed either because some element of $\{F_{X_1},F_{X_2},\ldots\}$ has a heavy tail or because the counting random variable $\nu$ has a heavy tail.
Proposition 1. 
Let $X_1,X_2,\ldots$ be independent real-valued r.v.s and let $\nu$ be a counting r.v. independent of the sequence $\{X_1,X_2,\ldots\}$. The distribution $F_{S_\nu}$ is heavy-tailed if at least one of the following conditions is satisfied:
(i) $\inf_{k\geq1}\mathbb{E}\,e^{\lambda X_k}>1$ for any $\lambda>0$, and $F_\nu\in\mathcal{H}$;
(ii) $\inf_{k\geq1}\mathbb{P}(X_k\geq a)=1$ for some $a>0$, and $F_\nu\in\mathcal{H}$;
(iii) $F_{X_\varkappa}\in\mathcal{H}$ for some $\varkappa\geq1$, and $\overline{F_\nu}(x)>0$ for all $x\in\mathbb{R}$;
(iv) $F_{X_\varkappa}\in\mathcal{H}$ for some $1\leq\varkappa\leq\max\{\operatorname{supp}(\nu)\}$, and $\operatorname{supp}(\nu)$ is finite.
The distribution $F_{S_\nu}$ is light-tailed if at least one of the following conditions is satisfied:
(v) $F_{X_1}\in\mathcal{H}^c$, $F_\nu\in\mathcal{H}^c$, $\overline{F_{X_1}}(x)>0$ for all $x\in\mathbb{R}$, and
$$\limsup_{x\to\infty}\sup_{k\geq1}\frac{\overline{F_{X_k}}(x)}{\overline{F_{X_1}}(x)}<\infty;\tag{4}$$
(vi) $\sup_{k\geq1}\mathbb{E}\,e^{\lambda X_k}<\infty$ for some $\lambda>0$, and $F_\nu\in\mathcal{H}^c$.
Our next statement concerns the randomly stopped maximum of r.v.s. We observe that some conditions under which the distribution of the randomly stopped maximum $F_{X^{(\nu)}}$ becomes heavy-tailed are the same as in Proposition 1. Unfortunately, we did not find a way to obtain a heavy-tailed distribution $F_{X^{(\nu)}}$ from light-tailed primary r.v.s $\{X_1,X_2,\ldots\}$.
Proposition 2. 
Let $X_1,X_2,\ldots$ be independent real-valued r.v.s and let $\nu$ be a counting r.v. independent of the sequence $\{X_1,X_2,\ldots\}$.
(i) If $F_{X_\varkappa}\in\mathcal{H}$ for some $\varkappa\geq1$ and $\overline{F_\nu}(x)>0$ for all $x\in\mathbb{R}$, then $F_{X^{(\nu)}}\in\mathcal{H}$;
(ii) If $F_{X_\varkappa}\in\mathcal{H}$ for some $\varkappa\leq\max\{\operatorname{supp}(\nu)\}<\infty$, then $F_{X^{(\nu)}}\in\mathcal{H}$;
(iii) The distribution $F_{X^{(\nu)}}$ belongs to the class $\mathcal{H}^c$ if $F_{X_1}\in\mathcal{H}^c$, $\overline{F_{X_1}}(x)>0$ for all $x\in\mathbb{R}$, $\mathbb{E}\nu<\infty$ and
$$\limsup_{x\to\infty}\sup_{n\geq1}\frac{1}{n}\sum_{k=1}^{n}\frac{\overline{F_{X_k}}(x)}{\overline{F_{X_1}}(x)}<\infty.\tag{5}$$
The statement below concerns the distribution of the randomly stopped minimum of r.v.s. From the formulation below, we observe that the tail of the d.f. $F_{X_{(\nu)}}$ has much less chance of becoming heavy compared with the d.f.s $F_{S_\nu}$ and $F_{X^{(\nu)}}$.
Proposition 3. 
Let $X_1,X_2,\ldots$ be independent real-valued r.v.s and let $\nu$ be a counting r.v. independent of the sequence $\{X_1,X_2,\ldots\}$.
(i) If $F_{X_1}\in\mathcal{H}$ and
$$\liminf_{x\to\infty}\min_{1\leq k\leq\varkappa}\frac{\overline{F_{X_k}}(x)}{\overline{F_{X_1}}(x)}>0$$
for $\varkappa=\min\{\operatorname{supp}(\nu)\setminus\{0\}\}$, then $F_{X_{(\nu)}}\in\mathcal{H}$ and
$$\overline{F_{X_{(\nu)}}}(x)\asymp\overline{F_{X_{(\varkappa)}}}(x),\quad x\to\infty;$$
(ii) If $F_{X_k}\in\mathcal{H}^c$ for $1\leq k\leq\varkappa=\min\{\operatorname{supp}(\nu)\setminus\{0\}\}$, then $F_{X_{(\nu)}}\in\mathcal{H}^c$.
The next two statements concern the tail heaviness of the randomly stopped minimum of sums and the randomly stopped maximum of sums. As can be seen from the formulations below, some of the conditions were already present in the previous statements. However, for the sake of clarity, we present the full statements on the heaviness of $F_{S_{(\nu)}}$ and $F_{S^{(\nu)}}$.
Proposition 4. 
Let $X_1,X_2,\ldots$ be independent real-valued r.v.s and let $\nu$ be a counting r.v. independent of the sequence $\{X_1,X_2,\ldots\}$.
(i) If $F_{X_1}\in\mathcal{H}$ and $\min_{1\leq k\leq\varkappa}\mathbb{P}(X_k\geq0)>0$ for $\varkappa=\min\{\operatorname{supp}(\nu)\setminus\{0\}\}$, then $F_{S_{(\nu)}}\in\mathcal{H}$ and
$$\overline{F_{S_{(\nu)}}}(x)\asymp\overline{F_{X_1}}(x),\quad x\to\infty.\tag{6}$$
(ii) If $F_{X_1}\in\mathcal{H}^c$, then $F_{S_{(\nu)}}\in\mathcal{H}^c$ for any counting r.v. $\nu$.
Proposition 5. 
Let $X_1,X_2,\ldots$ and $\nu$ be r.v.s as in Propositions 1-4. Then $F_{S^{(\nu)}}\in\mathcal{H}$ if at least one of the following conditions is satisfied:
(i) $\inf_{k\geq1}\mathbb{E}\,e^{\lambda X_k}>1$ for all $\lambda>0$, and $F_\nu\in\mathcal{H}$;
(ii) $\inf_{k\geq1}\mathbb{P}(X_k\geq a)=1$ for some $a>0$, and $F_\nu\in\mathcal{H}$;
(iii) $F_{X_1}\in\mathcal{H}$;
(iv) $F_{X_\varkappa}\in\mathcal{H}$ for some $\varkappa\geq1$ in the case of infinite $\operatorname{supp}(\nu)$, or for some $1\leq\varkappa\leq\max\{\operatorname{supp}(\nu)\}$ in the case of finite $\operatorname{supp}(\nu)$.
The distribution $F_{S^{(\nu)}}$ is light-tailed if the following condition is satisfied:
(v) $\sup_{k\geq1}\mathbb{E}\,e^{\lambda X_k}<\infty$ for some $\lambda>0$, and $F_\nu\in\mathcal{H}^c$.
In the i.i.d. case, Proposition 1 immediately implies the following corollaries. Note that the first two corollaries can be found in monograph [35] as Problems 2.12 and 2.13.
Corollary 1. 
Let $X_1,X_2,\ldots$ be i.i.d. real-valued r.v.s with common distribution $F_{X_1}$, and let $\nu$ be a counting r.v. independent of $\{X_1,X_2,\ldots\}$. If $F_{X_1}\in\mathcal{H}^c$ and $F_\nu\in\mathcal{H}^c$, then $F_{S_\nu}\in\mathcal{H}^c$.
Corollary 2. 
Let $X_1,X_2,\ldots$ be i.i.d. nonnegative r.v.s, not degenerate at zero, and let $\nu$ be a counting r.v. independent of $\{X_1,X_2,\ldots\}$. If $F_\nu\in\mathcal{H}$, then $F_{S_\nu}\in\mathcal{H}$.
Corollary 3. 
Let $X_1,X_2,\ldots$ be i.i.d. real-valued r.v.s with common distribution $F_{X_1}$, and let $\nu$ be a counting r.v. independent of $\{X_1,X_2,\ldots\}$. If $F_{X_1}\in\mathcal{H}$, then $F_{S_\nu}\in\mathcal{H}$.
Analogous corollaries can be formulated for randomly stopped minima and maxima.

4. Examples

In this section, we present two examples showing how one can concretely construct heavy-tailed distributions by using the randomly stopped structures above.
Example 1. 
Let $\{X_1,X_2,\ldots\}$ be a sequence of independent r.v.s such that the first member $X_1$ has the Pareto distribution
$$F_{X_1}(x)=\left(1-\frac{1}{(1+x)^{3}}\right)\mathbb{1}_{[0,\infty)}(x),$$
and the other elements of the sequence are identically exponentially distributed:
$$F_{X_k}(x)=\left(1-e^{-x}\right)\mathbb{1}_{[0,\infty)}(x),\quad k\in\{2,3,\ldots\}.$$
According to Proposition 1 (parts (iii) and (iv)) and Proposition 5 (iii), the distributions $F_{S_\nu}$ and $F_{S^{(\nu)}}$ are heavy-tailed for any counting r.v. $\nu$ independent of the sequence $\{X_1,X_2,\ldots\}$. For instance, in the case of the discrete uniform counting r.v. with parameter $N\geq2$, the distributions with the tail
$$\overline{F_{S_\nu}}(x)=\overline{F_{S^{(\nu)}}}(x)=\mathbb{1}_{(-\infty,0)}(x)+\left(\frac{1}{(1+x)^{3}}+\frac{1}{N}\sum_{k=1}^{N-1}\frac{1}{(k-1)!}\int_{0}^{x}\frac{y^{k-1}e^{-y}}{\left(1+(x-y)\right)^{3}}\,dy\right)\mathbb{1}_{[0,\infty)}(x)$$
belong to the class $\mathcal{H}$. Proposition 2 (ii) implies that the distribution $F_{X^{(\nu)}}$ belongs to the class $\mathcal{H}$ for any counting r.v. $\nu$ independent of $\{X_1,X_2,\ldots\}$. Meanwhile, Proposition 3 (i) and Proposition 4 (i) imply that $F_{X_{(\nu)}}$ and $F_{S_{(\nu)}}$ are heavy-tailed for a counting r.v. $\nu$ satisfying $1\in\operatorname{supp}(\nu)$. In the case of the discrete uniform counting r.v. $\nu$ with parameter $N=3$, we have $F_{S_{(\nu)}}=F_{X_1}$, and the distributions with the following tails are heavy-tailed:
$$\overline{F_{X^{(\nu)}}}(x)=\mathbb{1}_{(-\infty,0)}(x)+\left(\frac{1}{(1+x)^{3}}+\left(e^{-x}-\frac{e^{-2x}}{3}\right)\left(1-\frac{1}{(1+x)^{3}}\right)\right)\mathbb{1}_{[0,\infty)}(x),$$
$$\overline{F_{X_{(\nu)}}}(x)=\mathbb{1}_{(-\infty,0)}(x)+\frac{1}{3(1+x)^{3}}\left(1+e^{-x}+e^{-2x}\right)\mathbb{1}_{[0,\infty)}(x).$$
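The two closed-form tails displayed above are easy to sanity-check by simulation. The following Python sketch (illustrative only; it simply re-implements this example with $N=3$) compares Monte Carlo estimates of $\mathbb{P}(X^{(\nu)}>x)$ and $\mathbb{P}(X_{(\nu)}>x)$ with the stated formulas:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sim, N = 1_000_000, 3

nu = rng.integers(1, N + 1, size=n_sim)                  # discrete uniform on {1,...,N}
x1 = (1.0 - rng.random(n_sim)) ** (-1.0 / 3.0) - 1.0     # Pareto: bar F(x) = (1+x)^{-3}
x2 = rng.exponential(1.0, n_sim)                         # Exp(1)
x3 = rng.exponential(1.0, n_sim)                         # Exp(1)
xs = np.column_stack([x1, x2, x3])

used = np.arange(N) < nu[:, None]                        # keep only the first nu coordinates
x_max = np.where(used, xs, -np.inf).max(axis=1)          # X^{(nu)} (all X_k >= 0 here)
x_min = np.where(used, xs, np.inf).min(axis=1)           # X_{(nu)}

for x in (1.0, 2.0, 4.0):
    p, q = (1.0 + x) ** -3, np.exp(-x)
    tail_max = p + (q - q ** 2 / 3.0) * (1.0 - p)        # stated tail of X^{(nu)}
    tail_min = p * (1.0 + q + q ** 2) / 3.0              # stated tail of X_{(nu)}
    print(f"x={x}: max {np.mean(x_max > x):.5f} vs {tail_max:.5f}; "
          f"min {np.mean(x_min > x):.5f} vs {tail_min:.5f}")
```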
Example 2. 
Let $\{X_1,X_2,\ldots\}$ be a sequence of independent r.v.s uniformly distributed on the interval $[0,1]$, i.e.
$$F_{X_k}(x)=x\,\mathbb{1}_{[0,1)}(x)+\mathbb{1}_{[1,\infty)}(x)$$
for each $k\in\mathbb{N}$.
Obviously,
$$\mathbb{E}\,e^{\lambda X_k}=\frac{e^{\lambda}-1}{\lambda}>1$$
for any $\lambda>0$ and all $k\in\mathbb{N}$.
for any λ > 0 and all k N . Therefore, by Proposition 1 (i) and Proposition 5 (i) we get that distributions F S ν and F S ( ν ) are heavy-tailed for an arbitrary heavy tailed counting r.v. ν independent of { X 1 , X 2 , } . Suppose that counting r.v. ν is distributed according to the zeta distribution with parameter 2:
P ( ν = n ) = 1 n 2 1 ζ ( 2 ) , n N ,
where
ζ ( s ) = n = 1 1 n s , s C ,
denotes the Riemann zeta function. Such ν is heavy-tailed. Propositions 1 (i) and 5 (i) imply that distribution
Propositions 1 (i) and 5 (i) imply that the distribution
$$F_{S_\nu}(x)=F_{S^{(\nu)}}(x)=\frac{1}{\zeta(2)}\sum_{n=1}^{\infty}\frac{1}{n^{2}}\,F_{X_1}^{*n}(x)\,\mathbb{1}_{[0,n]}(x)$$
belongs to the class $\mathcal{H}$, where
$$F_{X_1}^{*n}(x)=\frac{1}{n!}\sum_{k=0}^{\lfloor x\rfloor}(-1)^{k}\binom{n}{k}(x-k)^{n}$$
is the well-known Irwin-Hall distribution with parameter $n$; see [40,41] or Section 26.9 in [42]. Meanwhile, Propositions 3 (ii) and 4 (ii) imply that the distributions with tails
$$\overline{F_{S_{(\nu)}}}(x)=\overline{F_{X_1}}(x),\qquad
\overline{F_{X_{(\nu)}}}(x)=\mathbb{1}_{(-\infty,0)}(x)+\frac{1}{\zeta(2)}\sum_{n=1}^{\infty}\frac{1}{n^{2}}\,(1-x)^{n}\,\mathbb{1}_{[0,1)}(x)$$
are light-tailed, despite the fact that the counting r.v. $\nu$, distributed according to the zeta distribution, is heavy-tailed.
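A small numerical check of the last display (an illustration, not part of the paper): sampling $\nu$ from the zeta law with numpy's zipf generator and using the fact that the minimum of $n$ independent uniforms on $[0,1]$ follows a Beta$(1,n)$ law, one can compare the Monte Carlo tail of $X_{(\nu)}$ with the series above.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sim = 1_000_000

nu = rng.zipf(2.0, size=n_sim)     # zeta distribution: P(nu = n) = n^{-2} / zeta(2)
x_min = rng.beta(1.0, nu)          # min of nu i.i.d. U(0,1) variables ~ Beta(1, nu)

zeta2 = np.pi ** 2 / 6.0           # zeta(2)
n_terms = np.arange(1, 5_001)

for x in (0.1, 0.3, 0.6):
    series = np.sum((1.0 - x) ** n_terms / n_terms ** 2) / zeta2
    print(f"x={x}: Monte Carlo {np.mean(x_min > x):.5f}   series {series:.5f}")
# The tail vanishes for x >= 1, so F_{X_(nu)} is light-tailed even though nu is
# heavy-tailed, in line with Proposition 3 (ii).
```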

5. Proofs of the main results

In this section, we present the proofs of all main propositions. We assign a separate subsection to the proof of each proposition.

5.1. Proof of Proposition 1

Proof of part (i). For any $\lambda>0$ and an arbitrary $K\geq1$, we have
$$\mathbb{E}\,e^{\lambda S_\nu}=\mathbb{E}\left(e^{\lambda S_\nu}\sum_{n=0}^{\infty}\mathbb{1}_{\{\nu=n\}}\right)=\mathbb{E}\sum_{n=0}^{\infty}e^{\lambda S_n}\mathbb{1}_{\{\nu=n\}}\geq\mathbb{E}\sum_{n=0}^{K}e^{\lambda S_n}\mathbb{1}_{\{\nu=n\}}=\sum_{n=0}^{K}\mathbb{E}\,e^{\lambda S_n}\,\mathbb{P}(\nu=n).\tag{7}$$
From the condition
$$\inf_{k\geq1}\mathbb{E}\,e^{\lambda X_k}>1$$
we derive that the estimate
$$\min_{1\leq k\leq K}\mathbb{E}\,e^{\lambda X_k}\geq\Delta$$
holds for some $\Delta=\Delta(\lambda)>1$. Therefore, for all $n\in\{1,\ldots,K\}$ we obtain
$$\mathbb{E}\,e^{\lambda S_n}=\prod_{k=1}^{n}\mathbb{E}\,e^{\lambda X_k}\geq\Delta^{n}.\tag{8}$$
This together with (7) implies that
$$\mathbb{E}\,e^{\lambda S_\nu}\geq\sum_{n=0}^{K}\Delta^{n}\,\mathbb{P}(\nu=n).$$
Since $F_\nu\in\mathcal{H}$, we have
$$\sum_{n=0}^{K}\Delta^{n}\,\mathbb{P}(\nu=n)=\mathbb{E}\left(e^{\nu\log\Delta}\mathbb{1}_{\{\nu\leq K\}}\right)\xrightarrow[K\to\infty]{}\infty.$$
Hence, $\mathbb{E}\,e^{\lambda S_\nu}=\infty$, implying $F_{S_\nu}\in\mathcal{H}$ by definition. Part (i) of the proposition is proved. □
Proof of part (ii). Let us fix an arbitrary $\lambda>0$. Due to the conditions of part (ii), for such $\lambda$ we have
$$\inf_{k\geq1}\mathbb{E}\,e^{\lambda X_k}=\inf_{k\geq1}\left(\mathbb{E}\,e^{\lambda X_k}\mathbb{1}_{\{X_k\geq a\}}+\mathbb{E}\,e^{\lambda X_k}\mathbb{1}_{\{X_k<a\}}\right)\geq\inf_{k\geq1}\mathbb{E}\,e^{\lambda X_k}\mathbb{1}_{\{X_k\geq a\}}\geq\inf_{k\geq1}e^{\lambda a}\,\mathbb{P}(X_k\geq a)=e^{\lambda a}>1.$$
Hence the assertion of part (ii) follows from part (i) of the proposition. □
Proof of part (iii). The requirement $\overline{F_\nu}(x)>0$ for all $x\in\mathbb{R}$ implies that the counting r.v. $\nu$ has unbounded support. Thus, we can find $K\geq\varkappa$ such that $\mathbb{P}(\nu=K)>0$. Let $\lambda$ be an arbitrary positive number and $M\geq1$. Then
$$\mathbb{E}\,e^{\lambda S_K}\geq\mathbb{E}\exp\left\{\lambda\sum_{k=1}^{K}X_k\mathbb{1}_{\{X_k\leq M\}}\right\}=\mathbb{E}\,e^{\lambda X_\varkappa\mathbb{1}_{\{X_\varkappa\leq M\}}}\prod_{\substack{k=1\\k\neq\varkappa}}^{K}\mathbb{E}\,e^{\lambda X_k\mathbb{1}_{\{X_k\leq M\}}}\xrightarrow[M\to\infty]{}\infty,$$
because $F_{X_\varkappa}\in\mathcal{H}$ and $\mathbb{E}\,e^{\lambda X_k\mathbb{1}_{\{X_k\leq M\}}}>0$ for each $k\in\{1,\ldots,K\}$. Therefore, $F_{S_K}\in\mathcal{H}$. By representation (7) we get that
$$\mathbb{E}\,e^{\lambda S_\nu}\geq\mathbb{P}(\nu=K)\,\mathbb{E}\,e^{\lambda S_K},$$
implying $F_{S_\nu}\in\mathcal{H}$. This completes the proof of part (iii) of the proposition. □
Proof of part (iv). Let $K$ be such that $\mathbb{P}(\nu=K)>0$ and $\varkappa\leq K$. Clearly, the conditions of part (iv) imply the existence of such a $K$. To finish the proof of this part, it suffices to repeat the arguments of part (iii). □
Proof of part (v). Suppose that $0<\delta\leq\lambda$, where $\lambda>0$ is such that $\mathbb{E}\,e^{\lambda X_1^{+}}<\infty$, with $X_1^{+}:=X_1\mathbb{1}_{\{X_1\geq0\}}$. By the standard representation (7) we have
$$\mathbb{E}\,e^{\delta S_\nu}=\sum_{n=0}^{\infty}\mathbb{E}\,e^{\delta S_n}\,\mathbb{P}(\nu=n)\leq\sum_{n=0}^{\infty}\mathbb{E}\,e^{\delta S_n^{+}}\,\mathbb{P}(\nu=n),\tag{9}$$
where $S_0^{+}=0$ and
$$S_n^{+}=\sum_{k=1}^{n}X_k^{+}=\sum_{k=1}^{n}X_k\mathbb{1}_{\{X_k\geq0\}},\quad n\in\{1,2,\ldots\}.$$
Condition (4) implies
$$\overline{F_{X_k}}(x)\leq c_1\,\overline{F_{X_1}}(x)\tag{10}$$
for some $c_1>0$, all $k\geq1$ and all $x\in\mathbb{R}$. Therefore, by the alternative expectation formula (see, for instance, [39]), we derive from (10) that
$$\mathbb{E}\,e^{\delta X_k^{+}}=1+\delta\int_{0}^{\infty}e^{\delta u}\,\overline{F_{X_k^{+}}}(u)\,du\leq1+\delta c_1\int_{0}^{\infty}e^{\lambda u}\,\overline{F_{X_1}}(u)\,du=1+\frac{\delta}{\lambda}\,c_1\left(\mathbb{E}\,e^{\lambda X_1^{+}}-1\right)=:c_2(\delta)$$
for any $k\geq1$, where $1<c_2(\delta)<\infty$ for $0<\delta\leq\lambda$, and
$$\lim_{\delta\downarrow0}c_2(\delta)=1.$$
Since $X_1^{+},X_2^{+},\ldots$ are independent r.v.s, we obtain
$$\mathbb{E}\,e^{\delta S_n^{+}}=\prod_{k=1}^{n}\mathbb{E}\,e^{\delta X_k^{+}}\leq\left(c_2(\delta)\right)^{n}.$$
Hence, by inequality (9) and the condition $F_\nu\in\mathcal{H}^c$, we derive that
$$\mathbb{E}\,e^{\delta S_\nu}\leq\sum_{n=0}^{\infty}\left(c_2(\delta)\right)^{n}\,\mathbb{P}(\nu=n)=\mathbb{E}\,e^{\nu\log c_2(\delta)}<\infty$$
if $\delta\in(0,\lambda]$ is chosen sufficiently small. This implies that $F_{S_\nu}\in\mathcal{H}^c$. □
Proof of part (vi). The statement of this part can be proved analogously to that of part (v). Namely, the conditions of part (vi) imply that
$$\sup_{k\geq1}\mathbb{E}\,e^{\lambda X_k^{+}}=c_\lambda$$
for some constants $\lambda>0$ and $c_\lambda\geq1$. Therefore, using the alternative expectation formula, we derive
$$\mathbb{E}\,e^{\delta X_k^{+}}=1+\delta\int_{[0,\infty)}e^{\delta u}\,\overline{F_{X_k}}(u)\,du\leq1+\frac{\delta}{\lambda}\,\lambda\int_{[0,\infty)}e^{\lambda u}\,\overline{F_{X_k}}(u)\,du\leq1+\frac{\delta}{\lambda}\left(c_\lambda-1\right)$$
for all $\delta\in(0,\lambda)$ and $k\geq1$. The last estimate and inequality (9) imply that
$$\mathbb{E}\,e^{\delta S_\nu}\leq\sum_{n=0}^{\infty}\prod_{k=1}^{n}\mathbb{E}\,e^{\delta X_k^{+}}\,\mathbb{P}(\nu=n)\leq\mathbb{E}\,e^{\nu\log\left(1+\frac{\delta}{\lambda}(c_\lambda-1)\right)}.$$
If $\delta\in(0,\lambda]$ is sufficiently small, then the last expectation is finite because $F_\nu\in\mathcal{H}^c$. Hence $F_{S_\nu}\in\mathcal{H}^c$ as well. Part (vi) of the proposition is proved. □

5.2. Proof of Proposition 2

Proof of part (i). By the standard representation we have
$$\overline{F_{X^{(\nu)}}}(x)=\sum_{n=1}^{\infty}\mathbb{P}\left(X^{(n)}>x\right)\mathbb{P}(\nu=n)\geq\mathbb{P}\left(X^{(K)}>x\right)\mathbb{P}(\nu=K)\tag{11}$$
for $x>0$ and any $K\geq\varkappa$ such that $\mathbb{P}(\nu=K)>0$. Due to the conditions of part (i), there exists a sequence of numbers $K$ with the above property. Obviously,
$$\mathbb{P}\left(X^{(K)}>x\right)=\mathbb{P}\left(\max\{0,X_1,\ldots,X_K\}>x\right)\geq\mathbb{P}(X_\varkappa>x).\tag{12}$$
Consequently, for an arbitrary $\lambda>0$, we get from (11) and (12) that
$$\limsup_{x\to\infty}e^{\lambda x}\,\overline{F_{X^{(\nu)}}}(x)\geq\mathbb{P}(\nu=K)\limsup_{x\to\infty}e^{\lambda x}\,\overline{F_{X_\varkappa}}(x).$$
The assertion of part (i) now follows by Lemma 1. □
Proof of part (ii). The proof of this part is similar to the proof of part (i), because the conditions of part (ii) imply that there exists at least one $K$ such that $K\geq\varkappa$ and $\mathbb{P}(\nu=K)>0$. □
Proof of part (iii). The standard representation implies that
$$\overline{F_{X^{(\nu)}}}(x)=\sum_{n=1}^{\infty}\mathbb{P}\left(X^{(n)}>x\right)\mathbb{P}(\nu=n)=\sum_{n=1}^{\infty}\mathbb{P}\left(\bigcup_{k=1}^{n}\{X_k>x\}\right)\mathbb{P}(\nu=n)\leq\sum_{n=1}^{\infty}\mathbb{P}(\nu=n)\sum_{k=1}^{n}\overline{F_{X_k}}(x)\tag{13}$$
for positive $x$.
Due to Lemma 1, there is $\lambda>0$ such that
$$\limsup_{x\to\infty}e^{\lambda x}\,\overline{F_{X_1}}(x)<\infty.\tag{14}$$
It follows from the estimate (13) that
$$\limsup_{x\to\infty}e^{\lambda x}\,\overline{F_{X^{(\nu)}}}(x)\leq\limsup_{x\to\infty}e^{\lambda x}\sum_{n=1}^{\infty}\mathbb{P}(\nu=n)\sum_{k=1}^{n}\overline{F_{X_k}}(x).\tag{15}$$
Condition (5) of part (iii) implies that
$$\sum_{k=1}^{n}\overline{F_{X_k}}(x)\leq c_4\,n\,\overline{F_{X_1}}(x)\tag{16}$$
for all $n\geq1$, some $c_4>0$ and all sufficiently large $x$ $(x\geq x_1)$. Therefore, by (15) and (16) we get that
$$\limsup_{x\to\infty}e^{\lambda x}\,\overline{F_{X^{(\nu)}}}(x)\leq c_4\,\mathbb{E}\nu\,\limsup_{x\to\infty}e^{\lambda x}\,\overline{F_{X_1}}(x)<\infty.$$
The assertion of part (iii) now follows by Lemma 1. □

5.3. Proof of Proposition 3

Proof of part (i). By the standard representation we have
$$\overline{F_{X_{(\nu)}}}(x)=\sum_{n=1}^{\infty}\mathbb{P}\left(\min\{X_1,\ldots,X_n\}>x\right)\mathbb{P}(\nu=n)=\sum_{n=1}^{\infty}\mathbb{P}(\nu=n)\prod_{k=1}^{n}\overline{F_{X_k}}(x)$$
$$=\overline{F_{X_{(\varkappa)}}}(x)\,\mathbb{P}(\nu=\varkappa)+\sum_{n=\varkappa+1}^{\infty}\mathbb{P}(\nu=n)\,\overline{F_{X_{(\varkappa)}}}(x)\prod_{k=\varkappa+1}^{n}\overline{F_{X_k}}(x)\leq\overline{F_{X_{(\varkappa)}}}(x)\,\mathbb{P}(\nu=\varkappa)\left(1+\overline{F_{X_{\varkappa+1}}}(x)\,\frac{\mathbb{P}(\nu>\varkappa)}{\mathbb{P}(\nu=\varkappa)}\right),\tag{17}$$
and
$$\overline{F_{X_{(\nu)}}}(x)\geq\overline{F_{X_{(\varkappa)}}}(x)\,\mathbb{P}(\nu=\varkappa)$$
for each positive $x$. In addition, the conditions of part (i) imply that $\overline{F_{X_{(\varkappa)}}}(x)>0$ for all positive $x$. Therefore,
$$\overline{F_{X_{(\nu)}}}(x)\asymp\overline{F_{X_{(\varkappa)}}}(x),\quad x\to\infty.$$
From this, by using Lemma 2, we get that $F_{X_{(\nu)}}\in\mathcal{H}$ if $F_{X_{(\varkappa)}}\in\mathcal{H}$. Hence, to prove the assertion of part (i) it is enough to prove that $F_{X_{(\varkappa)}}\in\mathcal{H}$ for $\varkappa=\min\{\operatorname{supp}(\nu)\setminus\{0\}\}$.
Due to the condition $F_{X_1}\in\mathcal{H}$ and Lemma 1 we have
$$\limsup_{x\to\infty}e^{\lambda x}\,\overline{F_{X_1}}(x)=\infty\tag{18}$$
for an arbitrary $\lambda>0$. The requirement
$$\liminf_{x\to\infty}\min_{1\leq k\leq\varkappa}\frac{\overline{F_{X_k}}(x)}{\overline{F_{X_1}}(x)}>0$$
implies that
$$\overline{F_{X_k}}(x)\geq c_5\,\overline{F_{X_1}}(x)$$
for some positive $c_5$, all sufficiently large $x$ $(x\geq x_2)$ and all $1\leq k\leq\varkappa$. Therefore, for any positive $\lambda$ and large $x$ $(x\geq x_2)$ we obtain
$$e^{\lambda x}\,\overline{F_{X_{(\varkappa)}}}(x)=e^{\lambda x}\prod_{k=1}^{\varkappa}\overline{F_{X_k}}(x)\geq c_5^{\varkappa}\,e^{\lambda x}\left(\overline{F_{X_1}}(x)\right)^{\varkappa}=\left(c_5\,e^{\lambda x/\varkappa}\,\overline{F_{X_1}}(x)\right)^{\varkappa}.$$
By relation (18) we derive that
$$\limsup_{x\to\infty}e^{\lambda x}\,\overline{F_{X_{(\varkappa)}}}(x)=\infty,$$
implying that $F_{X_{(\varkappa)}}\in\mathcal{H}$. Part (i) of the proposition is proved. □
Proof of part (ii). According to the inequality (17) and Lemma 2, $F_{X_{(\nu)}}\in\mathcal{H}^c$ if $F_{X_{(\varkappa)}}\in\mathcal{H}^c$. Since $\varkappa$ is finite, the conditions $F_{X_k}\in\mathcal{H}^c$, $k\in\{1,2,\ldots,\varkappa\}$, and Lemma 1 imply that
$$\limsup_{x\to\infty}e^{\lambda x}\,\overline{F_{X_k}}(x)<\infty\tag{19}$$
for some $\lambda>0$ and each $k\in\{1,2,\ldots,\varkappa\}$. For this $\lambda$ and an arbitrary positive $x$, we have
$$e^{\lambda x}\,\overline{F_{X_{(\varkappa)}}}(x)=\prod_{k=1}^{\varkappa}e^{\lambda x/\varkappa}\,\overline{F_{X_k}}(x).$$
Since $\lambda/\varkappa\leq\lambda$, due to (19),
$$\limsup_{x\to\infty}e^{\lambda x/\varkappa}\,\overline{F_{X_k}}(x)<\infty$$
for each $k\in\{1,2,\ldots,\varkappa\}$. Therefore,
$$\limsup_{x\to\infty}e^{\lambda x}\,\overline{F_{X_{(\varkappa)}}}(x)<\infty,$$
implying that $F_{X_{(\varkappa)}}\in\mathcal{H}^c$ by Lemma 1. Hence $F_{X_{(\nu)}}\in\mathcal{H}^c$ as well, and part (ii) of the proposition is proved. □

5.4. Proof of Proposition 4

Proof of part (i). If $\varkappa=1$, then for $x>0$ we have
$$\overline{F_{S_{(\nu)}}}(x)=\sum_{n\in\operatorname{supp}(\nu)\setminus\{0\}}\overline{F_{S_{(n)}}}(x)\,\mathbb{P}(\nu=n)\geq\overline{F_{S_{(1)}}}(x)\,\mathbb{P}(\nu=1)=\overline{F_{X_1}}(x)\,\mathbb{P}(\nu=1),$$
and
$$\overline{F_{S_{(\nu)}}}(x)=\sum_{n=1}^{\infty}\overline{F_{S_{(n)}}}(x)\,\mathbb{P}(\nu=n)=\sum_{n=1}^{\infty}\mathbb{P}\left(\min\{S_1,\ldots,S_n\}>x\right)\mathbb{P}(\nu=n)=\sum_{n=1}^{\infty}\mathbb{P}\left(\bigcap_{k=1}^{n}\{S_k>x\}\right)\mathbb{P}(\nu=n)\leq\sum_{n=1}^{\infty}\mathbb{P}(S_1>x)\,\mathbb{P}(\nu=n)=\overline{F_{X_1}}(x)\,\mathbb{P}(\nu\geq1).$$
The derived estimates imply the asymptotic relation (6) in the case $\varkappa=1$.
Let us now suppose that $\varkappa>1$. Due to the conditions of part (i),
$$\mathbb{P}(X_k\geq0)\geq c_6$$
for some $c_6>0$ and all $1\leq k\leq\varkappa$. Hence, by the standard decomposition, we get for positive $x$
$$\overline{F_{S_{(\nu)}}}(x)=\sum_{n=1}^{\infty}\overline{F}_{S_{(n)}}(x)\,\mathbb{P}(\nu=n)\geq\overline{F_{S_{(\varkappa)}}}(x)\,\mathbb{P}(\nu=\varkappa)=\mathbb{P}\left(\min\{S_1,\ldots,S_\varkappa\}>x\right)\mathbb{P}(\nu=\varkappa)=\mathbb{P}\left(\bigcap_{k=1}^{\varkappa}\{X_1+\cdots+X_k>x\}\right)\mathbb{P}(\nu=\varkappa)$$
$$\geq\mathbb{P}\left(X_1>x,\,X_2\geq0,\ldots,X_\varkappa\geq0\right)\mathbb{P}(\nu=\varkappa)=\mathbb{P}(X_1>x)\prod_{k=2}^{\varkappa}\mathbb{P}(X_k\geq0)\,\mathbb{P}(\nu=\varkappa)\geq c_6^{\varkappa-1}\,\mathbb{P}(\nu=\varkappa)\,\overline{F_{X_1}}(x).\tag{20}$$
On the other hand, similarly to the case $\varkappa=1$, we have
$$\overline{F_{S_{(\nu)}}}(x)=\sum_{n\in\operatorname{supp}(\nu)\setminus\{0\}}\mathbb{P}\left(\bigcap_{k=1}^{n}\{S_k>x\}\right)\mathbb{P}(\nu=n)\leq\sum_{n\in\operatorname{supp}(\nu)\setminus\{0\}}\mathbb{P}(S_1>x)\,\mathbb{P}(\nu=n)=\overline{F_{X_1}}(x)\,\mathbb{P}(\nu\geq\varkappa).\tag{21}$$
Estimates (20) and (21) imply that the asymptotic relation (6) holds for any possible $\varkappa$. In addition, we observe that, by Lemma 2, the distribution $F_{S_{(\nu)}}$ belongs to $\mathcal{H}$ together with $F_{X_1}$. Part (i) of the proposition is proved. □
Proof of part (ii). The statement of this part follows immediately from the estimate (21) and Lemma 1 because
$$\limsup_{x\to\infty}e^{\lambda x}\,\overline{F_{S_{(\nu)}}}(x)\leq\mathbb{P}(\nu\geq1)\limsup_{x\to\infty}e^{\lambda x}\,\overline{F_{X_1}}(x)$$
for any $\lambda>0$. □

5.5. Proof of Proposition 5

Proof of part (i). The proof of this part is similar to the proof of part (i) of Proposition 1. Namely, for $\lambda>0$ and $K\geq2$, using (8), we get that
$$\mathbb{E}\,e^{\lambda S^{(\nu)}}\geq\mathbb{E}\,e^{\lambda S^{(\nu)}}\mathbb{1}_{\{\nu\leq K\}}=\sum_{n=0}^{K}\mathbb{E}\,e^{\lambda S^{(n)}}\,\mathbb{P}(\nu=n)\geq\sum_{n=0}^{K}\mathbb{E}\,e^{\lambda S_n}\,\mathbb{P}(\nu=n)\geq\sum_{n=0}^{K}\Delta^{n}\,\mathbb{P}(\nu=n)=\mathbb{E}\,e^{\nu\log\Delta}\mathbb{1}_{\{\nu\leq K\}}$$
with $\Delta=\Delta(\lambda)=\inf_{k\geq1}\mathbb{E}\,e^{\lambda X_k}>1$. The condition $F_\nu\in\mathcal{H}$ implies that
$$\lim_{K\to\infty}\mathbb{E}\,e^{\nu\log\Delta}\mathbb{1}_{\{\nu\leq K\}}=\infty.$$
Therefore, $\mathbb{E}\,e^{\lambda S^{(\nu)}}=\infty$ for an arbitrary $\lambda>0$, i.e. $F_{S^{(\nu)}}\in\mathcal{H}$. Part (i) of the proposition is proved. □
Proof of part (ii). The assertion of this part is obvious because the condition $\inf_{k\geq1}\mathbb{P}(X_k\geq a)=1$ with $a>0$ implies that $\inf_{k\geq1}\mathbb{E}\,e^{\lambda X_k}>1$ for any $\lambda>0$. The details of this implication are presented in the proof of Proposition 1 (ii). □
Proof of part (iii). For positive $x$ we have
$$\overline{F_{S^{(\nu)}}}(x)=\sum_{n=1}^{\infty}\overline{F_{S^{(n)}}}(x)\,\mathbb{P}(\nu=n)=\sum_{n=1}^{\infty}\mathbb{P}\left(\bigcup_{k=1}^{n}\{S_k>x\}\right)\mathbb{P}(\nu=n)\geq\sum_{n=1}^{\infty}\mathbb{P}(S_1>x)\,\mathbb{P}(\nu=n)=\overline{F_{X_1}}(x)\,\mathbb{P}(\nu\geq1).\tag{22}$$
The assertion of part (iii) now follows from Lemma 1 because, by (22),
$$\limsup_{x\to\infty}e^{\lambda x}\,\overline{F_{S^{(\nu)}}}(x)\geq\mathbb{P}(\nu\geq1)\limsup_{x\to\infty}e^{\lambda x}\,\overline{F_{X_1}}(x)$$
for an arbitrary positive $\lambda$. □
Proof of part (iv). The conditions of this part and Proposition 1 (parts (iii) and (iv)) imply that $F_{S_\nu}\in\mathcal{H}$. In addition, for positive $x$,
$$\overline{F_{S^{(\nu)}}}(x)=\sum_{n=1}^{\infty}\mathbb{P}\left(\max\{S_1,S_2,\ldots,S_n\}>x\right)\mathbb{P}(\nu=n)\geq\sum_{n=1}^{\infty}\mathbb{P}(S_n>x)\,\mathbb{P}(\nu=n)=\overline{F_{S_\nu}}(x).$$
Hence $F_{S^{(\nu)}}\in\mathcal{H}$ according to Lemma 2. Part (iv) of the proposition is proved. □
Proof of part (v). Let $\lambda>0$ be the positive number from the condition of part (v), i.e.
$$\sup_{k\geq1}\mathbb{E}\,e^{\lambda X_k}=\widehat{c}_\lambda$$
with some positive constant $\widehat{c}_\lambda$. For this $\lambda$ we have
$$\sup_{k\geq1}\mathbb{E}\,e^{\lambda X_k^{+}}=\sup_{k\geq1}\mathbb{E}\left(e^{\lambda X_k^{+}}\mathbb{1}_{\{X_k\geq0\}}+e^{\lambda X_k^{+}}\mathbb{1}_{\{X_k<0\}}\right)=\sup_{k\geq1}\mathbb{E}\left(e^{\lambda X_k}\mathbb{1}_{\{X_k\geq0\}}+\mathbb{1}_{\{X_k<0\}}\right)\leq\widehat{c}_\lambda+1,$$
where $X_k^{+}=X_k\mathbb{1}_{\{X_k\geq0\}}$ for $k\in\{1,2,\ldots\}$. Due to Proposition 1 (vi), the d.f. $F_{S_\nu^{+}}$ of the r.v. $S_\nu^{+}=X_1^{+}+\cdots+X_\nu^{+}$ belongs to the class $\mathcal{H}^c$.
According to the standard representation, for positive $x$ we have
$$\overline{F_{S^{(\nu)}}}(x)=\sum_{n=1}^{\infty}\mathbb{P}\left(\max\{S_1,S_2,\ldots,S_n\}>x\right)\mathbb{P}(\nu=n)\leq\sum_{n=1}^{\infty}\mathbb{P}\left(\max\{S_1^{+},S_2^{+},\ldots,S_n^{+}\}>x\right)\mathbb{P}(\nu=n)=\sum_{n=1}^{\infty}\mathbb{P}(S_n^{+}>x)\,\mathbb{P}(\nu=n)=\overline{F_{S_\nu^{+}}}(x).$$
By applying Lemma 2 we get that the d.f. $F_{S^{(\nu)}}$ is light-tailed due to the light tail of the d.f. $F_{S_\nu^{+}}$. Part (v) of the proposition is proved. □

6. Concluding remarks

In this paper, we have shown that both the heavy-tailed and the light-tailed classes of distributions have a number of interesting properties related to randomly stopped structures. Based on our results, various heavy-tailed or light-tailed distributions can be constructed. On the other hand, according to the propositions we proved, in most cases it becomes easier to determine whether a considered distribution is light-tailed or heavy-tailed. The main novelty of our work is that we study randomly stopped structures built from independent but possibly differently distributed primary random variables. As mentioned in Section 1, randomly stopped structures together with heavy-tailed distributions appear in such fields as insurance and financial mathematics, survival analysis, risk management, computer and communication networks, etc. Recently, many articles have been written on heavy-tailed distributions, both in scientific and in popular science journals; let us mention a few such works. Heavy-tailed distributions applied to financial losses and stochastic returns are described and discussed in [43,44,45]. The influence of heavy-tailed distributions on actuarial statistics is examined in [46,47]. The use of heavy-tailed distributions in social and medical research is discussed in [48,49]. The application of heavy-tailed distributions of a special form to the study of computer systems and telecommunication networks is presented in [50,51,52]. From the content of the mentioned works, it can be seen that in many cases it is quite difficult to fit heavy-tailed distributions to real data. Therefore, the transformations of heavy-tailed distributions proposed here increase the chances of choosing the right distribution. In our opinion, it therefore makes sense to continue research on transformations of heavy-tailed distributions. In addition to the randomly stopped structures examined in this paper, moment transformations, random effects, and randomly stopped products could be considered, for instance.

Author Contributions

Conceptualization, R.L. and J.Š.; methodology, J.Š.; software, S.D.; validation, R.L., S.D. and J.K.; formal analysis, J.K.; investigation, S.D. and J.K.; resources, J.Š.; writing-original draft preparation, S.D.; writing-review and editing, R.L.; visualization, S.D. and J.K.; supervision, J.Š.; project administration, R.L.; funding acquisition, J.Š. and J.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Albin, J.M.P. A note on the closure of convolution power mixtures (random sums) of exponential distributions. J. Aust. Math. Soc. 2008, 84, 1–7. [Google Scholar] [CrossRef]
  2. Andrulytė, I.M.; Manstavičius, M.; Šiaulys, J. Randomly stopped maximum and maximum of sums with consistently varying distributions. Mod. Stoch.: Theory Appl. 2017, 4, 65–78. [Google Scholar] [CrossRef]
  3. Cheng, D.; Ni, F.; Pakes, A.G.; Wang, Y. Some properties of the exponential distribution class with application to risk theory. J. Korean Stat. Soc. 2012, 41, 515–527. [Google Scholar] [CrossRef]
  4. Cline, D.B.H. Convolutions of the distributions with exponential tails. J. Aust. Math. Soc. 1987, 43, 347–365. [Google Scholar] [CrossRef]
  5. Danilenko, S.; Šiaulys, J. Random convolution of O-exponential distributions. Nonlinear Anal-Model. 2015, 20, 447–454. [Google Scholar] [CrossRef]
  6. Danilenko, S.; Šiaulys, J. Randomly stopped sums of not identically distributed heavy tailed random variables. Stat. Probab. Lett. 2016, 113, 84–93. [Google Scholar] [CrossRef]
  7. Danilenko, S.; Markevičiūtė, J.; Šiaulys, J. Randomly stopped sums with exponential-type distributions. Nonlinear Anal-Model. 2017, 22, 793–807. [Google Scholar] [CrossRef]
  8. Danilenko, S.; Paškauskaitė, S.; Šiaulys, J. Random convolution of inhomogeneous distributions with O-exponential tail. Mod. Stoch.: Theory Appl. 2016, 3, 79–94. [Google Scholar] [CrossRef]
  9. Danilenko, S.; Šiaulys, J.; Stepanauskas, G. Closure properties of O-exponential distributions. Stat. Probab. Lett. 2018, 140, 63–70. [Google Scholar] [CrossRef]
  10. Dirma, M.; Nakliuda, N.; Šiaulys, J. Generalized moments of sums with heavy-tailed random summands. Lith. Math. J. 2023, 63, 254–271. [Google Scholar] [CrossRef]
  11. Embrechts, P.; Goldie, C.M. On the closure and factorization properties of subexponential and related distributions. J. Aust. Math. Soc. Ser. A 1980, 29, 243–256. [Google Scholar] [CrossRef]
  12. Geng, B.; Liu, Z.; Wang, S. A Kesten-type inequality for randomly weighted sums of dependent subexponential random variables with applications to risk theory. Lith. Math. J. 2023, 63, 81–91. [Google Scholar] [CrossRef]
  13. Karasevičienė, J.; Šiaulys, J. Randomly stopped sums with generalized subexponential distribution. Axioms 2023, 12, 641. [Google Scholar] [CrossRef]
  14. Karasevičienė, J.; Šiaulys, J. Randomly stopped minimum, maximum, minimum of sums and maximum of sums with generalized subexponential distributions. Axioms 2024, 13, 85. [Google Scholar] [CrossRef]
  15. Kizinevič, E.; Sprindys, J.; Šiaulys, J. Randomly stopped sums with consistently varying distributions. Mod. Stoch.: Theory Appl. 2016, 3, 165–179. [Google Scholar] [CrossRef]
  16. Konstantinides, D.; Leipus, R.; Šiaulys, J. A note on product-convolution for generalized subexponential distributions. Nonlinear Anal.-Model. 2022, 27, 11054–11067. [Google Scholar] [CrossRef]
  17. Konstantinides, D.; Leipus, R.; Šiaulys, J. On the non-closure under convolution for strong subexponential distributions. Nonlinear Anal.-Model. 2023, 28, 97–115. [Google Scholar] [CrossRef]
  18. Leipus, R.; Šiaulys, J. Closure of some heavy-tailed distribution classes under random convolution. Lith. Math. J. 2012, 52, 249–258. [Google Scholar] [CrossRef]
  19. Leipus, R.; Šiaulys, J. On the random max-closure for heavy-tailed random variables. Lith. Math. J. 2017, 57, 208–221. [Google Scholar] [CrossRef]
  20. Lin, J.; Wang, Y. New examples of heavy-tailed O-subexponential distributions and related closure properties. Stat. Probab. Lett. 2012, 82, 427–432. [Google Scholar] [CrossRef]
  21. Ragulina, O.; Šiaulys, J. Randomly stopped minima and maxima with exponential-type distributions. Nonlinear Anal.-Model. 2019, 24, 297–313. [Google Scholar] [CrossRef]
  22. Shimura, T.; Watanabe, T. Infinite divisibility and generalized subexponentiality. Bernoulli 2005, 11, 445–469. [Google Scholar] [CrossRef]
  23. Sprindys, J.; Šiaulys, J. Regularly distributed randomly stopped sum, minimum and maximum. Nonlinear Anal.-Model. 2020, 25, 509–522. [Google Scholar] [CrossRef]
  24. Teicher, H. Moments of randomly stopped sums revisited. J. Theor. Probab. 1995, 8, 779–793. [Google Scholar] [CrossRef]
  25. Tesemnikov, P.I. On the distribution tail of the sum of the maxima of two randomly stopped sums in the presence of heavy tails. Sib. Elektron. Mat. Izv. 2019, 16, 1785–1794. [Google Scholar] [CrossRef]
  26. Watanabe, T. Convolution equivalence and distributions of random sums. Probab. Theory Relat. Fields 2008, 142, 367–397. [Google Scholar] [CrossRef]
  27. Watanabe, T. The Wiener condition and the conjectures of Embrechts and Goldie. Ann. Probab. 2019, 47, 1221–1239. [Google Scholar] [CrossRef]
  28. Watanabe, T.; Yamamuro, K. Ratio of the tail of an infinitely divisible distribution on the line to that of its Lévy measure. Electron. J. Probab. 2010, 15, 44–74. [Google Scholar] [CrossRef]
  29. Xu, H.; Foss, S.; Wang, Y. Convolution and convolution-root properties of long-tailed distributions. Extremes 2015, 18, 605–628. [Google Scholar] [CrossRef]
  30. Xu, H.; Wang, Y.; Cheng, D.; Yu, C. On the closure under infinitely divisible distribution roots. Lith. Math. J. 2022, 62, 258–287. [Google Scholar] [CrossRef]
  31. Leipus, R.; Šiaulys, J.; Konstantinides, D. Minimum of heavy-tailed random variables is not heavy-tailed. AIMS Math. 2023, 8, 13066–13072. [Google Scholar] [CrossRef]
  32. Bingham, N.H.; Goldie, C.M.; Teugels, J.L. Regular Variation. Cambridge University Press, Cambridge, 1987.
  33. Borovkov, A.A.; Borovkov, K.A. Asymptotic Analysis of Random Walks: Heavy-Tailed Distributions. Cambridge University Press, Cambridge, 2008.
  34. Embrechts, P.; Klüppelberg, C.; Mikosch, T. Modelling Extremal Events for Insurance and Finance. Springer, New York, 1997.
  35. Foss, S.; Korshunov, D.; Zachary, S. An Introduction to Heavy-Tailed and Subexponential Distributions, 2nd ed. Springer, New York, 2013.
  36. Konstantinides, D.G. Risk Theory: A Heavy Tail Approach. World Scientific, New Jersey, 2018.
  37. Leipus, R.; Šiaulys, J.; Konstantinides, D. Closure Properties for Heavy-Tailed and Related Distributions: An Overview. Springer, Cham, 2023.
  38. Nair, J.; Wierman, A.; Zwart, B. The Fundamentals of Heavy Tails: Properties, Emergence, and Estimation. Cambridge University Press, Cambridge, 2022.
  39. Liu, Y. A general treatment of alternative expectation formulae. Stat. Probab. Lett. 2020, 166, 108863. [Google Scholar] [CrossRef]
  40. Hall, P. The distribution of means for samples of size N drawn from a population in which the variate takes values between 0 and 1, all such values being equally probable. Biometrika 1927, 19, 240–245. [Google Scholar] [CrossRef]
  41. Irwin, J.O. On the frequency distribution of the means of samples from a population having any law of frequency with finite moments, with special reference to Pearson’s type II. Biometrika 1927, 19, 225–239. [Google Scholar] [CrossRef]
  42. Johnson, N.L.; Kotz, S.; Balakrishnan, N. Continuous Univariate Distributions, Vol 2, 2nd ed. Wiley, NY, 1995.
  43. Chen, H.; Fan, K. Tail Value-at-Risk based profiles for extreme risks and their application in distributionally robust portfolio selections. Mathematics 2023, 11, 91. [Google Scholar] [CrossRef]
  44. Mehta, M.J.; Yang, F. Portfolio optimization for extreme risks with maximum diversification: An empirical analysis. Risks 2022, 10, 101. [Google Scholar] [CrossRef]
  45. Sepanski, J.H.; Wang, X. New classes of distortion risk measures and their estimation. Risks 2023, 11, 194. [Google Scholar] [CrossRef]
  46. Mahdavi, A.; Kharazmi, O.; Contreras-Reyes, J.E. On the contaminated weighted exponential distribution: Applications to modelling insurance claim data. J. Risk Financ. Manag. 2022, 15, 500. [Google Scholar] [CrossRef]
  47. Olmos, N.M.; Gómez-Déniz, F.; Venegas, O. The heavy-tailed Gleser model: Properties, estimation, and applications. Mathematics 2022, 10, 4577. [Google Scholar] [CrossRef]
  48. Klebanov, L.B.; Kuvaeva-Gudoshnikova, Y.V.; Rachev, S.T. Heavy-tailed probability distributions: Some examples of their appearance. Mathematics 2023, 11, 3094. [Google Scholar] [CrossRef]
  49. Santoro, K.I.; Gallardo, D.I.; Venegas, O.; Cortés, I.E.; Gómez, H.W. A heavy-tailed distribution based on the Lomax-Rayleigh distribution with application to medical data. Mathematics 2023, 11, 4626. [Google Scholar] [CrossRef]
  50. Markovich, N.; Vaičiulis, M. Extreme value statistics for evolving random networks. Mathematics 2023, 11, 2171. [Google Scholar] [CrossRef]
  51. Rusev, V.; Skorikov, A. The asymptotics of moments for the remaining time of heavy-tail distributions. Comput. Sci. Math. Forum 2023, 7, 52. [Google Scholar] [CrossRef]
  52. Sousa-Vieira, M.E.; Fernández-Veiga, M. Study of coded ALOHA with multi-user detection under heavy-tailed and correlated arrivals. Future Internet 2023, 15, 132. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.