Preprint
Article

Birth-Death Processes with Two-type Catastrophes

A peer-reviewed article of this preprint also exists.

This version is not peer-reviewed

Submitted: 07 April 2024
Posted: 08 April 2024

Abstract
This paper concentrates on general birth-death processes with two different types of catastrophes. The Laplace transform of the transition probability function of a birth-death process with two-type catastrophes is expressed in terms of the Laplace transform of the transition probability function of the birth-death process without catastrophes. The first effective catastrophe occurrence time is then considered, and the Laplace transform of its probability density function, its expectation and its variance are obtained.
Keywords: 
Subject: Computer Science and Mathematics - Probability and Statistics

1. Introduction

Markov processes form a very important branch of stochastic processes and have a very wide range of applications. Many research works can be consulted, such as Anderson [1], Asmussen [3], Chen [9] and others.
The birth-death process is a very important class of Markov processes and has been widely applied in finance, communications, population science and queueing theory. In the past few decades, many works have generalized the ordinary birth-death process and made the theory of birth-death processes more and more fruitful. Recently, stochastic models with catastrophes have aroused much research interest. For example, Chen, Zhang and Liu [5], Economou and Fakinos [12] and Pakes [18] considered the instantaneous distribution of continuous-time Markov chains with catastrophes. Chen and Renshaw [7,8] analyzed the effect of catastrophes on the M/M/1 queueing model. Zhang and Li [20] extended these results to the M/M/c queueing model with catastrophes. Li and Zhang [17] further considered the effect of catastrophes on the M^X/M/c queueing model. Di Crescenzo et al. [10] discussed the probability distribution and the relevant numerical characteristics of the first occurrence time of an effective disaster for general birth-death processes with catastrophes. Other related works can be found in Artalejo [2], Bayer and Boxma [4], Chen, Pollett, Li and Zhang [6], Dudin and Karolik [11], Gelenbe [13], Gelenbe, Glynn and Sigman [14] and Jain and Sigman [15], among others.
In this paper, we mainly consider the properties of the first occurrence time of an effective catastrophe for general birth-death processes with two-type catastrophes.
We start our discussion by presenting the infinitesimal generator, i.e., the so-called q-matrix.
Definition 1.
Let {N_t : t ≥ 0} be a continuous-time Markov chain on the state space Z_+ = {0, 1, 2, …} whose q-matrix Q = (q_{ij} : i, j ∈ Z_+) is given by

Q = Q̂ + Q_d,

where Q̂ = (q̂_{ij} : i, j ∈ Z_+) and Q_d = (q^{(d)}_{ij} : i, j ∈ Z_+) are given by

q̂_{ij} =
  λ_i,   i ≥ 0, j = i + 1,
  μ_i,   i ≥ 1, j = i − 1,
  −λ_0,  i = j = 0,
  −ω_i,  i = j ≥ 1,
  0,     otherwise,

and

q^{(d)}_{ij} =
  β,    i = 0 or i ≥ 2, j = 1,
  α,    i ≥ 1, j = 0,
  −β,   i = j = 0,
  −α,   i = j = 1,
  −γ,   i = j ≥ 2,
  0,    otherwise,

with α, β ≥ 0, λ_i > 0 (i ≥ 0), μ_i > 0 (i ≥ 1), ω_i = λ_i + μ_i (i ≥ 1) and γ = α + β.
Then {N_t : t ≥ 0} is called a birth-death process with two-type catastrophes. Its transition probability function is denoted by P(t) = (p_{i,j}(t) : i, j ∈ Z_+) and the corresponding resolvent by Π(λ) = (π_{j,n}(λ) : j, n ∈ Z_+).
Remark 1.
By Definition 1, α and β describe the rates of catastrophes; we call them the α-catastrophe and the β-catastrophe, respectively. An α-catastrophe kills all the individuals in the system, while a β-catastrophe kills the individuals in the system with only one individual left. If α = β = 0, i.e., there is no catastrophe, then {N_t : t ≥ 0} degenerates into an ordinary birth-death process, denoted by {N̂_t : t ≥ 0}, whose q-matrix is Q̂. The transition probability function of {N̂_t : t ≥ 0} is denoted by P̂(t) = (p̂_{i,j}(t) : i, j ∈ Z_+) and the corresponding resolvent by Π̂(λ) = (π̂_{j,n}(λ) : j, n ∈ Z_+).
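The dynamics in Definition 1 are straightforward to simulate. The following Gillespie-type sketch (Python; the rate functions and all numerical values are illustrative assumptions, not taken from the paper) makes the two catastrophe mechanisms concrete: an α-catastrophe resets the state to 0 from any state i ≥ 1, while a β-catastrophe resets it to 1 from any state i = 0 or i ≥ 2.

```python
import random

def simulate_bd_catastrophe(T, lam, mu, alpha, beta, n0=5, seed=0):
    """Gillespie-type simulation of one path of the birth-death process
    with two-type catastrophes (Definition 1); returns the state at time T.

    lam(i) and mu(i) are the birth and death rate functions; an
    alpha-catastrophe sends i >= 1 to 0, a beta-catastrophe sends
    i = 0 or i >= 2 to 1.
    """
    rng = random.Random(seed)
    t, n = 0.0, n0
    while True:
        rates = [
            lam(n),                               # birth: n -> n+1
            mu(n) if n >= 1 else 0.0,             # death: n -> n-1
            alpha if n >= 1 else 0.0,             # alpha-catastrophe: n -> 0
            beta if (n == 0 or n >= 2) else 0.0,  # beta-catastrophe: n -> 1
        ]
        total = sum(rates)
        t += rng.expovariate(total)
        if t > T:
            return n
        u = rng.random() * total
        if u < rates[0]:
            n += 1
        elif u < rates[0] + rates[1]:
            n -= 1
        elif u < rates[0] + rates[1] + rates[2]:
            n = 0
        else:
            n = 1
```

Averaging `simulate_bd_catastrophe(10.0, lambda i: 1.0, lambda i: 1.5, 0.3, 0.2, seed=s)` over many seeds s estimates the distribution of N_t at t = 10 under these hypothetical rates.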

2. Probability Transition Function

From Definition 1, we see that a catastrophe may reduce the system state to 0 or 1. However, since the natural death rates μ_1, μ_2 > 0, when the system state moves from 1 to 0 or from 2 to 1, it is difficult to distinguish whether this was caused by a catastrophe or by a natural death. Therefore, it is important to discuss such effective catastrophes. For this purpose, we first establish the relationship between P(t) and P̂(t) (or, equivalently, between Π(λ) and Π̂(λ)).
Lemma 1. (i) P(t) = (p_{j,n}(t) : j, n ∈ Z_+) satisfies the following Kolmogorov forward equations: for any j, n ∈ Z_+ and t ≥ 0,

p′_{j,0}(t) = −(λ_0 + γ) p_{j,0}(t) + μ_1 p_{j,1}(t) + α,
p′_{j,1}(t) = λ_0 p_{j,0}(t) − (ω_1 + γ) p_{j,1}(t) + μ_2 p_{j,2}(t) + β,
p′_{j,n}(t) = λ_{n−1} p_{j,n−1}(t) − (ω_n + γ) p_{j,n}(t) + μ_{n+1} p_{j,n+1}(t),  n ≥ 2,

or, equivalently, in resolvent form,

(λ + λ_0 + γ) π_{j,0}(λ) − δ_{j,0} = μ_1 π_{j,1}(λ) + α/λ,
(λ + ω_1 + γ) π_{j,1}(λ) − δ_{j,1} = λ_0 π_{j,0}(λ) + μ_2 π_{j,2}(λ) + β/λ,
(λ + ω_n + γ) π_{j,n}(λ) − δ_{j,n} = λ_{n−1} π_{j,n−1}(λ) + μ_{n+1} π_{j,n+1}(λ),  n ≥ 2.    (5)

(ii) P̂(t) = (p̂_{j,n}(t) : j, n ∈ Z_+) satisfies the following Kolmogorov forward equations: for any j, n ∈ Z_+ and t ≥ 0,

p̂′_{j,0}(t) = −λ_0 p̂_{j,0}(t) + μ_1 p̂_{j,1}(t),
p̂′_{j,n}(t) = λ_{n−1} p̂_{j,n−1}(t) − (λ_n + μ_n) p̂_{j,n}(t) + μ_{n+1} p̂_{j,n+1}(t),  n ≥ 1,

or, equivalently, in resolvent form,

(λ + λ_0) π̂_{j,0}(λ) − δ_{j,0} = μ_1 π̂_{j,1}(λ),
(λ + λ_n + μ_n) π̂_{j,n}(λ) − δ_{j,n} = λ_{n−1} π̂_{j,n−1}(λ) + μ_{n+1} π̂_{j,n+1}(λ),  n ≥ 1.
Proof. (i) By the Kolmogorov forward equations and the honesty of P(t), we know that

p′_{j,0}(t) = −(λ_0 + β) p_{j,0}(t) + (μ_1 + α) p_{j,1}(t) + Σ_{k=2}^∞ α p_{j,k}(t)
 = −(λ_0 + β) p_{j,0}(t) + μ_1 p_{j,1}(t) + Σ_{k=1}^∞ α p_{j,k}(t)
 = −(λ_0 + β) p_{j,0}(t) + μ_1 p_{j,1}(t) + α (1 − p_{j,0}(t))
 = −(λ_0 + γ) p_{j,0}(t) + μ_1 p_{j,1}(t) + α

and

p′_{j,1}(t) = (λ_0 + β) p_{j,0}(t) − (λ_1 + μ_1 + α) p_{j,1}(t) + (μ_2 + β) p_{j,2}(t) + Σ_{k=3}^∞ β p_{j,k}(t)
 = λ_0 p_{j,0}(t) − (ω_1 + α) p_{j,1}(t) + μ_2 p_{j,2}(t) + β (1 − p_{j,1}(t))
 = λ_0 p_{j,0}(t) − (ω_1 + γ) p_{j,1}(t) + μ_2 p_{j,2}(t) + β.

The other equalities of (i) and (ii) follow directly from the Kolmogorov forward equations and the Laplace transform. The proof is complete. □
The following theorem plays an important role in the later discussion; it reveals the relationship between P(t) and P̂(t) (or, equivalently, between Π(λ) and Π̂(λ)).
Theorem 1.
For any j, n ∈ Z_+, we have

p_{j,n}(t) = e^{−γt} p̂_{j,n}(t) + α ∫_0^t e^{−γs} p̂_{0,n}(s) ds + β ∫_0^t e^{−γs} p̂_{1,n}(s) ds,    (6)

or, equivalently, in resolvent form,

π_{j,n}(λ) = π̂_{j,n}(λ + γ) + (1/λ)[α π̂_{0,n}(λ + γ) + β π̂_{1,n}(λ + γ)].    (7)
Proof. 
We first assume α = 0. The corresponding process is denoted by Ñ_t and its transition probability function by P̃(t) = (p̃_{j,n}(t) : j, n ∈ Z_+). Denote {A_t : t ≥ 0} := {N̂_t : t ≥ 0}. Let {K_t : t ≥ 0} be a Poisson process with parameter β, independent of {A_t : t ≥ 0}; note that {K_t : t ≥ 0} can be viewed as a catastrophe flow. Let l(t) be the time elapsed since the last catastrophe before time t. Then l(t) has the truncated exponential law

P(l(t) ≤ u) = (1 − e^{−βu}) I_{[0,t)}(u).

Denote {A^{(0)}_t : t ≥ 0} := {A_t : t ≥ 0}. Let {A^{(n)}_t : t ≥ 0}, n ≥ 1, be a sequence of independent copies of {A^{(0)}_t : t ≥ 0} but with A^{(n)}_0 = 1. Define {R_t : t ≥ 0} by

R_t = A^{(K_t)}_{l(t)},  t ≥ 0.
Then {R_t : t ≥ 0} is a continuous-time Markov chain: it evolves like A^{(0)}_t; at the first catastrophe time it jumps to state 1 and then evolves like A^{(1)}_t; at the next catastrophe time it jumps to state 1 again, and so on. Let P̄(t) = (p̄_{j,n}(t) : j, n ∈ Z_+) be the transition probability function of {R_t : t ≥ 0}. Then

p̄_{j,n}(t) = P(R_t = n | R_0 = j) = P_j(R_t = n) = E_j[I_{{n}}(R_t)] = E_j[E_j[I_{{n}}(A^{(K_t)}_{l(t)}) | K_t, l(t)]],

where P_j = P(· | R_0 = j) and E_j is the mathematical expectation under P_j. Denote G(K_t, l(t)) := E_j[I_{{n}}(A^{(K_t)}_{l(t)}) | K_t, l(t)] for the moment. Then the above quantity equals

E_j[G(K_t, l(t))] = E_j[E_j[G(K_t, l(t)) | l(t)]] = P_j(l(t) = t) E_j[G(K_t, l(t)) | l(t) = t] + β ∫_0^t e^{−βs} E_j[G(K_t, l(t)) | l(t) = s] ds.

Since {l(t) = t} = {K_t = 0} and {R_0 = j} = {A_0 = j}, we have

P_j(l(t) = t) = P_j(K_t = 0) = e^{−βt}

and

E_j[G(K_t, l(t)) | l(t) = t] = E_j[I_{{n}}(A^{(0)}_t)] = E_j[I_{{n}}(A_t)] = p̂_{j,n}(t).

If s < t, then

E_j[G(K_t, l(t)) | l(t) = s] = Σ_{k=1}^∞ P_j(K_t = k | l(t) = s) G(k, s)
 = Σ_{k=1}^∞ P_j(K_t = k | l(t) = s) E_j[I_{{n}}(A^{(K_t)}_{l(t)}) | K_t = k, l(t) = s]
 = Σ_{k=1}^∞ P_j(K_t = k | l(t) = s) E_j[I_{{n}}(A^{(k)}_s)]
 = Σ_{k=1}^∞ P_j(K_t = k | l(t) = s) E[I_{{n}}(A^{(k)}_s) | A_0 = j, A^{(k)}_0 = 1]
 = Σ_{k=1}^∞ P_j(K_t = k | l(t) = s) P(A^{(k)}_s = n | A_0 = j, A^{(k)}_0 = 1)
 = Σ_{k=1}^∞ P_j(K_t = k | l(t) = s) P(A^{(k)}_s = n | A^{(k)}_0 = 1)
 = Σ_{k=1}^∞ P_j(K_t = k | l(t) = s) P(A_s = n | A_0 = 1)
 = p̂_{1,n}(s).

Therefore,

p̄_{j,n}(t) = e^{−βt} p̂_{j,n}(t) + β ∫_0^t e^{−βs} p̂_{1,n}(s) ds.

It is easy to check that p̄_{j,n}(0) = p̃_{j,n}(0). This implies that R_t and Ñ_t are identical in distribution. Hence,

p̃_{j,n}(t) = e^{−βt} p̂_{j,n}(t) + β ∫_0^t e^{−βs} p̂_{1,n}(s) ds.    (8)
Now consider the general case α > 0. Denote {Ã_t : t ≥ 0} := {Ñ_t : t ≥ 0}. Let {K̃_t : t ≥ 0} be a Poisson process with parameter α, independent of {Ã_t : t ≥ 0}; {K̃_t : t ≥ 0} can be viewed as a catastrophe flow with parameter α. Let l̃(t) be the time elapsed since the last catastrophe before time t. Then l̃(t) has the truncated exponential law

P(l̃(t) ≤ u) = (1 − e^{−αu}) I_{[0,t)}(u).

Denote {Ã^{(0)}_t : t ≥ 0} := {Ã_t : t ≥ 0}. Let {Ã^{(n)}_t : t ≥ 0}, n ≥ 1, be a sequence of independent copies of {Ã^{(0)}_t : t ≥ 0} but with Ã^{(n)}_0 = 0 (n ≥ 1). Define {R̃_t : t ≥ 0} by

R̃_t = Ã^{(K̃_t)}_{l̃(t)},  t ≥ 0.

Let P̌(t) = (p̌_{j,n}(t) : j, n ∈ Z_+) be the transition probability function of {R̃_t : t ≥ 0}. By a similar argument as above, we know that

p̌_{j,n}(t) = e^{−αt} p̃_{j,n}(t) + α ∫_0^t e^{−αs} p̃_{0,n}(s) ds.

By (8),

p̌_{j,n}(t) = e^{−αt} [e^{−βt} p̂_{j,n}(t) + β ∫_0^t e^{−βs} p̂_{1,n}(s) ds] + α ∫_0^t e^{−αs} [e^{−βs} p̂_{0,n}(s) + β ∫_0^s e^{−βu} p̂_{1,n}(u) du] ds
 = e^{−(α+β)t} p̂_{j,n}(t) + α ∫_0^t e^{−(α+β)s} p̂_{0,n}(s) ds + β ∫_0^t e^{−(α+β)s} p̂_{1,n}(s) ds.

It is easy to check that p̌_{j,n}(0) = p_{j,n}(0). This implies that R̃_t and N_t are identical in distribution. Hence,

p_{j,n}(t) = e^{−(α+β)t} p̂_{j,n}(t) + α ∫_0^t e^{−(α+β)s} p̂_{0,n}(s) ds + β ∫_0^t e^{−(α+β)s} p̂_{1,n}(s) ds.

Thus (6) is proved. Taking the Laplace transform of (6) yields (7). The proof is complete. □
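Theorem 1 can be checked numerically. On a finite truncation of the state space (setting λ_N = 0 so that both chains remain honest), the probabilistic construction in the proof goes through verbatim, so the resolvent form of Theorem 1 should hold exactly there. The following sketch (Python with numpy; all rates and parameter values are illustrative assumptions, not taken from the paper) verifies this:

```python
import numpy as np

def bd_qmatrix(lam, mu, N):
    """Truncated birth-death q-matrix Q-hat on {0,...,N} (lambda_N = 0)."""
    Q = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        if i < N:
            Q[i, i + 1] = lam[i]
        if i > 0:
            Q[i, i - 1] = mu[i]
        Q[i, i] = -Q[i].sum()
    return Q

def catastrophe_qmatrix(alpha, beta, N):
    """The perturbation Q_d of Definition 1 on {0,...,N}."""
    Qd = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        if i >= 1:
            Qd[i, 0] += alpha            # alpha-catastrophe: i -> 0
        if i == 0 or i >= 2:
            Qd[i, 1] += beta             # beta-catastrophe: i -> 1
        Qd[i, i] -= Qd[i].sum()
    return Qd

N, alpha, beta = 25, 0.4, 0.7            # illustrative parameters
gamma = alpha + beta
lam = [1.2] * N + [0.0]                  # lambda_i (lambda_N = 0)
mu = [0.0] + [0.9] * N                   # mu_i (mu_0 unused)
Qhat = bd_qmatrix(lam, mu, N)
Q = Qhat + catastrophe_qmatrix(alpha, beta, N)

lmb = 0.8
I = np.eye(N + 1)
Pi = np.linalg.inv(lmb * I - Q)                    # resolvent of Q at lambda
Pihat = np.linalg.inv((lmb + gamma) * I - Qhat)    # resolvent of Q-hat at lambda+gamma

# Resolvent form of Theorem 1:
# pi_{j,n}(lmb) = pihat_{j,n}(lmb+gamma) + [alpha*pihat_{0,n} + beta*pihat_{1,n}]/lmb
rhs = Pihat + np.outer(np.ones(N + 1), alpha * Pihat[0] + beta * Pihat[1]) / lmb
err = np.max(np.abs(Pi - rhs))
print(err)  # residual at machine-precision level
```

Note that truncation is exact here rather than approximate: adding Q_d to any honest finite q-matrix preserves the construction used in the proof.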

3. The First Occurrence Time of Effective Catastrophe

We now consider the first effective catastrophe of {N_t : t ≥ 0}. Let C_j be the first occurrence time of an effective catastrophe for {N_t : t ≥ 0} starting from state j. The probability density function of C_j is denoted by d_j(t). Let C_{j,0} and C_{j,1} be the first occurrence times of an effective α-catastrophe and of an effective β-catastrophe, respectively. It is obvious that C_j = C_{j,0} ∧ C_{j,1}.
The properties of C_{j,0} or C_{j,1} can be discussed in a similar manner as in Di Crescenzo et al. [10]. In this paper, we mainly consider the properties of C_j and the probabilities P(C_j ≤ t, C_{j,0} < C_{j,1}) and P(C_j ≤ t, C_{j,1} < C_{j,0}). For this purpose, we construct a new process {M_t : t ≥ 0} such that {M_t : t ≥ 0} coincides with {N_t : t ≥ 0} until the occurrence of the first effective catastrophe, but {M_t : t ≥ 0} enters an absorbing state −1 if the first effective catastrophe is of β-type and enters another absorbing state −2 if it is of α-type. Therefore, the state space of {M_t : t ≥ 0} is S = {−2, −1, 0, 1, …} and its q-matrix Q̃ = (q̃_{j,n} : j, n ∈ S) is given by

q̃_{ij} =
  λ_i,   i ≥ 0, j = i + 1,
  μ_i,   i ≥ 1, j = i − 1,
  α,     i ≥ 1, j = −2,
  β,     i = 0, j = −1,
  β,     i ≥ 2, j = −1,
  −(λ_0 + β),  i = j = 0,
  −(ω_1 + α),  i = j = 1,
  −(ω_i + γ),  i = j ≥ 2,
  0,     otherwise.

Let H(t) = (h_{j,n}(t) : j, n ∈ S) and Φ(λ) = (ϕ_{j,n}(λ) : j, n ∈ S) be the Q̃-transition function and the Q̃-resolvent, respectively.
Lemma 2.
For any j ≥ 0, we have

h′_{j,−2}(t) = α (1 − h_{j,−2}(t) − h_{j,−1}(t) − h_{j,0}(t)),
h′_{j,−1}(t) = β (1 − h_{j,−2}(t) − h_{j,−1}(t) − h_{j,1}(t)),
h′_{j,0}(t) = −(λ_0 + β) h_{j,0}(t) + μ_1 h_{j,1}(t),
h′_{j,1}(t) = λ_0 h_{j,0}(t) − (ω_1 + α) h_{j,1}(t) + μ_2 h_{j,2}(t),
h′_{j,n}(t) = λ_{n−1} h_{j,n−1}(t) − (ω_n + γ) h_{j,n}(t) + μ_{n+1} h_{j,n+1}(t),  n ≥ 2,    (9)

or, equivalently, in resolvent form,

λ ϕ_{j,−2}(λ) = α (1/λ − ϕ_{j,−2}(λ) − ϕ_{j,−1}(λ) − ϕ_{j,0}(λ)),
λ ϕ_{j,−1}(λ) = β (1/λ − ϕ_{j,−2}(λ) − ϕ_{j,−1}(λ) − ϕ_{j,1}(λ)),
(λ + λ_0 + β) ϕ_{j,0}(λ) − δ_{j,0} = μ_1 ϕ_{j,1}(λ),
(λ + ω_1 + α) ϕ_{j,1}(λ) − δ_{j,1} = λ_0 ϕ_{j,0}(λ) + μ_2 ϕ_{j,2}(λ),
(λ + ω_n + γ) ϕ_{j,n}(λ) − δ_{j,n} = λ_{n−1} ϕ_{j,n−1}(λ) + μ_{n+1} ϕ_{j,n+1}(λ),  n ≥ 2.    (10)
Proof. 
By the Kolmogorov forward equations,

h′_{j,−2}(t) = Σ_{k=1}^∞ α h_{j,k}(t) = α (1 − h_{j,−2}(t) − h_{j,−1}(t) − h_{j,0}(t)),

h′_{j,−1}(t) = β h_{j,0}(t) + Σ_{k=2}^∞ β h_{j,k}(t) = β (1 − h_{j,−2}(t) − h_{j,−1}(t) − h_{j,1}(t)).

The other equalities of (9) follow directly from the Kolmogorov forward equations, and (10) follows from the Laplace transform of (9). The proof is complete. □
We now investigate the relationship between Φ(λ) and Π(λ). For this purpose, define

A_{ij}(λ) = 1 − λ π_{i,j}(λ),  i, j ≥ 0,    (11)

and

H(λ) = λ^{−1} {[λ + α A_{00}(λ)][λ + β A_{11}(λ)] − αβ A_{10}(λ) A_{01}(λ)}.    (12)
Theorem 2.
Let Φ(λ) = (ϕ_{j,n}(λ) : j, n ∈ S) be the Q̃-resolvent. Then

ϕ_{0,n}(λ) = [(λ + β A_{11}(λ)) π_{0,n}(λ) − β A_{01}(λ) π_{1,n}(λ)] / H(λ),  n ≥ 0,    (13)

ϕ_{1,n}(λ) = [−α A_{10}(λ) π_{0,n}(λ) + (λ + α A_{00}(λ)) π_{1,n}(λ)] / H(λ),  n ≥ 0,    (14)

and

ϕ_{j,n}(λ) = π_{j,n}(λ) + F_j(λ) π_{0,n}(λ) + G_j(λ) π_{1,n}(λ),  j ≥ 2, n ≥ 0,    (15)

where

F_j(λ) = [αβ A_{10}(λ) A_{j1}(λ) − α (λ + β A_{11}(λ)) A_{j0}(λ)] / (λ H(λ))    (16)

and

G_j(λ) = [αβ A_{01}(λ) A_{j0}(λ) − β (λ + α A_{00}(λ)) A_{j1}(λ)] / (λ H(λ)),    (17)

with (π_{j,n}(λ) : j, n ≥ 0) being given by Theorem 1.
Proof. 
By (10) with j = 0, 1,

(λ + λ_0 + β) ϕ_{0,0}(λ) − 1 = μ_1 ϕ_{0,1}(λ),
(λ + ω_1 + α) ϕ_{0,1}(λ) = λ_0 ϕ_{0,0}(λ) + μ_2 ϕ_{0,2}(λ),
(λ + ω_n + γ) ϕ_{0,n}(λ) = λ_{n−1} ϕ_{0,n−1}(λ) + μ_{n+1} ϕ_{0,n+1}(λ),  n ≥ 2,    (18)

and

(λ + λ_0 + β) ϕ_{1,0}(λ) = μ_1 ϕ_{1,1}(λ),
(λ + ω_1 + α) ϕ_{1,1}(λ) − 1 = λ_0 ϕ_{1,0}(λ) + μ_2 ϕ_{1,2}(λ),
(λ + ω_n + γ) ϕ_{1,n}(λ) = λ_{n−1} ϕ_{1,n−1}(λ) + μ_{n+1} ϕ_{1,n+1}(λ),  n ≥ 2,    (19)

and by (5) with j = 0, 1,

(λ + λ_0 + γ) π_{0,0}(λ) − 1 = μ_1 π_{0,1}(λ) + α/λ,
(λ + ω_1 + γ) π_{0,1}(λ) = λ_0 π_{0,0}(λ) + μ_2 π_{0,2}(λ) + β/λ,
(λ + ω_n + γ) π_{0,n}(λ) = λ_{n−1} π_{0,n−1}(λ) + μ_{n+1} π_{0,n+1}(λ),  n ≥ 2,    (20)

and

(λ + λ_0 + γ) π_{1,0}(λ) = μ_1 π_{1,1}(λ) + α/λ,
(λ + ω_1 + γ) π_{1,1}(λ) − 1 = λ_0 π_{1,0}(λ) + μ_2 π_{1,2}(λ) + β/λ,
(λ + ω_n + γ) π_{1,n}(λ) = λ_{n−1} π_{1,n−1}(λ) + μ_{n+1} π_{1,n+1}(λ),  n ≥ 2.    (21)
Let
ϕ_{0,n}(λ) = A(λ) π_{0,n}(λ) + B(λ) π_{1,n}(λ),  n ≥ 0.    (22)

Substituting (22) into (18) and using (20) and (21), we have

(λ + α A_{00}(λ)) A(λ) + α A_{10}(λ) B(λ) = λ,
β A_{01}(λ) A(λ) + (λ + β A_{11}(λ)) B(λ) = 0.    (23)

Indeed, by the first equality of (18),

(λ + λ_0 + β)[A(λ) π_{0,0}(λ) + B(λ) π_{1,0}(λ)] − 1 = μ_1 [A(λ) π_{0,1}(λ) + B(λ) π_{1,1}(λ)],

i.e.,

A(λ)[(λ + λ_0 + β) π_{0,0}(λ) − μ_1 π_{0,1}(λ)] + B(λ)[(λ + λ_0 + β) π_{1,0}(λ) − μ_1 π_{1,1}(λ)] = 1.

It follows from the first equality of (20) and the first equality of (21) that

(λ + α A_{00}(λ)) A(λ) + α A_{10}(λ) B(λ) = λ.
By the second equality of (18),
(λ + ω_1 + α)[A(λ) π_{0,1}(λ) + B(λ) π_{1,1}(λ)] = λ_0 [A(λ) π_{0,0}(λ) + B(λ) π_{1,0}(λ)] + μ_2 [A(λ) π_{0,2}(λ) + B(λ) π_{1,2}(λ)],

i.e.,

A(λ)[(λ + ω_1 + α) π_{0,1}(λ) − λ_0 π_{0,0}(λ) − μ_2 π_{0,2}(λ)] + B(λ)[(λ + ω_1 + α) π_{1,1}(λ) − λ_0 π_{1,0}(λ) − μ_2 π_{1,2}(λ)] = 0.

It follows from the second equality of (20) and the second equality of (21) that

β A_{01}(λ) A(λ) + (λ + β A_{11}(λ)) B(λ) = 0.

Therefore, (23) holds. It follows from (23) that

A(λ) = (λ + β A_{11}(λ)) / H(λ)  and  B(λ) = −β A_{01}(λ) / H(λ).
The other equalities of (18) also hold.
Let
ϕ_{1,n}(λ) = C(λ) π_{0,n}(λ) + D(λ) π_{1,n}(λ),  n ≥ 0.    (24)

Substituting (24) into (19) and using (20) and (21), we have

(λ + α − αλ π_{0,0}(λ)) C(λ) + α (1 − λ π_{1,0}(λ)) D(λ) = 0,
β (1 − λ π_{0,1}(λ)) C(λ) + (λ + β − βλ π_{1,1}(λ)) D(λ) = λ.    (25)
Indeed, by the second equality of (19),
(λ + ω_1 + α)[C(λ) π_{0,1}(λ) + D(λ) π_{1,1}(λ)] − 1 = λ_0 [C(λ) π_{0,0}(λ) + D(λ) π_{1,0}(λ)] + μ_2 [C(λ) π_{0,2}(λ) + D(λ) π_{1,2}(λ)],

i.e.,

[(λ + ω_1 + α) π_{0,1}(λ) − λ_0 π_{0,0}(λ) − μ_2 π_{0,2}(λ)] C(λ) + [(λ + ω_1 + α) π_{1,1}(λ) − λ_0 π_{1,0}(λ) − μ_2 π_{1,2}(λ)] D(λ) = 1.

It follows from the second equality of (20) and the second equality of (21) that

β A_{01}(λ) C(λ) + (λ + β A_{11}(λ)) D(λ) = λ.

By the first equality of (19),

(λ + λ_0 + β)[C(λ) π_{0,0}(λ) + D(λ) π_{1,0}(λ)] = μ_1 [C(λ) π_{0,1}(λ) + D(λ) π_{1,1}(λ)],

i.e.,

[(λ + λ_0 + β) π_{0,0}(λ) − μ_1 π_{0,1}(λ)] C(λ) + [(λ + λ_0 + β) π_{1,0}(λ) − μ_1 π_{1,1}(λ)] D(λ) = 0.

It follows from the first equality of (20) and the first equality of (21) that

(λ + α A_{00}(λ)) C(λ) + α A_{10}(λ) D(λ) = 0.

Therefore, (25) holds. It follows from (25) that

C(λ) = −α A_{10}(λ) / H(λ)  and  D(λ) = (λ + α A_{00}(λ)) / H(λ).
The other equalities of (19) also hold.
By (10) with j ≥ 2,

(λ + λ_0 + β) ϕ_{j,0}(λ) = μ_1 ϕ_{j,1}(λ),
(λ + ω_1 + α) ϕ_{j,1}(λ) = λ_0 ϕ_{j,0}(λ) + μ_2 ϕ_{j,2}(λ),
(λ + ω_n + γ) ϕ_{j,n}(λ) − δ_{j,n} = λ_{n−1} ϕ_{j,n−1}(λ) + μ_{n+1} ϕ_{j,n+1}(λ),  n ≥ 2,    (26)

and by (5) with j ≥ 2,

(λ + λ_0 + γ) π_{j,0}(λ) = μ_1 π_{j,1}(λ) + α/λ,
(λ + ω_1 + γ) π_{j,1}(λ) = λ_0 π_{j,0}(λ) + μ_2 π_{j,2}(λ) + β/λ,
(λ + ω_n + γ) π_{j,n}(λ) − δ_{j,n} = λ_{n−1} π_{j,n−1}(λ) + μ_{n+1} π_{j,n+1}(λ),  n ≥ 2.    (27)

Let

ϕ_{j,n}(λ) = D_j(λ) π_{j,n}(λ) + F_j(λ) π_{0,n}(λ) + G_j(λ) π_{1,n}(λ).    (28)

Substituting (28) into the last equality of (26), we have

D_j(λ)[(λ + ω_n + γ) π_{j,n}(λ) − λ_{n−1} π_{j,n−1}(λ) − μ_{n+1} π_{j,n+1}(λ)] − δ_{j,n}
 + F_j(λ)[(λ + ω_n + γ) π_{0,n}(λ) − λ_{n−1} π_{0,n−1}(λ) − μ_{n+1} π_{0,n+1}(λ)]
 + G_j(λ)[(λ + ω_n + γ) π_{1,n}(λ) − λ_{n−1} π_{1,n−1}(λ) − μ_{n+1} π_{1,n+1}(λ)] = 0,  n ≥ 2.    (29)

By the last equalities of (20), (21) and (27), we have D_j(λ) δ_{j,n} = δ_{j,n} for all n ≥ 2, and hence D_j(λ) = 1.
Substituting (28) into the first and second equalities of (26) and using (20) and (21), we have

(λ + α A_{00}(λ)) F_j(λ) + α A_{10}(λ) G_j(λ) = αλ π_{j,0}(λ) − α,
β A_{01}(λ) F_j(λ) + (λ + β A_{11}(λ)) G_j(λ) = βλ π_{j,1}(λ) − β.    (30)
Solving (30) yields (16) and (17). The proof is complete. □
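Theorem 2 can also be verified numerically on a truncated state space (λ_N = 0), where the linear systems above hold exactly row by row. The sketch below (Python with numpy; all rates are illustrative assumptions, not from the paper) builds Q and Q̃, inverts the two resolvents, and compares ϕ_{0,n}(λ) and ϕ_{1,n}(λ) with the closed forms of Theorem 2:

```python
import numpy as np

N, alpha, beta, lmb = 20, 0.4, 0.7, 0.6   # illustrative parameters
lam = [1.2] * N + [0.0]                   # lambda_i, truncated: lambda_N = 0
mu = [0.0] + [0.9] * N                    # mu_i (mu_0 unused)

# q-matrix Q of the catastrophe process on {0,...,N}
Q = np.zeros((N + 1, N + 1))
# q-matrix Q-tilde on S = {-2,-1,0,...,N}; columns 0,1 stand for -2,-1
Qt = np.zeros((N + 3, N + 3))
for i in range(N + 1):
    r = i + 2
    if i < N:
        Q[i, i + 1] += lam[i]; Qt[r, r + 1] += lam[i]
    if i > 0:
        Q[i, i - 1] += mu[i]; Qt[r, r - 1] += mu[i]
    if i >= 1:
        Q[i, 0] += alpha                  # alpha-catastrophe: i -> 0
        Qt[r, 0] += alpha                 # ... absorbed in -2
    if i == 0 or i >= 2:
        Q[i, 1] += beta                   # beta-catastrophe: i -> 1
        Qt[r, 1] += beta                  # ... absorbed in -1
    Q[i, i] -= Q[i].sum()
    Qt[r, r] -= Qt[r].sum()

Pi = np.linalg.inv(lmb * np.eye(N + 1) - Q)            # (pi_{j,n}(lambda))
Phi = np.linalg.inv(lmb * np.eye(N + 3) - Qt)[2:, 2:]  # (phi_{j,n}(lambda)), j,n >= 0

A = 1 - lmb * Pi                                       # A_{ij}(lambda)
H = ((lmb + alpha * A[0, 0]) * (lmb + beta * A[1, 1])
     - alpha * beta * A[1, 0] * A[0, 1]) / lmb         # H(lambda)

# Theorem 2, rows j = 0 and j = 1:
row0 = ((lmb + beta * A[1, 1]) * Pi[0] - beta * A[0, 1] * Pi[1]) / H
row1 = (-alpha * A[1, 0] * Pi[0] + (lmb + alpha * A[0, 0]) * Pi[1]) / H
print(np.max(np.abs(Phi[0] - row0)), np.max(np.abs(Phi[1] - row1)))
```

Both residuals stay at machine-precision level because the substitution argument in the proof uses only the column equations of the two resolvents, which the truncated matrices satisfy exactly.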
By Theorem 1, we know that
λ π_{j,n}(λ) = λ π̂_{j,n}(λ + γ) + α π̂_{0,n}(λ + γ) + β π̂_{1,n}(λ + γ).    (31)
Denote
a_n(λ) = 1 − α π̂_{0,n}(λ + γ) − β π̂_{1,n}(λ + γ),  n ≥ 0.    (32)
Then A_{jn}(λ) can be represented as

A_{jn}(λ) = a_n(λ) − λ π̂_{j,n}(λ + γ).
Hence, after some algebra, H(λ) can be represented as

H(λ) = αβ [a_0(λ) π̂_{0,1}(λ + γ) + a_1(λ) π̂_{1,0}(λ + γ) − λ π̂_{1,0}(λ + γ) π̂_{0,1}(λ + γ)] + α a_0(λ) β(λ) + β a_1(λ) α(λ) + λ α(λ) β(λ),    (33)

where α(λ) = 1 − α π̂_{0,0}(λ + γ) and β(λ) = 1 − β π̂_{1,1}(λ + γ). Indeed,

λ H(λ) = (α a_0(λ) + λ α(λ))(β a_1(λ) + λ β(λ)) − αβ (a_0(λ) − λ π̂_{1,0}(λ + γ))(a_1(λ) − λ π̂_{0,1}(λ + γ))
 = αβ a_0(λ) a_1(λ) + αλ a_0(λ) β(λ) + βλ a_1(λ) α(λ) + λ² α(λ) β(λ) − αβ a_0(λ) a_1(λ) + αβλ a_0(λ) π̂_{0,1}(λ + γ) + αβλ a_1(λ) π̂_{1,0}(λ + γ) − αβλ² π̂_{1,0}(λ + γ) π̂_{0,1}(λ + γ)
 = λ αβ [a_0(λ) π̂_{0,1}(λ + γ) + a_1(λ) π̂_{1,0}(λ + γ) − λ π̂_{1,0}(λ + γ) π̂_{0,1}(λ + γ)] + λ [α a_0(λ) β(λ) + β a_1(λ) α(λ) + λ α(λ) β(λ)],
which implies (33).
Theorem 3.
Let Φ(λ) = (ϕ_{j,n}(λ) : j, n ∈ S) be the Q̃-resolvent and Π̂(λ) = (π̂_{j,n}(λ) : j, n ∈ Z_+) be the Q̂-resolvent. Then

ϕ_{j,n}(λ) = π̂_{j,n}(λ + γ) + [U_j(λ) π̂_{0,n}(λ + γ) + V_j(λ) π̂_{1,n}(λ + γ)] / H(λ),  j, n ≥ 0,    (34)

where

U_j(λ) = α (λ + α + β) β(λ) π̂_{j,0}(λ + γ) + αβ (λ + α + β) π̂_{1,0}(λ + γ) π̂_{j,1}(λ + γ)

and

V_j(λ) = β (λ + α + β) α(λ) π̂_{j,1}(λ + γ) + αβ (λ + α + β) π̂_{0,1}(λ + γ) π̂_{j,0}(λ + γ).
Proof. 
By (11), (12) and Theorem 1, we know that for any j, n ≥ 0,

A_{jn}(λ) = 1 − λ π̂_{j,n}(λ + γ) − α π̂_{0,n}(λ + γ) − β π̂_{1,n}(λ + γ)
 = λ [π̂_{0,n}(λ + γ) − π̂_{j,n}(λ + γ)] + A_{0n}(λ)
 = λ [π̂_{1,n}(λ + γ) − π̂_{j,n}(λ + γ)] + A_{1n}(λ).

Note that the right-hand sides of (16) and (17) are well defined for j = 0, 1, so we can define F_j(λ) and G_j(λ) for j = 0, 1 as well. Hence, it follows from Theorem 2 that for any j ≥ 0,

λ H(λ) F_j(λ) = αβ A_{10}(λ) A_{01}(λ) + αβλ A_{10}(λ)[π̂_{0,1}(λ + γ) − π̂_{j,1}(λ + γ)] − α (λ + β A_{11}(λ)) A_{00}(λ) − αλ (λ + β A_{11}(λ))[π̂_{0,0}(λ + γ) − π̂_{j,0}(λ + γ)]

and

λ H(λ) G_j(λ) = −βλ A_{01}(λ) + αβλ A_{01}(λ)[π̂_{0,0}(λ + γ) − π̂_{j,0}(λ + γ)] − βλ (λ + α A_{00}(λ))[π̂_{0,1}(λ + γ) − π̂_{j,1}(λ + γ)].
Therefore, after some algebra, one can get

λ H(λ)[F_j(λ) + (α/λ)(1 + F_j(λ) + G_j(λ))] = αλ (λ + α + β)(1 − β π̂_{1,1}(λ + γ)) π̂_{j,0}(λ + γ) + αβλ (λ + α + β) π̂_{1,0}(λ + γ) π̂_{j,1}(λ + γ) =: λ U_j(λ),  j ≥ 0.    (35)

Similarly,

λ H(λ)[G_j(λ) + (β/λ)(1 + F_j(λ) + G_j(λ))] = βλ (λ + α + β)(1 − α π̂_{0,0}(λ + γ)) π̂_{j,1}(λ + γ) + αβλ (λ + α + β) π̂_{0,1}(λ + γ) π̂_{j,0}(λ + γ) =: λ V_j(λ),  j ≥ 0.    (36)
By Theorems 1 and 2, for any j ≥ 2, n ≥ 0,

ϕ_{j,n}(λ) = π_{j,n}(λ) + F_j(λ) π_{0,n}(λ) + G_j(λ) π_{1,n}(λ)
 = π̂_{j,n}(λ + γ) + [α π̂_{0,n}(λ + γ) + β π̂_{1,n}(λ + γ)]/λ + F_j(λ){π̂_{0,n}(λ + γ) + [α π̂_{0,n}(λ + γ) + β π̂_{1,n}(λ + γ)]/λ} + G_j(λ){π̂_{1,n}(λ + γ) + [α π̂_{0,n}(λ + γ) + β π̂_{1,n}(λ + γ)]/λ}
 = π̂_{j,n}(λ + γ) + [F_j(λ) + (α/λ)(1 + F_j(λ) + G_j(λ))] π̂_{0,n}(λ + γ) + [G_j(λ) + (β/λ)(1 + F_j(λ) + G_j(λ))] π̂_{1,n}(λ + γ),

where F_j(λ) and G_j(λ) are given in (16) and (17). By (35) and (36), we know that (34) holds for j ≥ 2, n ≥ 0.
As for j = 0, by (13) and Theorem 1,

ϕ_{0,n}(λ) = [λ (λ + β A_{11}(λ)) π_{0,n}(λ) − βλ A_{01}(λ) π_{1,n}(λ)] / (λ H(λ))
 = {(λ + β A_{11}(λ))[(λ + α) π̂_{0,n}(λ + γ) + β π̂_{1,n}(λ + γ)] − β A_{01}(λ)[(λ + β) π̂_{1,n}(λ + γ) + α π̂_{0,n}(λ + γ)]} / (λ H(λ))
 = {[(λ + α)(λ + β A_{11}(λ)) − αβ A_{01}(λ)] / (λ H(λ))} π̂_{0,n}(λ + γ) + {β [λ + β A_{11}(λ) − (λ + β) A_{01}(λ)] / (λ H(λ))} π̂_{1,n}(λ + γ)
 = π̂_{0,n}(λ + γ) + {[(λ + α)(λ + β A_{11}(λ)) − αβ A_{01}(λ) − λ H(λ)] / (λ H(λ))} π̂_{0,n}(λ + γ) + {β [λ + β A_{11}(λ) − (λ + β) A_{01}(λ)] / (λ H(λ))} π̂_{1,n}(λ + γ).

By the definition of H(λ),

(λ + α)(λ + β A_{11}(λ)) − αβ A_{01}(λ) − λ H(λ)
 = (λ + α)(λ + β A_{11}(λ)) − αβ A_{01}(λ) − (λ + α A_{00}(λ))(λ + β A_{11}(λ)) + αβ A_{10}(λ) A_{01}(λ)
 = α (λ + β A_{11}(λ))(1 − A_{00}(λ)) − αβ A_{01}(λ)(1 − A_{10}(λ)).

On the other hand, after some algebra, one can see that

λ U_0(λ) = λ H(λ)[F_0(λ) + (α/λ)(1 + F_0(λ) + G_0(λ))] = αβ A_{10}(λ) A_{01}(λ) − α (λ + β A_{11}(λ)) A_{00}(λ) + α (λ + β A_{11}(λ)) − αβ A_{01}(λ) = α (λ + β A_{11}(λ))(1 − A_{00}(λ)) − αβ A_{01}(λ)(1 − A_{10}(λ))

and

λ V_0(λ) = λ H(λ)[G_0(λ) + (β/λ)(1 + F_0(λ) + G_0(λ))] = αβ A_{01}(λ) A_{00}(λ) − β (λ + α A_{00}(λ)) A_{01}(λ) + β (λ + β A_{11}(λ)) − β² A_{01}(λ) = β [λ + β A_{11}(λ) − (λ + β) A_{01}(λ)].
Therefore, (34) holds for j = 0 . By a similar argument, (34) also holds for j = 1 . The proof is complete. □
We now consider the probability distribution of C_j and the related probabilities P(C_j ≤ t, C_{j,0} < C_{j,1}) and P(C_j ≤ t, C_{j,1} < C_{j,0}). It is easy to see that P(C_j ≤ t, C_{j,k} < C_{j,1−k}) is differentiable in t for k = 0, 1. Let d_{j,k}(t) = (d/dt) P(C_j ≤ t, C_{j,k} < C_{j,1−k}) for k = 0, 1. Also, let Δ_{j,k}(λ) denote the Laplace transform of d_{j,k}(t) for k = 0, 1, and let Δ_j(λ) denote the Laplace transform of d_j(t).
Theorem 4.
For any j ≥ 0, we have

Δ_{j,0}(λ) = [α (λ + β)(1 − λ ϕ_{j,0}(λ)) − αβ (1 − λ ϕ_{j,1}(λ))] / (λ² + (α + β) λ),

Δ_{j,1}(λ) = [β (λ + α)(1 − λ ϕ_{j,1}(λ)) − αβ (1 − λ ϕ_{j,0}(λ))] / (λ² + (α + β) λ)

and

Δ_j(λ) = [α (1 − λ ϕ_{j,0}(λ)) + β (1 − λ ϕ_{j,1}(λ))] / (λ + α + β),

where ϕ_{j,0}(λ) and ϕ_{j,1}(λ) are given in Theorem 3. In particular,

P(C_{j,0} < C_{j,1}) = α [1 + β (ϕ_{j,1}(0) − ϕ_{j,0}(0))] / (α + β),

P(C_{j,1} < C_{j,0}) = β [1 + α (ϕ_{j,0}(0) − ϕ_{j,1}(0))] / (α + β).
Proof. 
By the definitions of {M_t : t ≥ 0} and {N_t : t ≥ 0}, we know that for any j ≥ 0,

P(C_{j,0} ≤ t, C_{j,0} < C_{j,1}) = ∫_0^t d_{j,0}(τ) dτ = h_{j,−2}(t),  P(C_{j,1} ≤ t, C_{j,1} < C_{j,0}) = ∫_0^t d_{j,1}(τ) dτ = h_{j,−1}(t)

and

P(C_j ≤ t) = ∫_0^t d_j(τ) dτ = h_{j,−2}(t) + h_{j,−1}(t).

Therefore, d_{j,0}(t) = h′_{j,−2}(t), d_{j,1}(t) = h′_{j,−1}(t) and d_j(t) = h′_{j,−2}(t) + h′_{j,−1}(t). Hence,

Δ_{j,0}(λ) = λ ϕ_{j,−2}(λ),  Δ_{j,1}(λ) = λ ϕ_{j,−1}(λ)

and

Δ_j(λ) = λ ϕ_{j,−2}(λ) + λ ϕ_{j,−1}(λ).
By (10) of Lemma 2, we know that

(λ + α) λ ϕ_{j,−2}(λ) + αλ ϕ_{j,−1}(λ) = α (1 − λ ϕ_{j,0}(λ))

and

βλ ϕ_{j,−2}(λ) + (λ + β) λ ϕ_{j,−1}(λ) = β (1 − λ ϕ_{j,1}(λ)).

Therefore, by the first two equalities of (10),

Δ_{j,0}(λ) = λ ϕ_{j,−2}(λ) = α [(λ + β)(1 − λ ϕ_{j,0}(λ)) − β (1 − λ ϕ_{j,1}(λ))] / (λ² + (α + β) λ),

Δ_{j,1}(λ) = λ ϕ_{j,−1}(λ) = β [(λ + α)(1 − λ ϕ_{j,1}(λ)) − α (1 − λ ϕ_{j,0}(λ))] / (λ² + (α + β) λ)

and hence

Δ_j(λ) = [α (1 − λ ϕ_{j,0}(λ)) + β (1 − λ ϕ_{j,1}(λ))] / (λ + α + β).

Noting that P(C_j < ∞) = Δ_j(0) = 1, the last two assertions hold by letting λ → 0. The proof is complete. □
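Theorem 4's absorption probabilities can be checked numerically. For a truncated chain (λ_N = 0) the identities hold exactly: ϕ_{j,n}(0) is the Green's matrix (−T)^{-1} of the transient part T of Q̃, and P(C_{j,0} < C_{j,1}) is the probability of absorption in −2. The following sketch (Python with numpy; illustrative rates, not from the paper) compares the direct absorption probability with the formula of Theorem 4:

```python
import numpy as np

N, alpha, beta = 30, 0.4, 0.7            # illustrative parameters
lam = [1.2] * N + [0.0]                  # lambda_i, truncated: lambda_N = 0
mu = [0.0] + [0.9] * N                   # mu_i (mu_0 unused)

# Transient part T of Q-tilde on {0,...,N} plus absorption-rate vectors
T = np.zeros((N + 1, N + 1))
to_m2 = np.zeros(N + 1)                  # rate into absorbing -2 (alpha-type)
to_m1 = np.zeros(N + 1)                  # rate into absorbing -1 (beta-type)
for i in range(N + 1):
    if i < N:
        T[i, i + 1] = lam[i]
    if i > 0:
        T[i, i - 1] = mu[i]
    to_m2[i] = alpha if i >= 1 else 0.0
    to_m1[i] = beta if (i == 0 or i >= 2) else 0.0
    T[i, i] = -(T[i].sum() + to_m2[i] + to_m1[i])

G = np.linalg.inv(-T)                    # Green's matrix: G[j, n] = phi_{j,n}(0)
p_alpha_first = G @ to_m2                # P(C_{j,0} < C_{j,1}) computed directly

# Theorem 4: P(C_{j,0} < C_{j,1}) = alpha*[1 + beta*(phi_{j,1}(0) - phi_{j,0}(0))]/(alpha+beta)
formula = alpha * (1 + beta * (G[:, 1] - G[:, 0])) / (alpha + beta)
print(np.max(np.abs(p_alpha_first - formula)))
```

Here `G @ to_m2` is the standard absorption-probability formula for a killed chain; the agreement with the closed form is at machine precision.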
We now consider the mathematical expectation and variance of C j .
Theorem 5.
For any j ≥ 0,

E[C_j] = [1 + α ϕ_{j,0}(0) + β ϕ_{j,1}(0)] / (α + β)

and

E[C_j²] = 2 [1 + α ϕ_{j,0}(0) + β ϕ_{j,1}(0) − (α + β)(α ϕ′_{j,0}(0) + β ϕ′_{j,1}(0))] / (α + β)²,

where ϕ_{j,0}(λ) and ϕ_{j,1}(λ) are given in Theorem 3.
Proof. 
By Theorem 4, we have

(λ + α + β) Δ_j(λ) = α (1 − λ ϕ_{j,0}(λ)) + β (1 − λ ϕ_{j,1}(λ)).    (37)

Differentiating the above equality yields

(λ + α + β) Δ′_j(λ) + Δ_j(λ) = −α (λ ϕ_{j,0}(λ))′ − β (λ ϕ_{j,1}(λ))′.

Letting λ = 0 and noting that Δ_j(0) = 1, we have

E[C_j] = −Δ′_j(0) = [1 + α ϕ_{j,0}(0) + β ϕ_{j,1}(0)] / (α + β).

Differentiating (37) twice yields

(λ + α + β) Δ″_j(λ) + 2 Δ′_j(λ) = −α (λ ϕ_{j,0}(λ))″ − β (λ ϕ_{j,1}(λ))″ = −α [λ ϕ″_{j,0}(λ) + 2 ϕ′_{j,0}(λ)] − β [λ ϕ″_{j,1}(λ) + 2 ϕ′_{j,1}(λ)].

Letting λ = 0 in the above equality yields

(α + β) Δ″_j(0) + 2 Δ′_j(0) = −2α ϕ′_{j,0}(0) − 2β ϕ′_{j,1}(0).

Therefore,

E[C_j²] = Δ″_j(0) = 2 (−Δ′_j(0) − α ϕ′_{j,0}(0) − β ϕ′_{j,1}(0)) / (α + β) = 2 [1 + α ϕ_{j,0}(0) + β ϕ_{j,1}(0) − (α + β)(α ϕ′_{j,0}(0) + β ϕ′_{j,1}(0))] / (α + β)².
The proof is complete. □
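Theorem 5's mean formula can likewise be checked against the mean absorption time of a truncated chain: E[C_j] = Σ_n ϕ_{j,n}(0), i.e., the row sums of the Green's matrix. A short sketch (Python with numpy; the same illustrative rates as before, not from the paper):

```python
import numpy as np

N, alpha, beta = 30, 0.4, 0.7            # illustrative parameters
lam = [1.2] * N + [0.0]                  # lambda_i, truncated: lambda_N = 0
mu = [0.0] + [0.9] * N                   # mu_i (mu_0 unused)

# Transient part T of Q-tilde on {0,...,N}
T = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    if i < N:
        T[i, i + 1] = lam[i]
    if i > 0:
        T[i, i - 1] = mu[i]
    out = (alpha if i >= 1 else 0.0) + (beta if (i == 0 or i >= 2) else 0.0)
    T[i, i] = -(T[i].sum() + out)

G = np.linalg.inv(-T)                    # G[j, n] = phi_{j,n}(0)
mean_direct = G.sum(axis=1)              # E[C_j] = sum_n int_0^inf h_{j,n}(t) dt
mean_theorem5 = (1 + alpha * G[:, 0] + beta * G[:, 1]) / (alpha + beta)
print(np.max(np.abs(mean_direct - mean_theorem5)))
```

The agreement is exact up to rounding: multiplying G by the exit-rate vector shows (α + β) Σ_n G[j,n] − α G[j,0] − β G[j,1] = 1, which is precisely the formula of Theorem 5.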
Finally, if α = 0 or β = 0, we recover the following result, which is due to Di Crescenzo et al. [10].
Corollary 1. (i) If β = 0, then for any j ≥ 0,

E[C_j] = 1/α + π̂_{j,0}(α) / (1 − α π̂_{0,0}(α))

and

E[C_j²] = (2/α²) [1 + α π̂_{j,0}(α)/(1 − α π̂_{0,0}(α)) − α² π̂′_{j,0}(α)/(1 − α π̂_{0,0}(α)) − α³ π̂_{j,0}(α) π̂′_{0,0}(α)/(1 − α π̂_{0,0}(α))²].

(ii) If α = 0, then for any j ≥ 0,

E[C_j] = 1/β + π̂_{j,1}(β) / (1 − β π̂_{1,1}(β))

and

E[C_j²] = (2/β²) [1 + β π̂_{j,1}(β)/(1 − β π̂_{1,1}(β)) − β² π̂′_{j,1}(β)/(1 − β π̂_{1,1}(β)) − β³ π̂_{j,1}(β) π̂′_{1,1}(β)/(1 − β π̂_{1,1}(β))²].
Proof. 
If β = 0, then by Theorem 3,

ϕ_{j,0}(λ) = π̂_{j,0}(λ + α) + α π̂_{j,0}(λ + α) π̂_{0,0}(λ + α) / (1 − α π̂_{0,0}(λ + α)) = π̂_{j,0}(λ + α) / (1 − α π̂_{0,0}(λ + α)).

Therefore,

ϕ_{j,0}(0) = π̂_{j,0}(α) / (1 − α π̂_{0,0}(α))

and

ϕ′_{j,0}(0) = π̂′_{j,0}(α) / (1 − α π̂_{0,0}(α)) + α π̂_{j,0}(α) π̂′_{0,0}(α) / (1 − α π̂_{0,0}(α))².

Hence, by Theorem 5,

E[C_j] = [1 + α ϕ_{j,0}(0)] / α = 1/α + π̂_{j,0}(α) / (1 − α π̂_{0,0}(α))

and

E[C_j²] = (2/α²) [1 + α ϕ_{j,0}(0) − α² ϕ′_{j,0}(0)] = (2/α²) [1 + α π̂_{j,0}(α)/(1 − α π̂_{0,0}(α)) − α² π̂′_{j,0}(α)/(1 − α π̂_{0,0}(α)) − α³ π̂_{j,0}(α) π̂′_{0,0}(α)/(1 − α π̂_{0,0}(α))²].
(i) is proved. The proof of (ii) is similar. □

Acknowledgments

This work is supported by the National Natural Science Foundation of China (No. 11771452, No. 11971486).

References

  1. Anderson, W. Continuous-Time Markov Chains: An Applications-Oriented Approach. New York: Springer-Verlag, 1991.
  2. Artalejo, J. G-networks: a versatile approach for work removal in queueing networks. Eur. J. Oper. Res. 2000, 126: 233-249.
  3. Asmussen, S. Applied Probability and Queues, 2nd edn. New York: John Wiley, 2003.
  4. Bayer, N. and Boxma O.J. Wiener-Hopf analysis of an M/G/1 queue with negative customers and of a related class of random walks. Queueing Systems, 1996, 23: 301-316.
  5. Chen, A.Y., Zhang, H.J. and Liu, K. Birth-death processes with disasters and instantaneous resurrection. Adv. Appl. Probab. 2004, 36: 267-292.
  6. Chen, A.Y. , Pollett P., Li J.P. and Zhang H.J. Markovian bulk-arrival and bulk-service queues with state-dependent control. Queueing Systems, 2: 64.
  7. Chen, A.Y. and Renshaw, E. The M/M/1 queue with mass exodus and mass arrivals when empty. J. Appl. Probab. 1997, 34: 192-207.
  8. Chen, A.Y. , Renshaw E. Markov bulk-arriving queues with state-dependent control at idle time. Adv. Appl. Probab. 2004, 36: 499-524.
  9. Chen, M.F. Eigenvalues, Inequalities, and Ergodic Theory. London: Springer-Verlag, 2004.
  10. Di Crescenzo, A. , Giorno V., Nobile A.G. and Ricciardi L.M. A note on birth-death processes with catastrophes. Statistics and Probability Letters, 2008, 78: 2248-2257.
  11. Dudin, A. and Karolik A. BMAP/SM/1 queue with Markovian input of disasters and non-instantaneous recovery. Performance Evaluation, 2001, 45: 19-32.
  12. Economou, A. and Fakinos, D. A continuous-time Markov chain under the influence of a regulating point process and applications in stochastic models with catastrophes. European J. Oper. Res. 2003, 149: 625-640.
  13. Gelenbe, E. Product-form queueing networks with negative and positive customers. J. Appl. Probab. 1991,28: 656-663.
  14. Gelenbe E, Glynn P. and Sigman K. Queues with negative arrivals. J. Appl. Probab. 1991, 28: 245-250.
  15. Jain, G. and Sigman K. A Pollaczek-Khintchine formula for M/G/1 queues with disasters. J. Appl. Probab. 1996, 33: 1191-1200.
  16. Li, J.P. , Zhang L.N. Controlled Markovian bulk-arrival and bulk-service queues with catastrophes. Scientia Sinica(Mathematica), 2015, 45(5): 527-538.
  17. Li, J.P. , Zhang L.N. MX/M/c Queue with catastrophes and state-dependent control at idle time. Frontiers of Mathematics in China, 2017, 12(6): 1427-1439.
  18. Pakes, A.G. Killing and resurrection of Markov processes. Commun. Statist. Stochastic Models, 1997, 13: 255-269.
  19. Zeifman, A. and Korotysheva A. Perturbation Bounds for Mt/Mt/N Queue with Catastrophes. Stochastic Models, 2012, 28: 49-62.
  20. Zhang, L.N.; Li, J.P. The M/M/c queue with mass exodus and mass arrivals when empty. Journal of Applied Probability, 2015, 52(4): 990-1002.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.