1. Introduction
The goal of this paper is to establish positive recurrence of the model under certain assumptions; the Foster condition will be the main tool. The service intensity is assumed only partially, at zero (as a lower left derivative value at zero of the “integrated intensity”); in addition, an integral-type condition on the “integrated intensity” over intervals of a certain length is assumed.
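For orientation only, we recall one standard generic form of the Foster criterion for a discrete-time Markov chain $(X_k)$; the precise assumptions and the Lyapunov function used in this paper are those introduced in Section 2 and in the proof of Theorem 1. If there exist a function $L\ge 0$, a set $K$, and constants $\varepsilon>0$, $C<\infty$ such that
\[
E\bigl[L(X_{k+1})-L(X_k)\mid X_k=x\bigr]\le -\varepsilon \quad (x\notin K),
\qquad
\sup_{x\in K} E\bigl[L(X_{k+1})\mid X_k=x\bigr]\le C,
\]
then the first hitting time of $K$ has a finite expectation from any initial state $x$ with $L(x)<\infty$ (of order $L(x)/\varepsilon$ outside $K$), which is the key step towards positive recurrence.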
For the recent history of the topic see [1]–[7], [14] and many more; see also [15, 18]. One of the reasons – although not the only one – why various versions of this system are so popular is their intrinsic link to important topics of mathematical insurance theory, see [3].
In this paper we return to the less involved single–server system where the intensity of arrivals may only depend on the number of customers in the system, with the goal of reviewing conditions for its positive recurrence. The importance of this property may be illustrated, for example, by the publications [1, 12], where the investigation of the model assumes that it is in the “steady state”, which is a synonym of stationarity. As is well known, positive recurrence along with some mild mixing or coupling properties guarantees the existence of a stationary regime of the system. One particular aspect of this issue is how to obtain bounds without assuming the existence of a service intensity in the model and, more generally, in Erlang – Sevastyanov type systems. Certain results in this direction were recently established in [19] for a slightly different model. Still, in [19] it is essential that the absolutely continuous part of the distribution function F (in our notation) be non-degenerate; in the present paper this is not required and the approach is different.
Note that in such a model certain close results may be obtained by the methods of regenerative processes if it is assumed that the same distribution function F (see below) has a sufficient number of finite moments. However, our conditions and methods are different. The main (moderate) hope of the author is that this approach may also prove useful in studying ergodic properties of Erlang – Sevastyanov type models, as happened with the earlier results and approaches based on the intensity of service as in [15, 18], successfully applied in [16]. The present paper is an initial attempt towards the programme of developing tools that could help attack the problem outlined recently in [17].
The paper consists of the introduction in Section 1, the setting and the main result in Section 2, two simple auxiliary lemmata in Section 3, the proof of the main result in Section 4, and two simple examples in Section 5 comparing the sufficient conditions of Theorem 1 with conditions stated in terms of the service intensity in the case where the latter exists.
4. Proof of Theorem 1
0. The proof will be split into several steps. We shall consider the embedded Markov chain, namely, the process at times , and it will be shown that this process hits some suitable compact set around the “zero state” within a time which has a finite expectation. From this property the main result will follow. The reader is warned that after this first hit the definition of the embedded Markov chain will change, as further times may become random and possibly non-integer; see step 4 of this proof.
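Schematically (in illustrative notation only, which does not coincide with the notation of the subsequent steps): if $\widehat X_k$ denotes the embedded chain and $K$ the compact set in question, the quantity to be controlled is the first hitting time
\[
\widehat\tau_K := \inf\{k\ge 1:\ \widehat X_k\in K\},
\]
and the claim is that $E\,\widehat\tau_K$ is finite, with an explicit bound in terms of the initial data.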
The main goal is to establish the bound (7), from which the estimate (8) follows as a corollary.
1. Let us choose so that (see condition (4)). NB: We highlight that this value will be fixed for what follows and will not tend to zero.
Once it is chosen, let (see (2) for the definition of ) and let us choose such that

Since as , this is possible for any . Let us introduce an auxiliary stopping time
Denote

The function L will serve as a Lyapunov function outside the compact set.
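For orientation, the mechanism behind such a Lyapunov function is standard; stated generically (not in the precise notation of this proof): if, starting from $X_0=x$ and on the event $\{\tau>k\}$, where $\tau$ is the first hitting time of a set $K$, one has the drift bound $E\bigl[L(X_{k+1})-L(X_k)\mid\mathcal F_k\bigr]\le-\varepsilon$ with some $\varepsilon>0$ and $L\ge 0$, then
\[
0\le E\,L(X_{k\wedge\tau}) \le L(x)-\varepsilon\,E\,(k\wedge\tau),
\qquad\text{whence}\qquad
E_x\,\tau \le L(x)/\varepsilon
\]
by the monotone convergence theorem.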
First of all, we are going to estimate the first moment of , namely, to prove that there exists such that

Recall that and highlight that the definition of is quite different from that of .
Let . We have for ,

where is a local martingale (see, e.g., [13]), and

It is assumed that by definition. The martingale is, in fact, a “normal”, that is, non-local one because, due to the assumptions, the expectations of all terms in the integral version of (16) are finite. The following bound will be established:

with some , if .
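For orientation, a decomposition of this kind is the standard Dynkin (Itô-type) formula for a Markov process, written here only schematically and not in the precise notation of the present model: for a suitable function $L$ in the domain of the (extended) generator $\mathcal G$,
\[
L(X_t) = L(X_0) + \int_0^t \mathcal G L(X_s)\,ds + M_t,
\]
where $M$ is a local martingale; that $M$ is here a true martingale is justified in the text above by the finiteness of the expectations of all terms of the decomposition.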
In order to evaluate , let us introduce a sequence of stopping times. Let

To evaluate , using the identity

which holds provided that , let us estimate,

Let us estimate the term . We have (recall that by definition),

Here the complementary part of the integral was just dropped.
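The elementary monotonicity presumably used here is the following: for a nonnegative integrand $f\ge 0$ and measurable sets $A\subset B$,
\[
\int_A f\,d\mu \;\le\; \int_B f\,d\mu,
\]
so omitting (“dropping”) part of the domain of integration yields a lower bound for the integral, equivalently, an upper bound for its negative.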
Further, it will be shown that

To this end, let us introduce by induction the sequence of stopping times,

Note that the component might only have finitely many jumps down on any finite interval. The times of jumps on the interval are exactly the times , and possibly the last jump down on this interval may or may not occur at . In any case, clearly,
So, we have,

where for each outcome this series is almost surely a finite sum. On each interval we may write down

This is by virtue of assumption (5) and because at each stopping time which is less than 1 we have . If , then both sides of the latter inequality equal zero, so that the inequality still holds true. Therefore, taking the sum over n, we obtain (21), just without in the left hand side. This multiplier in the right hand side guarantees that its presence in the left hand side of (21) still leads to a valid inequality, which means that the bound (21) is justified.
It follows from (20) and (21) that

Due to Lemma 1, if then (see (12))

(NB: In fact, at least n jobs should be completed, the first one on ; however, we prefer to have a bound independent of x. In any case, this does not change the scheme of the proof.) Likewise, if , then

due to the choice of , see (13).

Recall that was chosen so that , see (11). Hence,
Denote

Thus, for any we obtain,

The bound (17) follows with a constant C which may be evaluated via .
2. So, with the choice of and according to (12) and (13), respectively, we may write,

The event may be equivalently expressed as . Hence, the latter inequality may also be rewritten in the form suitable for the induction:
Similarly, may be equivalently expressed as . So, we get

Hence, by taking expectations we obtain,

Similarly, by induction (in what follows the notation is used),

Due to the elementary bound , this implies,
Summing up and dropping the negative term in the right hand side, we obtain

for any . So, by the monotone convergence theorem,

By virtue of the well-known relation for the expectation of , the bound (23) implies the following inequality with ,

In particular, this bound signifies that
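For reference, the “well-known relation” for the expectation of a nonnegative integer-valued random variable $\tau$ alluded to above is, presumably, the tail-sum formula
\[
E\,\tau \;=\; \sum_{k\ge 1} P(\tau\ge k) \;=\; \sum_{k\ge 0} P(\tau> k),
\]
which converts a bound on the tail probabilities, such as (23), into a bound on the expectation.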
3. Now, once the bound for the expected value of is established, we are ready to explain in detail how to get a bound for . The rest of the proof is devoted to this implication, with the last sentences related to the corollary about the invariant measure and convergence to it.
At , the process attains the set , while ; hence, both

By definition, the random variable is the first integer k where simultaneously

Therefore, at we have either , or , or both. If there are no completed jobs on , then may only increase, or, at least, stay the same on this interval, while certainly increases. Therefore, may only be achieved by at least one completed job; it would mean a jump down by one of the n-component and simultaneously a jump down to zero of the x-component. Then at k we obtain , which certainly makes it less than , irrespective of whether or not there were other arrivals or completed jobs on (recall that in addition to inequality (13) we assumed that ).
Now, given and , by virtue of Lemma 2, for any we have, with any ,

where

Here T is any positive integer and

Note that, of course, for non-integer values of there is a similar bound, but it looks a bit more involved, and using integer values of T suffices for the proof. Recall that it was assumed in (2) that , and it follows from the first line of (2) that . Inequality (25) implies that

NB: Here the standard notation for homogeneous Markov processes is used, which means that after the stopping time τ is, actually, the value .
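For the reader’s convenience, the standard convention referred to here may be stated generically as follows: for a homogeneous Markov process X and a stopping time $\tau$, on the event $\{\tau<\infty\}$,
\[
E\bigl[f(X_{\tau+t})\mid\mathcal F_\tau\bigr] \;=\; E_{X_\tau} f(X_t),
\]
so that an expression of the form $E_{X_\tau}(\cdot)$ is understood as the conditional expectation given $\mathcal F_\tau$ for the process restarted from the (random) state $X_\tau$.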
Note that for any the event implies that

Hence, we conclude,

and, therefore, for any x,
4. Consider now the process X started at time from state with and .

Let and let us stop the process either at , or at , whichever happens earlier. In other words, consider the stopping time

The event implies that the process does not exceed the level on the interval . On the other hand, according to the arguments of step 3 we have,
Let and, further, let us define two sequences of stopping times by induction,

Note that all integers here in expressions like and stand for upper indices, not for powers. Let us highlight that the stopping time equals plus some integer, but may or may not be an integer itself. All these stopping times are finite with probability one, and, moreover,

Also, almost surely

NB: If , then, in general, we may not claim that , but this is not necessary for our aims.
Using the strong Markov property at time (see [9]), we obtain by induction,
5. Denote

Also, by induction it follows from (24) and from the elementary bound (which holds by definition) that there exists a constant C such that

Using the representation

and due to (30), we estimate,

Now we are going to estimate this sum by a geometric type series in combination with the bounds (31) and (32). A minor issue is that we are not able to use Hölder’s or the Cauchy – Bunyakovsky – Schwarz inequality here, because we only possess a first moment bound for , while higher moments are not available. This minor obstacle is resolved in the next step of the proof by the following arguments using conditional expectations with respect to suitable sigma-algebras.
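Schematically (in generic notation, not that of the proof), the conditioning argument replacing Hölder’s inequality may be illustrated as follows: if $\eta\ge 0$ is $\mathcal G$-measurable and $\xi\ge 0$ satisfies $E[\xi\mid\mathcal G]\le C$ almost surely, then
\[
E(\xi\,\eta) \;=\; E\bigl(\eta\,E[\xi\mid\mathcal G]\bigr) \;\le\; C\,E\,\eta,
\]
so only a (conditional) first moment bound on $\xi$ is needed, and no higher moments enter.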
6. We have,

Let us investigate the last term. Since each random variable is -measurable for any , and because, due to the strong Markov property and the bound (31),

Moreover, by virtue of the inequality (15) and since by definition all , we have for ,

with some finite constant C.
Let us inspect the previous term. Using that and that all random variables are -measurable for any , we obtain by induction,

Also by induction we find that a similar upper bound with the multiplier holds for each term in the sum in the right hand side of (33), for , and . Indeed, using the identity

we have for ,

Here it was used that the random variable is -measurable.
Further, by virtue of the bound (31),

We used the bound (35) with replaced by its upper bound T (as in the calculation leading to (35)) and with k replaced by i.
The first term is estimated similarly, with the only change that instead of the constant C we obtain a multiplier which is a function of the initial data and which makes the resulting bound non-uniform with respect to the initial data:
Overall, collecting the bounds (35)–(37), we get,

Therefore, it follows that

with some new constant C, as required.
Existence of the invariant measure for the model and the inequality (8) follow straightforwardly from the established positive recurrence (7) and from the coupling technique in the usual way; see, for example, [18]. Theorem 1 is proved. □