
Theorem on the Structure of the Fractionally Linear Functional Extremal Function


A peer-reviewed article of this preprint also exists.

This version is not peer-reviewed

Submitted: 13 June 2023
Posted: 14 June 2023

Abstract
The paper proves a theorem about the structure of the distribution function on which the extremum of a fractionally linear functional is attained in the presence of an uncountable number of linear constraints. The problem of finding an extremal distribution function arises when determining the optimal control strategy in the class of Markov homogeneous randomized control strategies. The structure of the extremal functions is described by a finite number of parameters, so the problem is greatly simplified: it reduces to the search for an extremum of a function of finitely many variables.
Keywords: 
Subject: Computer Science and Mathematics  -   Applied Mathematics

1. Introduction. Statement of the problem

Currently, there are many practical models of reliability, mass service, and safety that are adequately described by a semi-Markov controlled process with a finite set of states.
In these models, control consists in choosing the moments of intervention in the system operation (for example, preventive repair, replacement of failed or worn-out units, or the frequency of determining the current state of the system). If such models are studied over a class of randomized control strategies, we arrive at a problem of functional analysis: the search for an extremum of a functional characterizing the quality of control over the set of probability measures (distributions) defining the randomized strategy. The result stated in the paper allows one to simplify this problem (when the probability measures are defined on the line or the half-line) by reducing it to a problem of mathematical analysis (finding the extremum of a function over a set of parameters), which in principle makes it possible to use software tools and obtain numerical results. The problem consists in finding the extremum of the fractionally linear functional
$$I(G)=\frac{\int_{u\in U}A(u)\,G(du)}{\int_{u\in U}B(u)\,G(du)}\qquad(1)$$
over some admissible set of probability measures Ω (the set of Markov homogeneous randomized control strategies), together with the determination of the probability measure on which this extremum is attained [1].
For controllable models of mass service, reliability, and safety, when it comes to the optimal choice of the service duration or of the periodicity of restoration work, the control space coincides with the set of nonnegative numbers $U=R_+=[0,+\infty)$, and the probability measure is given by a distribution function $g=G(u)=P\{\zeta<u\}$, $G(0)=0$, where $\zeta$ is the decision to be made.
Let us introduce two distribution functions
$$g=G_1(u),\quad g=G_2(u),\qquad 0\le G_1(u)\le G_2(u)\le 1,\quad u\ge 0,\qquad G_1(0)=G_2(0)=0,$$
and define the set of admissible distributions
$$\Omega=\{G:\ G_1(u)\le G(u)\le G_2(u),\ u\ge 0\}.\qquad(2)$$
Now, in the accepted notation, we can formulate the main mathematical problem: determine the maximum of the fractionally linear functional (1) over the set of admissible distributions (2) and the strategy (distribution) $G^*$ on which this maximum is attained,
$$\max_{G\in\Omega}I(G)=I(G^*).$$
Note that the constraints (2) can be represented as inequality-type constraints on linear functionals:
$$G_1(u)\le\int_0^{+\infty}\varphi(x,u)\,dG(x)\le G_2(u),\qquad \varphi(x,u)=\begin{cases}1, & x<u,\\ 0, & x\ge u,\end{cases}\qquad u\in[0,+\infty).$$
Consequently, the formulated problem is a fractionally linear programming problem with an uncountable number of linear constraints.
If we use the lemma [2] on the coincidence of the sets of probability measures on which the extremum of a fractionally linear functional and the extremum of a specially chosen linear functional are reached, the problem can be simplified by reducing it to the linear case.
Here is the formulation of this lemma.
Lemma 1.
If there exists a maximum of a fractionally linear functional $I(G)$ on some set Ω,
$$\max_{G\in\Omega}I(G)=C,$$
then the sets coincide:
$$\{G:\ I(G)=C\}=\Bigl\{G:\ \int_{u\in U}C(u)\,G(du)=0\Bigr\},$$
where $U$ is the set of possible controls and the integrand function $C(u)$ is defined by the equality $C(u)=A(u)-C\,B(u)$; moreover, for the linear functional $\int_{u\in U}C(u)\,G(du)$ the following relation holds:
$$\max_{G\in\Omega}\int_{u\in U}C(u)\,G(du)=0.$$
Thus, if it is found that the extremum of the linear functional is attained on a probability measure of a certain structure, and all probability measures of this structure belong to the set Ω, then the extremum of the fractionally linear functional can be sought over this narrower set. Moreover, if each probability measure of the selected structure is uniquely defined by finitely many parameters (for example, measures defined on a discrete set), then the problem of functional analysis reduces to finding an extremum of a function in the parameter space.
Here it should be noted that the integrand function of the linear functional depends on the unknown parameter $C$, and this creates certain difficulties when studying the structure of extremal probability measures.
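On a finite support the lemma is easy to verify numerically. The following sketch (all data randomly generated and purely illustrative) maximizes $I$ over all probability vectors, which is achieved by placing the whole mass on the index with the best ratio $A_j/B_j$, and checks that the associated linear functional then has maximum zero:

```python
import numpy as np

# Illustrative finite-support check of Lemma 1 (all data hypothetical).
rng = np.random.default_rng(1)
A = rng.normal(size=8)                # values A(u_j) on a finite support
B = rng.uniform(1.0, 2.0, size=8)     # values B(u_j) > 0

# Over all probability vectors p, I(p) = sum(A p) / sum(B p) is maximized
# by putting all mass on the index with the best ratio:
C = float(np.max(A / B))

# Lemma 1: the linear functional with integrand C(u) = A(u) - C*B(u)
# then has maximum 0 over the same set of probability vectors:
print(abs(float(np.max(A - C * B))) < 1e-12)   # True
```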

2. Theorem on the structure of the extremal function

Let us formulate the requirements on the integrand function $C(u)$ and call these requirements conditions (*):
  • the function is defined on the half-line $[0,+\infty)$;
  • the function is piecewise continuous;
  • it has a finite number of discontinuity points;
  • it has only discontinuities of the first kind;
  • at each discontinuity point the function takes the larger of its one-sided limits;
  • the function has a finite number of local maxima.
Let us denote the values of the maxima by
$$\alpha_i,\quad i=1,2,\ldots,n,\qquad \alpha_1>\alpha_2>\ldots>\alpha_n.$$
Let us define a rule for ordering the maxima on any segment of the set $[0,+\infty)$. Each maximum is characterized by the pair $(\alpha,u)$: the value of the maximum and its argument.
Definition. When comparing two maxima $(\alpha,u_1)$ and $(\beta,u_2)$:
  • the maximum $(\alpha,u_1)$ is assigned the lower number under the condition $\alpha>\beta$;
  • the maximum $(\alpha,u_1)$ is assigned the lower number under the condition $\alpha=\beta$ and $u_2>u_1$.
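The rule above is an ordinary lexicographic sort: by value in decreasing order, with ties broken by the smaller argument. A small illustrative sketch:

```python
# Ordering of maxima (alpha, u): larger value first; for equal values the
# smaller argument receives the lower number. Data are illustrative.
maxima = [(2.0, 5.0), (3.0, 1.0), (2.0, 4.0)]
ordered = sorted(maxima, key=lambda m: (-m[0], m[1]))
print(ordered)   # [(3.0, 1.0), (2.0, 4.0), (2.0, 5.0)]
```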
Let us introduce the notation that we will use hereafter for any subset of the domain of the integrand function:
  • $\alpha_i$, $i=1,2,\ldots,n$, are the values of the maxima, and $n$ is the number of distinct values, $+\infty>\alpha_1>\alpha_2>\ldots>\alpha_n>-\infty$;
  • $u_{ij}$, $i=1,2,\ldots,n$, $j=1,2,\ldots,k_i$, are the arguments of the maxima, for which the conditions
$$C(u_{ij})=\alpha_i$$
are satisfied, the second index being determined by the equalities
$$u_{i1}=\min\{u:\ C(u)=\alpha_i\},\qquad u_{ik_i}=\max\{u:\ C(u)=\alpha_i\},\qquad(4)$$
while for the other arguments the order is determined by the inequalities $u_{ij}<u_{i,j+1}$, $j=1,2,\ldots,k_i-1$; that is, (4) sets the order of the maxima of a fixed level.
Note that this ordering of the maxima also involves the endpoints of the segment if maxima are attained at them.
Let us introduce the notation $u=G_i^{-1}(g)$, $i=1,2$, for the function inverse to $g=G_i(u)$. Given the properties of distribution functions (the presence of jumps), on the intervals of uncertainty we consider the inverse function constant, defining it so that it is nondecreasing. If the distribution function has intervals on which it does not increase, then the inverse function is multivalued; in this case we choose one of the possible values and consider the inverse function discontinuous.
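One common convention consistent with this description is the generalized inverse $G^{-1}(g)=\min\{u:\ G(u)\ge g\}$: it is constant across the jumps of $G$ and picks a single value on the flat pieces. A grid-based sketch (data illustrative):

```python
import numpy as np

def inverse_cdf(u, G, g):
    """Generalized inverse on a grid: the smallest u with G(u) >= g."""
    i = int(np.searchsorted(G, g, side="left"))
    return float(u[min(i, len(u) - 1)])

u = np.array([0.0, 1.0, 2.0, 3.0])
G = np.array([0.0, 0.2, 0.2, 1.0])    # jump at u=1, flat piece on [1, 2]
print(inverse_cdf(u, G, 0.1))         # 1.0: constant across the jump of G
print(inverse_cdf(u, G, 0.2))         # 1.0: one value chosen on the flat piece
```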
Let us introduce notations for some areas of the independent variable.
Denote
$$U(\alpha,\beta]=\{u:\ \alpha<C(u)\le\beta\},\qquad -\infty<\alpha\le\beta<+\infty,$$
$$U(\alpha)=U(-\infty,\alpha]=\{u:\ -\infty<C(u)\le\alpha\}.$$
The sets $U(\alpha_i,\alpha_{i-1}]$, $i=1,2,\ldots,n$, $\alpha_0=+\infty$, do not intersect, and for these sets the relations
$$U(\alpha_n)\subset\ldots\subset U(\alpha_2)\subset U(\alpha_1)=[0,+\infty),$$
$$U(\alpha_{i-1})\setminus U(\alpha_i)=U(\alpha_i,\alpha_{i-1}]$$
are valid.
Now let us formulate a basic theorem.
Theorem 1.
If there exists a maximum of the linear functional
$$L(G)=\int_0^{+\infty}C(u)\,dG(u)$$
over the set of distribution functions $\Omega=\{G:\ G_1(u)\le G(u)\le G_2(u),\ u\ge 0\}$, and the integrand function satisfies conditions (*), then among the distribution functions on which the maximum is attained there exists a distribution function of the following structure:
- either the function coincides with one of the boundaries;
- or the function passes from boundary to boundary;
- or it is a piecewise constant (step) function.
Proof of Theorem 1.
The proof of the theorem is carried out step by step: ordering of maxima, construction of reference functions, investigation of the properties of the reference functions and the proof of the theorem statement.
The ordering of the maxima is described above, so we will use the previously introduced notations.
Next, we describe the algorithm for constructing reference functions, determine their areas of values and definitions, and examine their structure and properties.
Note that the statement of the theorem describes distribution functions with structures of three kinds, which we will hereafter call reference functions.
For each isolated global maximum $(\alpha_1,u_{1j})$, $1\le j\le k_1$, of the integrand function we define a reference function on $u\in[0,+\infty)$:
$$G_{1j}^{*}(u)=\begin{cases}G_1(u_{1j}), & u\in[0,\,u_{1j}],\\ \min\{G_2(u_{1j}),\,G_1(u_{1,j+1})\}, & u\in(u_{1j},\,+\infty).\end{cases}$$
Then we define the area of influence of each maximum by the relation
$$U_{1,j}=\begin{cases}(u_{1,j-1},\,u_{1j}], & \text{if } G_2^{-1}(G_1(u_{1j}))\le u_{1,j-1},\\ (G_2^{-1}(G_1(u_{1j})),\,u_{1j}], & \text{if } G_2^{-1}(G_1(u_{1j}))>u_{1,j-1},\end{cases}\qquad u_{10}=0,$$
and on the set
$$U_1=\bigcup_{1\le j\le k_1}U_{1,j}\qquad(6)$$
we construct a nondecreasing, left-continuous step function
$$G^{*}(u)=\sum_{j=1}^{k_1}\bigl[G_{1j}^{*}(u)-G_{1j}^{*}(0)\bigr]\qquad(7)$$
satisfying the conditions
$$G_{1j}^{*}(u_{1j})=G_1(u_{1j}),\qquad \lim_{u\to u_{1j}+0}G_{1j}^{*}(u)=\min\{G_2(u_{1j}),\,G_1(u_{1,j+1})\},\qquad j=1,2,\ldots,k_1.$$
It is easy to see that function (7) coincides with the reference functions in the area (6).
Recall that if the global maximum $\alpha_1$ of the integrand function $C(u)$ is attained on some segment $[u_1,u_2]$, then the maxima at its boundary points take part in the ordering of the maxima. If for these maxima the inequality
$$u_1<G_2^{-1}(G_1(u_2))$$
is satisfied, then we add to the set $U_1$ the segment $[u_1,\,G_2^{-1}(G_1(u_2))]$ and define the reference function on this set as a step function having a finite number of jumps and belonging to the set of admissible functions.
Let us prove one important inequality that holds for the reference function. To this end, we introduce the function
$$\tilde C(u)=\begin{cases}\alpha_1, & u\in U_1,\\ C(u), & u\in U(\alpha_1)\setminus U_1.\end{cases}$$
For any admissible distribution $G(u)\in\Omega$, by virtue of the properties of the reference function $G^{*}(u)$ and of the majorant function $\tilde C(u)$, the following inequalities are true:
$$\int_{U_1}C(u)\,dG(u)\le\int_{U_1}\tilde C(u)\,dG(u)\le\int_{U_1}\tilde C(u)\,dG^{*}(u)=\int_{U_1}C(u)\,dG^{*}(u).\qquad(11)$$
In addition to this property of the reference functions, let us note another useful inequality relating the reference function to an arbitrary function $G\in\Omega$. Denote $\gamma_{1j}=G_{1j}^{*}(0)$, $\gamma_{2j}=G_{1j}^{*}(+\infty)$ and define the function
$$G_{1j}(u)=\begin{cases}\gamma_{1j}, & u\in[0,\,G^{-1}(\gamma_{1j})],\\ G(u), & u\in(G^{-1}(\gamma_{1j}),\,G^{-1}(\gamma_{2j})],\\ \gamma_{2j}, & u\in(G^{-1}(\gamma_{2j}),\,+\infty).\end{cases}$$
From the definition of the reference functions and of the functions $G_{1j}(u)$ it follows that
$$L(G_{1j}^{*})\ge L(G_{1j}).$$
If $U_1=U(\alpha_1)$, the theorem is proved, since
$$L(G)=\sum_{j=1}^{k_1}L(G_{1j})\le\sum_{j=1}^{k_1}L(G_{1j}^{*})=L(G_1^{*}),$$
and among the optimal distributions there is a step function
$$G_1^{*}(u)=\sum_{j=1}^{k_1}\bigl[G_{1j}^{*}(u)-G_{1j}^{*}(0)\bigr]$$
with jumps at the points of the maxima.
If $U_1\ne U(\alpha_1)$, consider the set $U(\alpha_1)\setminus U_1$ and pose the problem of determining the reference functions on this area. Note that this area is a finite union of intervals, in each of which the integrand function equals $\alpha_1$ at at least one boundary point. Let us describe the process of determining the reference function for one of them, denoting this interval by $(u_1,u_2)$. Two variants are possible:
  • there are no maxima within the interval;
  • within the interval there are maxima with a global maximum of level $s$, $2\le s\le n$; i.e., there are interior points $u_{sj}$, $1\le l_1\le j\le l_2\le k_s$, for which the equality $C(u_{sj})=\alpha_s$ is true.
In the first case we define the reference function on the area $[u_1,u_2]$ by the equality
$$G_{10}^{*}(u)=\begin{cases}G_2(u_1), & u\in[0,\,u_1],\\ G_2(u), & u\in(u_1,\,G_2^{-1}(\gamma^{*})],\\ \gamma^{*}, & u\in(G_2^{-1}(\gamma^{*}),\,G_1^{-1}(\gamma^{*})],\\ G_1(u), & u\in(G_1^{-1}(\gamma^{*}),\,u_2],\\ G_1(u_2), & u\in(u_2,\,+\infty),\end{cases}\qquad(12)$$
where $\gamma^{*}\in[G_2(u_1),\,G_1(u_2)]$ and
$$\max_{\gamma\in[G_2(u_1),\,G_1(u_2)]}\Bigl\{\int_{(u_1,\,G_2^{-1}(\gamma)]}C(u)\,dG_2(u)+\int_{(G_1^{-1}(\gamma),\,u_2]}C(u)\,dG_1(u)\Bigr\}=$$
$$=\int_{(u_1,\,G_2^{-1}(\gamma^{*})]}C(u)\,dG_2(u)+\int_{(G_1^{-1}(\gamma^{*}),\,u_2]}C(u)\,dG_1(u)=\int_{(u_1,\,u_2]}C(u)\,dG_{10}^{*}(u).$$
Since there are no maxima inside the area $(u_1,u_2)$, there exists a point $u_0\in[u_1,u_2]$ for which $G_1(u_0)\le G_2(u_0)$, the integrand function does not increase on $(u_1,u_0)$, and it does not decrease on $(u_0,u_2)$. Let us introduce two functions, where $\gamma_1=G_2(u_1)$ and $\gamma_2=G_1(u_2)$:
$$G_{10}(u)=\begin{cases}\gamma_1, & u\in[0,\,G^{-1}(\gamma_1)],\\ G(u), & u\in(G^{-1}(\gamma_1),\,G^{-1}(\gamma_2)],\\ \gamma_2, & u\in(G^{-1}(\gamma_2),\,+\infty),\end{cases}$$
$$\Psi_{10}^{*}(u)=\begin{cases}\gamma_1, & u\in[0,\,u_1],\\ G_2(u), & u\in(u_1,\,G_2^{-1}(G(u_0))],\\ G(u_0), & u\in(G_2^{-1}(G(u_0)),\,G_1^{-1}(G(u_0))],\\ G_1(u), & u\in(G_1^{-1}(G(u_0)),\,u_2],\\ \gamma_2, & u\in(u_2,\,+\infty).\end{cases}$$
From the definition of these functions, for $G\in\Omega$ the inequalities
$$\Psi_{10}^{*}(u)\ge G_{10}(u),\ u\in[0,\,u_0],\qquad \Psi_{10}^{*}(u)\le G_{10}(u),\ u\in(u_0,\,+\infty)$$
follow. If we take into account the properties of the integrand function, it is easy to obtain an estimate by integrating by parts:
$$\int_0^{+\infty}C(u)\,d\bigl[\Psi_{10}^{*}(u)-G_{10}(u)\bigr]\ge 0.\qquad(14)$$
From this estimate and the choice of the parameter $\gamma^{*}$ it follows that
$$\int_{G^{-1}(\gamma_1)}^{G^{-1}(\gamma_2)}C(u)\,dG(u)=\int_0^{+\infty}C(u)\,dG_{10}(u)\le\int_0^{+\infty}C(u)\,d\Psi_{10}^{*}(u)\le\int_0^{+\infty}C(u)\,dG_{10}^{*}(u).$$
This completes the construction of the reference functions for the interval in question.
Next, let us consider the case where there are maxima within the interval, with a global maximum of level $s$, $2\le s\le n$; i.e., there are interior points $u_{sj}$, $1\le l_1\le j\le l_2\le k_s$, for which the equality $C(u_{sj})=\alpha_s$ is true. Formally, the maxima should be reordered for the new area; we keep the previous notation in order to avoid unnecessary cumbersomeness.
Let us single out the areas $(u_1,\,u_{s1})$ and $(u_{sk_s},\,u_2)$. In the first area the integrand function does not increase and satisfies the inequalities
$$\alpha_1\ge C(u)\ge\alpha_s,\qquad(15)$$
and in the second area the integrand function does not decrease and satisfies the same inequalities (15).
Note that if one of the areas does not satisfy these inequalities, it takes no further part in the consideration.
If $\gamma_1=G_2(u_{s1})\ge G_1(u_{sk_s})=\gamma_2$, then we define the reference function on $[\gamma_2,\gamma_1]$ by the equality (12), where the parameter $\gamma^{*}$ is defined by the relation
$$\max_{\gamma\in[\gamma_2,\,\gamma_1]}\Bigl\{\int_{(u_1,\,G_2^{-1}(\gamma)]}C(u)\,dG_2(u)+\int_{(G_1^{-1}(\gamma),\,u_2]}C(u)\,dG_1(u)\Bigr\}=$$
$$=\int_{(u_1,\,G_2^{-1}(\gamma^{*})]}C(u)\,dG_2(u)+\int_{(G_1^{-1}(\gamma^{*}),\,u_2]}C(u)\,dG_1(u)=\int_{(u_1,\,u_2]}C(u)\,dG_{10}^{*}(u).$$
Next, we prove the inequality
$$\int_{G^{-1}(\gamma_1)}^{G^{-1}(\gamma_2)}C(u)\,dG(u)\le\int_0^{+\infty}C(u)\,dG_{10}^{*}(u)\qquad(16)$$
for any admissible distribution. The proof of this inequality reduces to the introduction of the majorant function
$$\tilde C(u)=\begin{cases}\alpha_s, & u\in[u_{s1},\,u_{sk_s}],\\ C(u), & u\notin[u_{s1},\,u_{sk_s}],\end{cases}$$
and to the definition of the functions $G_{10}(u)$, $\Psi_{10}^{*}(u)$ as before, except that the parameter $u_0$ may now be any point of the area $[u_{s1},\,u_{sk_s}]$.
Finally, if $\gamma_1=G_2(u_{s1})<G_1(u_{sk_s})=\gamma_2$, then, setting $\bar\gamma_1=G_2(u_1)$ and $\bar\gamma_2=G_1(u_2)$, we define two reference functions
$$G_{10}^{1*}(u)=\begin{cases}\bar\gamma_1, & u\in[0,\,u_1],\\ G_2(u), & u\in(u_1,\,u_{s1}],\\ \gamma_1, & u\in(u_{s1},\,+\infty),\end{cases}\qquad
G_{10}^{2*}(u)=\begin{cases}\gamma_2, & u\in[0,\,u_{sk_s}],\\ G_1(u), & u\in(u_{sk_s},\,u_2],\\ \bar\gamma_2, & u\in(u_2,\,+\infty),\end{cases}$$
and two functions associated with an admissible distribution function $G\in\Omega$:
$$G_{10}^{1}(u)=\begin{cases}\bar\gamma_1, & u\in[0,\,G^{-1}(\bar\gamma_1)],\\ G(u), & u\in(G^{-1}(\bar\gamma_1),\,G^{-1}(\gamma_1)],\\ \gamma_1, & u\in(G^{-1}(\gamma_1),\,+\infty),\end{cases}\qquad
G_{10}^{2}(u)=\begin{cases}\gamma_2, & u\in[0,\,G^{-1}(\gamma_2)],\\ G(u), & u\in(G^{-1}(\gamma_2),\,G^{-1}(\bar\gamma_2)],\\ \bar\gamma_2, & u\in(G^{-1}(\bar\gamma_2),\,+\infty).\end{cases}$$
Then two inequalities are easily proved by integration by parts:
$$\int_{G^{-1}(\bar\gamma_1)}^{G^{-1}(\gamma_1)}C(u)\,dG(u)=\int_0^{+\infty}C(u)\,dG_{10}^{1}(u)\le\int_0^{+\infty}C(u)\,dG_{10}^{1*}(u),\qquad(19)$$
$$\int_{G^{-1}(\gamma_2)}^{G^{-1}(\bar\gamma_2)}C(u)\,dG(u)=\int_0^{+\infty}C(u)\,dG_{10}^{2}(u)\le\int_0^{+\infty}C(u)\,dG_{10}^{2*}(u).\qquad(20)$$
It is easy to see that the reference functions are not yet defined on the area $u\in(u_{s1},\,u_{sk_s})$; for it, the procedure above is repeated, only for a new, narrower area and a smaller maximum level.
The procedure for constructing the reference functions will end due to the finiteness of the number of maxima of the integrand function.
Thus, a finite set of reference functions is constructed whose ranges of values do not overlap and whose union of ranges coincides with $[0,1]$. Denote by $M$ the set of reference functions and define the function
$$G^{0}(u)=\sum_{G^{*}\in M}\bigl[G^{*}(u)-G^{*}(0)\bigr].$$
Since inequalities (11), (14), (16), (19), (20) for the reference functions are satisfied for any function $G\in\Omega$, we have the inequality
$$L(G)\le L(G^{0}),$$
which proves the statement of the theorem. □

3. Discussion

Let us formulate useful corollaries of the proved theorem.
Taking into account the statement of the above lemma, we give formulations for the fractionally linear functional (1), assuming the existence of a maximum of this functional, $\max_{G\in\Omega}I(G)=I(G^*)=C$, and the fulfillment of the conditions of the theorem on the integrand function.
Corollary 1 [4,5]. If $\Omega=\{G:\ G(u_i)=\pi_i,\ 0\le i\le n\}$, $0=u_0<u_1<\ldots<u_n<u_{n+1}=+\infty$, $0=\pi_0\le\pi_1\le\ldots\le\pi_n\le\pi_{n+1}=1$, then
$$\max_{G\in\Omega}I(G)=\max_{x_k\in[u_k,\,u_{k+1}]}\frac{\sum_{k=0}^{n}A(x_k)\,(\pi_{k+1}-\pi_k)}{\sum_{k=0}^{n}B(x_k)\,(\pi_{k+1}-\pi_k)}.$$
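Numerically, this corollary turns the search for a measure into a finite-dimensional maximization over the atoms $x_k$. A brute-force sketch on coarse grids (the functions $A$, $B$, the nodes $u_k$, and the levels $\pi_k$ are all illustrative, and the unbounded last interval is truncated for the computation):

```python
import numpy as np
from itertools import product

def A(x):
    return np.exp(-x) * (2.0 + x)     # illustrative integrand, A > 0

def B(x):
    return 1.0 + 0.5 * x              # illustrative integrand, B > 0

u = [0.0, 1.0, 2.5, 6.0]              # u_0 < u_1 < u_2 < u_3 (finite cut)
pi = [0.0, 0.3, 0.8, 1.0]             # prescribed levels pi_k
dpi = np.diff(pi)                     # atom masses pi_{k+1} - pi_k

grids = [np.linspace(u[k], u[k + 1], 21) for k in range(3)]
best = max(sum(A(x) * w for x, w in zip(xs, dpi)) /
           sum(B(x) * w for x, w in zip(xs, dpi))
           for xs in product(*grids))
print(best > 0.0)                     # True: A and B are positive here
```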
Corollary 2 [6]. If the function $C(u)=A(u)-C\,B(u)$ has one maximum on $[0,+\infty)$, then
$$\max_{G\in\Omega}I(G)=\max_{x\ge 0}\frac{\int_0^{x}A(u)\,dG_1(u)+A(x)\bigl[G_2(x)-G_1(x)\bigr]+\int_{x}^{+\infty}A(u)\,dG_2(u)}{\int_0^{x}B(u)\,dG_1(u)+B(x)\bigl[G_2(x)-G_1(x)\bigr]+\int_{x}^{+\infty}B(u)\,dG_2(u)}.$$
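This one-parameter reduction is straightforward to evaluate numerically: for each switch point one computes the ratio for the distribution that follows $G_1$, jumps to $G_2$, and then follows $G_2$. A sketch with illustrative boundaries and integrands (none of these choices come from the paper):

```python
import numpy as np

u = np.linspace(0.0, 30.0, 3001)
G1 = 1.0 - np.exp(-0.5 * u)           # lower boundary CDF (illustrative)
G2 = 1.0 - np.exp(-2.0 * u)           # upper boundary CDF, G1 <= G2
A = 2.0 - 0.1 * u                     # illustrative integrands; B > 0
B = 1.0 + 0.05 * u

def stieltjes(f, G):
    """Trapezoidal approximation of the Stieltjes integral int f dG."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(G)))

def I_switch(i):
    """I(G) for the CDF following G1 up to u[i], jumping to G2(u[i]), then G2."""
    jump = G2[i] - G1[i]
    num = stieltjes(A[:i + 1], G1[:i + 1]) + A[i] * jump + stieltjes(A[i:], G2[i:])
    den = stieltjes(B[:i + 1], G1[:i + 1]) + B[i] * jump + stieltjes(B[i:], G2[i:])
    return num / den

vals = [I_switch(i) for i in range(0, len(u), 30)]
best = max(vals)                      # approximate maximum over switch points
print(best >= vals[0] and best >= vals[-1])   # True
```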
Corollary 3. If the function $C(u)=A(u)-C\,B(u)$ has no maxima in $(0,+\infty)$, then
$$\max_{G\in\Omega}I(G)=\max_{\gamma\in[0,1]}\frac{\int_{(0,\,G_2^{-1}(\gamma)]}A(u)\,dG_2(u)+\int_{(G_1^{-1}(\gamma),\,+\infty)}A(u)\,dG_1(u)}{\int_{(0,\,G_2^{-1}(\gamma)]}B(u)\,dG_2(u)+\int_{(G_1^{-1}(\gamma),\,+\infty)}B(u)\,dG_1(u)}.$$
The following example of a controlled mass service model in the "classical" statement, when the constraints are formulated as standard inequalities on probability measures, $\Omega=\{G:\ 0\le G(u)\le 1\}$, is well known [3], so we omit the detailed derivation of the basic relations.
Example 1. A mass service system.
We will study a mass service system whose input is a Poisson flow of demands with parameter $\lambda$. The number of service channels is one, and the number of waiting places is $K$, $1\le K<\infty$; in Kendall's notation, this is the system $M|G|1|K$.
In contrast to the classical formulation, we let the distribution of the service time depend on the number of demands in the system at the moments when service of the current demand ends and at the moments when a demand arrives at the free system. If at such a moment the number of demands in the system equals $i$, then we choose the service-time distribution of the next demand equal to $G(i,t)=P\{\xi_i<t\}$, where $\xi_i$ is the random service time. Assigning a random service duration means introducing randomization into the decision making process: at the (Markov) moment when a decision has to be made and state $i$ is observed, a realization $\tau$ of the random variable $\xi_i$ distributed according to the law $G(i,t)=P\{\xi_i<t\}$ is drawn ($\xi_i=\tau$), and the service duration is set equal to the value $\tau$.
Let's introduce the cost characteristics, which determine the functional that characterizes the quality of functioning and control.
Let:
$c_0$ - fee (income) per serviced demand;
$c_1$ - payment for one hour of channel operation;
$c_2$ - payment for one hour of free-channel idle time;
$c_3$ - payment for the loss of one demand;
$c_4$ - payment for one hour spent by one demand in the queue.
In [3] an expression is given for the fractionally linear functional (1), the specific mathematical expectation of accumulated income, for an arbitrary number of waiting places $K$, $1\le K<\infty$. We consider the solution of the above optimization problem for $K=1,2$ under the constraints
$$0\le G_1(i,t)\le G(i,t)\le G_2(i,t)\le 1,$$
where $G_1(i,t)$ and $G_2(i,t)$, $G_1(i,t)\le G_2(i,t)$, are given distribution functions.
With one waiting place ($K=1$), at the Markov moments (moments when a service ends and moments when a demand arrives at the free system) there is either zero or one demand in the system.
Hence, the semi-Markov process has two states, but decisions are made only in one of them, when there is one demand in the system, and the functional (1) depends on a single distribution function $G(1,t)=G(t)$.
In [3] the dependence of this functional on the initial characteristics is given:
$$I(G)=\frac{\int_0^{+\infty}\bigl[\lambda(c_0+c_1t)+c_2e^{-\lambda t}+(\lambda t+e^{-\lambda t}-1)(c_3\lambda+c_4)\bigr]\,dG(t)}{\int_0^{+\infty}(\lambda t+e^{-\lambda t})\,dG(t)}.\qquad(25)$$
In what follows we use the notation
$$A(t)=\lambda(c_0+c_1t)+c_2e^{-\lambda t}+(\lambda t+e^{-\lambda t}-1)(c_3\lambda+c_4),\qquad B(t)=\lambda t+e^{-\lambda t}.$$
Next, let us pose the optimization problem: let two distribution functions
$$G_1(1,t),\quad G_2(1,t),\qquad G_1(1,t)\le G_2(1,t),\qquad G_i(1,0)=0,\ i=1,2,$$
be given; we need to find the maximum of the functional (25) over the set of distributions
$$\Omega=\{G:\ 0\le G_1(1,t)\le G(t)\le G_2(1,t)\le 1\}.$$
By the theorem and the lemma proved above, it is necessary to investigate the function
$$C(t)=A(t)-C\,B(t)=\lambda(c_0+c_1t)+c_2e^{-\lambda t}+(\lambda t+e^{-\lambda t}-1)(c_3\lambda+c_4)-C(\lambda t+e^{-\lambda t}),\qquad(27)$$
where $\max_{G\in\Omega}I(G)=C$.
Let us investigate the function $C(t)$ and prove that it satisfies the conditions of the above theorem and has one maximum.
We assume that the inequality $C>0$ is true under optimal demand service, since the initial data should be such that, under optimal control, the operation of the mass service system produces a positive effect.
It is important to pay attention to the signs of the initial constants: the coefficients $c_i$, $i=1,2,3,4$, are negative, since they are losses from the functioning of the system, while the parameter $c_0$ is greater than zero, because it is an income.
The function under study (27) and all its derivatives are continuous.
The elementary relations below,
$$C(0)=\lambda c_0+c_2-C,\qquad C(t)\to-\infty,\ t\to+\infty,$$
$$\frac{dC(t)}{dt}=\lambda\bigl\{c_1+(c_3\lambda+c_4-C)-(c_3\lambda+c_4-C+c_2)e^{-\lambda t}\bigr\},\qquad \frac{dC(0)}{dt}=\lambda(c_1-c_2),$$
$$\frac{d^{2}C(t)}{dt^{2}}=(c_2+c_3\lambda+c_4-C)\,\lambda^{2}e^{-\lambda t}\le 0,$$
prove that this function has a maximum at some point $0\le t_0<+\infty$: it increases on $[0,t_0)$ and decreases on $(t_0,+\infty)$. Thus, it is proved that the function $C(t)$ has a maximum at some finite point. Therefore the maximum of the functional (25) is attained on the function
$$G^{*}(t)=\begin{cases}G_1(1,t), & 0\le t\le\tau_0,\\ G_2(1,t), & \tau_0<t<+\infty,\end{cases}$$
where the parameter $\tau_0$ is defined as the maximum point in
$$\max_{G_1(x)\le G(x)\le G_2(x)}I(G)=\max_{0\le\tau<+\infty}\frac{\int_0^{\tau}A(t)\,dG_1(1,t)+A(\tau)\bigl[G_2(1,\tau)-G_1(1,\tau)\bigr]+\int_{\tau}^{+\infty}A(t)\,dG_2(1,t)}{\int_0^{\tau}B(t)\,dG_1(1,t)+B(\tau)\bigl[G_2(1,\tau)-G_1(1,\tau)\bigr]+\int_{\tau}^{+\infty}B(t)\,dG_2(1,t)}=$$
$$=\frac{\int_0^{\tau_0}A(t)\,dG_1(1,t)+A(\tau_0)\bigl[G_2(1,\tau_0)-G_1(1,\tau_0)\bigr]+\int_{\tau_0}^{+\infty}A(t)\,dG_2(1,t)}{\int_0^{\tau_0}B(t)\,dG_1(1,t)+B(\tau_0)\bigl[G_2(1,\tau_0)-G_1(1,\tau_0)\bigr]+\int_{\tau_0}^{+\infty}B(t)\,dG_2(1,t)},$$
and the functions $A(t)$ and $B(t)$ are defined by the equalities
$$A(t)=\lambda(c_0+c_1t)+c_2e^{-\lambda t}+(\lambda t+e^{-\lambda t}-1)(c_3\lambda+c_4),\qquad B(t)=\lambda t+e^{-\lambda t}.$$
Let us note the obvious conclusion: for $c_2\ge c_1$ we have $G^{*}(t)=G_2(1,t)$.
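The sign analysis of $C(t)$ is easy to verify numerically. In the sketch below all parameter values ($\lambda$, the costs $c_0,\ldots,c_4$, and the trial value $C$) are purely illustrative, chosen with $c_1>c_2$ so that an interior maximum appears:

```python
import numpy as np

# Illustrative parameters: lam > 0, income c0 > 0, losses c1..c4 < 0,
# trial value C > 0 of the functional; c1 > c2 gives dC(0)/dt > 0.
lam, c0, c1, c2, c3, c4, C = 1.0, 5.0, -0.5, -1.0, -2.0, -0.3, 1.0

t = np.linspace(0.0, 20.0, 20001)
Ct = (lam * (c0 + c1 * t) + c2 * np.exp(-lam * t)
      + (lam * t + np.exp(-lam * t) - 1.0) * (c3 * lam + c4)
      - C * (lam * t + np.exp(-lam * t)))

print(bool(np.all(np.diff(Ct, 2) <= 1e-12)))   # concave on the grid: True
i0 = int(np.argmax(Ct))
print(t[i0] > 0.0 and Ct[i0] > Ct[-1])         # interior finite maximum: True
```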
Example 2. Reliability model.
Let a system be given in which the time of failure-free operation $\xi$ is distributed according to the law $F(x)=P\{\xi<x\}$, $\bar F(x)=P\{\xi\ge x\}$. Suppose that a failure occurring during the functioning of the system is detected (manifests itself) instantly and independently.
At the initial moment $t_0=0$ the system starts operating, and a scheduled preventive update (preventive maintenance) of the system is appointed after a time $\eta\ge 0$ distributed according to the law $G(x)=P\{\eta<x\}$, $G(0)=0$. Appointing the scheduled preventive update of the system at a random time means introducing randomization into the decision making process: at the moment when the decision is to be made, a realization $\tau$ of the random variable $\eta$ ($\eta=\tau$) distributed according to the law $G(x)$ is drawn, and the scheduled preventive update of the system is performed after the time $\tau$.
If the system has not failed by the appointed time $\eta$ (the event $\{\eta<\xi\}$ has occurred), then at the time $\eta$ a scheduled preventive update of the system begins, which by assumption completely renews the system. Let us denote the duration of this scheduled preventive (prophylactic) update by $\gamma_1$, with distribution function $F_1(x)=P\{\gamma_1<x\}$, $\bar F_1(x)=P\{\gamma_1\ge x\}$.
If the system has failed by the appointed time $\eta$, an unscheduled emergency update of the system begins at the moment of the failure $\xi$. We denote the duration of this recovery operation by $\gamma_2$ and its distribution law by $F_2(x)=P\{\gamma_2<x\}$, $\bar F_2(x)=P\{\gamma_2\ge x\}$. After the renewal work, when the system is assumed to be completely renewed, the next preventive update is scheduled anew, and the entire maintenance process repeats.
Let us introduce a random process $\xi(t)$ characterizing the state of the system at an arbitrary moment of time $t$ by putting:
$\xi(t)=e_0$ if at time $t$ the system is working properly;
$\xi(t)=e_1$ if at time $t$ the system undergoes a scheduled preventive update;
$\xi(t)=e_2$ if at time $t$ an unscheduled emergency update of the system is performed.
The steady-state availability factor $K_{г}$ is defined as the probability of finding the system in an operable state at an infinitely distant moment of time. In [7] this characteristic, in the notation adopted above, is defined by the equality
$$K_{г}(G)=\frac{\int_0^{+\infty}\Bigl(\int_0^{u}\bar F(y)\,dy\Bigr)dG(u)}{\int_0^{+\infty}\Bigl(\int_0^{u}\bar F(y)\,dy+M\gamma_1\bar F(u)+M\gamma_2F(u)\Bigr)dG(u)},$$
where $M\gamma_1$ and $M\gamma_2$ denote the mean durations of the scheduled and the emergency update, respectively.
The mathematical problem is to determine the distribution $G^{*}\in\Omega$ for which
$$\max_{G\in\Omega}K_{г}(G)=K_{г}(G^{*})=C.$$
Let us investigate the function
$$C(u)=(1-C)\int_0^{u}\bar F(y)\,dy-C\bigl[M\gamma_1\bar F(u)+M\gamma_2F(u)\bigr]$$
and obtain for its derivative the equality
$$\frac{dC(u)}{du}=(1-C)\bar F(u)-C(M\gamma_2-M\gamma_1)f(u)=\bar F(u)\bigl[(1-C)-C(M\gamma_2-M\gamma_1)\lambda(u)\bigr],$$
where $\lambda(u)=f(u)/\bar F(u)$, $f(u)=dF(u)/du$. Under the natural conditions $0\le C\le 1$, $M\gamma_2\ge M\gamma_1$, and the assumption that the system is aging, that is, that the function $\lambda(u)$ is increasing, we obtain that the derivative changes sign from plus to minus no more than once.
Then
$$\max_{G\in\Omega}K_{г}(G)=\max_{0\le\tau<+\infty}\frac{\int_0^{\tau}\Bigl(\int_0^{u}\bar F(y)\,dy\Bigr)dG_1(u)+\Bigl(\int_0^{\tau}\bar F(y)\,dy\Bigr)\bigl[G_2(\tau)-G_1(\tau)\bigr]+\int_{\tau}^{+\infty}\Bigl(\int_0^{u}\bar F(y)\,dy\Bigr)dG_2(u)}{\int_0^{\tau}B(u)\,dG_1(u)+B(\tau)\bigl[G_2(\tau)-G_1(\tau)\bigr]+\int_{\tau}^{+\infty}B(u)\,dG_2(u)}=$$
$$=\frac{\int_0^{\tau_0}\Bigl(\int_0^{u}\bar F(y)\,dy\Bigr)dG_1(u)+\Bigl(\int_0^{\tau_0}\bar F(y)\,dy\Bigr)\bigl[G_2(\tau_0)-G_1(\tau_0)\bigr]+\int_{\tau_0}^{+\infty}\Bigl(\int_0^{u}\bar F(y)\,dy\Bigr)dG_2(u)}{\int_0^{\tau_0}B(u)\,dG_1(u)+B(\tau_0)\bigl[G_2(\tau_0)-G_1(\tau_0)\bigr]+\int_{\tau_0}^{+\infty}B(u)\,dG_2(u)},$$
where $B(u)=\int_0^{u}\bar F(y)\,dy+M\gamma_1\bar F(u)+M\gamma_2F(u)$.
Thus, the distribution
$$G^{*}(u)=\begin{cases}G_1(u), & 0\le u\le\tau_0,\\ G_2(u), & \tau_0<u<+\infty,\end{cases}$$
on which the maximum of the investigated functional is attained, is determined.
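For a quick numerical illustration, the availability can be evaluated on the degenerate strategies (a step distribution $G$ concentrated at a fixed preventive-maintenance age $\tau$) with an aging Weibull lifetime; the shape parameter and the mean update durations below are purely illustrative:

```python
import numpy as np

k, Mg1, Mg2 = 2.0, 0.1, 1.0            # Weibull shape k > 1 (aging); Mg1 < Mg2

y = np.linspace(0.0, 5.0, 50001)
Fbar = np.exp(-y ** k)                 # survival function of a Weibull life
# cumulative trapezoidal integral of Fbar: cum[i] ~ int_0^{y[i]} Fbar(s) ds
cum = np.concatenate(([0.0],
      np.cumsum(0.5 * (Fbar[1:] + Fbar[:-1]) * np.diff(y))))

def K(i):
    """Availability when the preventive update is made at the fixed age y[i]."""
    F = 1.0 - Fbar[i]
    return cum[i] / (cum[i] + Mg1 * Fbar[i] + Mg2 * F)

taus = y[1::50]
vals = np.array([K(i) for i in range(1, len(y), 50)])
i0 = int(vals.argmax())
print(vals[i0] > max(vals[0], vals[-1]))   # interior optimal age exists: True
```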

Author Contributions

Conceptualization, V.K. and A.B.; methodology, V.K.; software, O.Z.; validation, V.K., A.B. and O.Z.; formal analysis, A.B.; investigation, O.Z.; data curation, O.Z.; writing-original draft preparation, V.K.; writing-review and editing, A.B.; visualization, O.Z.; supervision, O.Z.; project administration, V.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kashtanov, V. A. The Structure of the Functional of Accumulation Defined on a Trajectory of a Semi-Markov Process with a Finite Set of States. Theory of Probability & Its Applications 2016, 60(2), 281-294. [CrossRef]
  2. Kashtanov, V. A. Discrete Distributions in Control Problems. In: Probabilistic Methods in Discrete Mathematics: Proceedings of the Fourth International Petrozavodsk Conference, Petrozavodsk, Russia, June 3-7, 1996; De Gruyter: Berlin, Boston, 1997; pp. 267-274. [CrossRef]
  3. Kashtanov, V. A.; Zaitseva, O. B. Research of Operations (Linear Programming and Stochastic Models). Textbook; INFRA-M: Moscow, 2016; 256 p.
  4. Barzilovich, E. Yu.; Kashtanov, V. A.; Kovalenko, I. N. On minimax criteria in reliability problems. Proceedings of the Academy of Sciences of the USSR, Ser. Technical Cybernetics 1971, No. 3, 87-98.
  5. Barzilovich, E. Yu.; Kashtanov, V. A. Organization of Service with Limited Information about System Reliability. Sovetskoe Radio: Moscow, 1975; 135 p.
  6. Kashtanov, V. A.; Zaitseva, O. B.; Efremov, A. A. Controlled Semi-Markov Processes with Constraints on Control Strategies and Construction of Optimal Strategies in Reliability and Safety Models. Math Notes 2021, 109, 585-592. [CrossRef]
  7. Kashtanov, V. A.; Medvedev, A. I. Reliability Theory of Complex Systems. Fizmatlit: Moscow, 2010; 608 p.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.