Preprint
Article

Optimal Control of Stochastic Dynamic Systems with Semi-Markov Parameters

Submitted: 18 February 2025

Posted: 19 February 2025


A peer-reviewed version of this preprint also exists.

Abstract

The paper generalizes so-called Markov switching models to generalized semi-Markov switching models, in which the main model is described by an Ito stochastic differential equation. The synthesis problem of optimal control for a stochastic dynamic system with semi-Markov parameters is solved. To determine the functions that define the Bellman functional and the optimal control, a system of ordinary differential equations is investigated. The case of linear equations, for which the optimal control is obtained in closed form, is considered in more detail, together with a corresponding model example.


MSC:  60J25; 93E03; 93E20

1. Introduction

Optimal control synthesis plays a critical role in managing the dynamics of controlled objects or processes. This is particularly relevant when the dynamics are described by Ito stochastic differential equations (SDEs). The choice of Ito SDEs is motivated by their prevalence as applied models and their widespread use in biology, chemistry, telecommunications, etc. The main type of random perturbation in stochastic differential equations is the symmetric Wiener process, whose trajectories have the reflection property, which characterizes the symmetry of this process very well. The symmetry of the solution of a stochastic differential equation is also well illustrated by the example in this work, where the solution trajectories are symmetric relative to the so-called averaged trajectory, and the optimal control does not violate this property. The paper [1] considers the optimal control of linear SDEs of general type perturbed by a random process with independent increments and a quadratic quality functional. It is shown that under certain conditions the optimal control is linear and can be determined by solving an additional vector quadratic problem. The optimal solution of the deterministic problem with feedback is obtained. In paper [2], the theory of optimal control is developed for stochastic systems whose performance is measured by the exponential of an integral form. In paper [3], the problem of the existence of optimal control for stochastic systems with a nonlinear quality functional is solved. The main model in [4] is the linear autonomous stochastic difference equation
$$g(x, u) = A_0 x + B_0 u + w_k (A_1 x + B_1 u), \tag{1}$$
where the $w_k$ are i.i.d. with $\mathbb{E} w_k = 0$, $\operatorname{Var}(w_k) = 1$. The results of that paper concern the quadratic quality functional and necessary and sufficient conditions for the stability of systems of the form (1), with the stability conditions formulated in terms of the properties of a Riccati-type jump operator. Note that the linear case (1) is most often considered as a first approximation of the dynamics of a real phenomenon, since the optimal control in this case can be found in closed form and approximations of the optimal control can be compared with the exact values.
The article [5] focuses on SDEs with external switching; it makes a number of important remarks on the calculation of the (infinitesimal) generating operator of a random process given by an SDE with Markov switching.
The article [6] is devoted to SDEs with external switching and Poisson perturbations. Using the Bellman equation, sufficient conditions for the existence of optimal control for a general quality functional are found, and a closed form of the optimal control is obtained for the linear case with a quadratic quality functional.
It should be noted that the transition from difference equations to differential equations gives rise to a number of complications in the synthesis of optimal control, since it requires solving more complex dynamical systems, such as the analog of the Lyapunov equation for nonautonomous systems. On the other hand, the presence of random variables of different structures leads to the use of the infinitesimal operator [7,8], whose calculation depends on the nature of the random variables involved.
In this paper, we focus on the use of semi-Markov processes [8] as the main source of external random disturbances. The use of semi-Markov processes significantly extends the range of application of the theoretical results, since the condition that the time $\theta_n$ spent in a state be exponentially distributed, required for continuous Markov processes $\xi(t)$, $t \ge 0$, is too strict for many applied problems. A description based on a Markov process will be incorrect if, for example, an assumption is made about a minimum time spent in a particular state, $P(\theta_n > t_{\min}) = 1$, where $t_{\min}$ is the minimum residence time. In this case, the use of semi-Markov processes is more effective, since it allows us to control the properties of the residence time, the jumps $\hat{\xi}_n$, and the interdependence between them through the semi-Markov kernel $Q(y, B, t)$ [8]. On the other hand, the use of Markov processes greatly simplifies the study of the system, since it requires estimates based only on the intensities $\lambda(y)$ of the Markov process, and when studying systems with external Markov disturbances it is usually assumed that
$$0 < \lambda_{\min} \le \lambda(y) \le \lambda_{\max} < \infty, \quad y \in Y.$$
In this study, we do not focus on the asymptotic properties of the random process $x(t)$, $t \ge 0$, but consider the problem of synthesizing optimal control on a finite interval $[0, T]$; therefore, the article does not impose additional conditions on the semi-Markov process that ensure its ergodicity [7,9,10]. Instead, the main attention is paid to the elementary study of the dynamics of the main process $x(t)$, $t \in [0, T]$, at a fixed value of the external perturbation $\xi(t) = y$, $y \in Y$, which allows us to study the optimal control more effectively using the methods proposed for Markov processes.
In this paper, we consider a model example that illustrates the steps of synthesizing optimal control under the condition that the state residence time is $\theta_n = t_{\min} + \mathrm{Pois}(\lambda(y))$, i.e., the residence time is discrete and with probability 1 greater than $t_{\min}$.
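To make this switching mechanism concrete, the following minimal Python sketch (our illustration; the two states, the rates, and the function name are ours and anticipate the model example of Section 7) samples the switching moments and states of such a semi-Markov process:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_semi_markov(T, t_min=0.2, lam={1: 0.5, 2: 1.3}, y0=1):
    """Sample switching moments tau_n and states of xi(t) on [0, T].

    Residence times are theta_n = t_min + Pois(lam(y)), so that
    P(theta_n > t_min) = 1 holds by construction -- a constraint a
    continuous-time Markov model cannot enforce. The embedded chain
    alternates 1 <-> 2 deterministically (P(1,2) = P(2,1) = 1).
    """
    t, y = 0.0, y0
    times, states = [0.0], [y0]
    while t < T:
        theta = t_min + rng.poisson(lam[y])  # residence time in state y
        t += theta
        y = 2 if y == 1 else 1               # jump of the embedded chain
        times.append(t)
        states.append(y)
    return np.array(times), np.array(states)

tau, xi = sample_semi_markov(T=10.0)
print(np.column_stack([tau, xi]))
```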

2. Problem Statement

Consider a stochastic dynamical system defined on a probability basis $(\Omega, \mathcal{F}, \mathbb{F}, P)$ [7,11]. The system is governed by the stochastic differential equation (SDE)
$$dx(t) = a(t, \xi(t), x(t), u(t))\,dt + b(t, \xi(t), x(t), u(t))\,dw(t), \quad t \in \mathbb{R}_+, \tag{2}$$
with initial conditions
$$x(0) = x_0 \in \mathbb{R}^m, \quad \xi(0) = y \in Y. \tag{3}$$
Here $\xi(t)$, $t \ge 0$, is a semi-Markov process with values in $Y$, characterized by the generator [9]
$$Q(y, B, t) = P(y, B) F_y(t), \quad y \in Y, \; B \in \mathcal{B}_Y, \; t \ge 0, \tag{4}$$
where $P(y, B)$ specifies the distribution of jumps of the nested Markov chain $\hat{\xi}_n$, $n \ge 0$ [8], and $F_y(t)$ is the conditional distribution of the time spent in state $y$. The representation of the generator based on the splitting (4) greatly simplifies the main calculations in the proofs of the theoretical results of this paper, yet still admits generalization to a general semi-Markov kernel $Q(y, B, t)$. Further, $x : [0, +\infty) \times \Omega \to \mathbb{R}^m$; $w(t)$, $t \ge 0$, is a standard Wiener process; the control $u(t) := u(t, x(t)) : [0, T] \times \mathbb{R}^m \to \mathbb{R}^m$ is an $m$-dimensional measurable function from the set of admissible controls $U$ [12]; and the processes $w$ and $\xi$ are independent [7,11].
As in [11,13], we assume that the functions $a : \mathbb{R}_+ \times Y \times \mathbb{R}^m \times U \to \mathbb{R}^m$ and $b : \mathbb{R}_+ \times Y \times \mathbb{R}^m \times U \to \mathbb{R}^m$, measurable in the set of variables, satisfy the boundedness and Lipschitz conditions
$$|a(t, y, x, u)|^2 + |b(t, y, x, u)|^2 \le C(1 + |x|^2), \tag{5}$$
$$|a(t, y, x_1, u) - a(t, y, x_2, u)|^2 + |b(t, y, x_1, u) - b(t, y, x_2, u)|^2 \le L |x_1 - x_2|^2, \quad \forall x_1, x_2 \in \mathbb{R}^m. \tag{6}$$
The semi-Markov process $\xi$ affects the trajectories of the process $x$ as follows. Suppose that on the interval $[t_0 = 0, t_1)$ the process $\xi$ takes the value $y_1$. Then the motion is governed by the system
$$dx(t) = a(t, y_1, x(t), u(t))\,dt + b(t, y_1, x(t), u(t))\,dw(t), \quad t \in \mathbb{R}_+, \; x(0) = x_0 \in \mathbb{R}^m. \tag{7}$$
According to [10,11], if conditions (5) and (6) are met, the system (7) has on the interval $[0, t_1)$ a unique (up to stochastic equivalence) solution with finite second moment.
Then, at time $t_1$, the value of the process $\xi$ changes: $y_1 \to y_2$, and on the interval $[t_1, t_2)$ the motion is governed by the system
$$dx(t) = a(t, y_2, x(t), u(t))\,dt + b(t, y_2, x(t), u(t))\,dw(t), \quad x(t_1) = x(t_1-) \in \mathbb{R}^m. \tag{8}$$
According to [10,11], if conditions (5), (6) are met, system (8) has a unique solution with a finite second moment on the interval $[t_1, t_2)$.
Thus, conditions (5), (6) guarantee the existence of a unique solution of the Cauchy problem (2), (3) with finite second moment on the interval $[0, +\infty) = \bigcup_{k \ge 0} [t_k, t_{k+1})$, $0 = t_0 < t_1 < \ldots < t_k < \ldots$. For the existence of a solution on $[0, \infty)$, we assume that the semi-Markov process is defined (regular) on $[0, \infty)$, i.e. $\forall T > 0 \colon P(\xi(T)\ \text{exists}) = 1$.
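The segment-wise construction above translates directly into a simulation scheme. The following sketch (our illustration, using an Euler–Maruyama discretization for the scalar case; the switching trajectory `tau`, `xi` can be produced by the sampler from Section 1) integrates (2) between switching moments:

```python
import numpy as np

rng = np.random.default_rng(1)

def euler_maruyama_switched(a, b, u, tau, xi, x0, T, dt=1e-3):
    """Integrate dx = a dt + b dw segment by segment, scalar case.

    On each interval [t_k, t_{k+1}) the semi-Markov component is frozen
    at y = xi_k, so (2) is an ordinary Ito SDE there; the terminal value
    of one segment serves as the initial value of the next.
    """
    t, x, k = 0.0, x0, 0
    ts, xs = [t], [x]
    while t < T:
        if k + 1 < len(tau) and t >= tau[k + 1]:
            k += 1                            # a switching moment was passed
        y = xi[k]
        dw = rng.normal(0.0, np.sqrt(dt))     # Wiener increment on [t, t+dt]
        x = x + a(t, y, x, u(t, x)) * dt + b(t, y, x, u(t, x)) * dw
        t += dt
        ts.append(t)
        xs.append(x)
    return np.array(ts), np.array(xs)

# Example usage with the drift/diffusion of Section 7 and zero control:
# ts, xs = euler_maruyama_switched(
#     a=lambda t, y, x, u: (1.0 if y == 1 else 0.1) * x + (2.0 if y == 1 else 0.5) * u,
#     b=lambda t, y, x, u: (0.5 if y == 1 else 0.8) * x,
#     u=lambda t, x: 0.0, tau=tau, xi=xi, x0=1.0, T=5.0)
```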

3. Sufficient Conditions for Optimality

We introduce a sequence of functions $v_k(t, x) : [t_k, T] \times \mathbb{R}^m \to \mathbb{R}^1$, $k \ge 0$, and the class $V := \{ v_k(t, x) \in C^{1,2}(\mathbb{R}_+ \times \mathbb{R}^m) \}$.
On the functions $v_k(t, x) \in V$ we define the weak infinitesimal operator (WIO)
$$L v_k(t, x) = \lim_{\Delta \to 0+} \frac{1}{\Delta} \left\{ \mathbb{E}\, v_k(t + \Delta, x(t + \Delta, t_k, y, u)) - v_k(t, x) \right\}, \tag{9}$$
where $x(t) = x(t, t_k, y, u)$ is the strong solution of (2) on the interval $t \in [t_k, t_{k+1})$ with control $u = u_k \in U$.
The optimal control problem is to find a control $u_k^0$, $k \ge 0$, from the set $U$ that minimizes the scalar quality functional [12]
$$I(u_k^0) = I_{u_k^0}(t, x) = \inf_{u \in U} \sum_{k=0}^{N} \mathbb{E}_{t_k, x_k} \left[ F(x(T)) + \int_t^T G(s, x(s), u(s))\,ds \right] \tag{10}$$
for some fixed $T > 0$, $F(x) \ge 0$, $G(t, x, u) \ge 0$, where $\mathbb{E}_{t,x}\{f\} = \mathbb{E}\{f \mid x(t) = x\}$ and $N = \max\{k : t_k < T\}$.
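For a fixed admissible control, one term of the functional (10) can be estimated by straightforward Monte Carlo averaging over simulated paths. The sketch below is our illustration, with a left-endpoint Riemann sum for the running cost and `simulate` standing for a path generator such as the one in Section 2:

```python
import numpy as np

def mc_cost(simulate, F, G, u, n_paths=1000, dt=1e-3):
    """Monte Carlo estimate of E[F(x(T)) + int_t^T G(s, x(s), u(s)) ds]
    for a single term of (10), with t = t_k = 0 for simplicity."""
    total = 0.0
    for _ in range(n_paths):
        ts, xs = simulate()                       # one discretized path
        run = sum(G(t, x, u(t, x)) for t, x in zip(ts[:-1], xs[:-1])) * dt
        total += F(xs[-1]) + run                  # terminal + running cost
    return total / n_paths
```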
To obtain sufficient conditions for optimality, we need to prove several auxiliary statements.
Lemma 1. Let:
1) there exist a unique solution of the Cauchy problem (2), (3) whose second moment is finite for each $t$;
2) there exist a sequence of functions $v_k : [0, T] \times \mathbb{R}^m \to \mathbb{R}^1$, $k \ge 0$, from the class $V$;
3) for $s \in [t_k, T)$, the WIO $L v_k(s, x)$ be defined on the solutions of (2), (3).
Then $\forall\, \tilde t_1, \tilde t_2 \in [t_0, T]$ the equality
$$\mathbb{E}_{t_k, x_k} v_k(\tilde t_2, x(\tilde t_2)) - \mathbb{E}_{t_k, x_k} v_k(\tilde t_1, x(\tilde t_1)) = \int_{\tilde t_1}^{\tilde t_2} \mathbb{E}_{t_k, x_k} L v_k(s, x(s))\,ds \tag{11}$$
holds.
Proof. 
For the Markov process $x(t, t_k, y, u)$ with respect to the $\sigma$-algebra $\mathcal{F}_{\tilde t_1, \tilde t_2}$ constructed on the interval $[\tilde t_1, \tilde t_2]$, $\tilde t_1, \tilde t_2 \in [t_k, T]$, the following Dynkin formula [7] holds:
$$\mathbb{E}_{t_k, x_k} v_k(\tilde t_1 + \tau_r(\tilde t_2), x(\tilde t_1 + \tau_r(\tilde t_2), t_k, y, u)) = v_k(t, x) + \mathbb{E}_{t_k, x_k} \int_{t_0}^{\tau_r(\tilde t_2)} L v_k(\tilde t_1 + \tau, x(\tilde t_1 + \tau, t_k, y, u))\,d\tau,$$
where $\tau_r = \inf_t \{ |x(t)| > r \}$ and $\tau_r(\tilde t_2) = \min\{\tau_r, \tilde t_2\}$.
Since $\lim_{r \to \infty} \tau_r(\tilde t_2) = \tilde t_2$, for the solution of the problem (2), (3) we get the equality
$$\mathbb{E}_{t_k, x_k} v_k(\tilde t_1 + \tilde t_2, x(\tilde t_1 + \tilde t_2, t_k, y, u)) = v_k(t, x) + \mathbb{E}_{t_k, x_k} \int_{t_0}^{\tilde t_2} L v_k(\tilde t_1 + \tau, x(\tilde t_1 + \tau, t_k, y, u))\,d\tau. \tag{12}$$
Similarly, we write the Dynkin formula on the interval $[t_0, t_1]$ and, subtracting it from (12), obtain (11). Lemma 1 is proved. □
Lemma 2. Let:
1) conditions 1) and 2) of Lemma 1 be fulfilled;
2) for $t \in [t_0, T]$ and $v_k \in V$, $k \ge 0$, the equation
$$L v_k(t, x) + G(t, x, u_k(t, x)) = 0 \tag{13}$$
hold with the boundary condition
$$v_k(T, x) = 0, \quad k \ge 0, \tag{14}$$
where $L v_k(t, x)$ is the WIO defined by (9).
Then $v_k \in V$, $k \ge 0$, can be written as
$$v_k(t, x) = \mathbb{E}_{t_k, x_k} \int_t^T G(s, x, u(s, x))\,ds, \quad t \in [t_k, T]. \tag{15}$$
Proof. 
Consider the solution $x(t) \in \mathbb{R}^m$ of the problem (2), (3) for $t \in [t_k, T]$, constructed from the corresponding initial condition.
We integrate (13) with respect to $s$ from $t_k$ to $T$ and take the mathematical expectation. We get
$$\mathbb{E}_{t_k, x_k} \int_{t_k}^T L v_k(s, x)\,ds + \mathbb{E}_{t_k, x_k} \int_{t_k}^T G(s, x, u_k(s, x))\,ds = 0. \tag{16}$$
According to Lemma 1, the first term in (16) exists and equals the increment (11):
$$\mathbb{E}_{t_k, x_k} \int_{t_k}^T L v_k(s, x)\,ds = \mathbb{E}_{t_k, x_k} v_k(T, x(T)) - \mathbb{E}_{t_k, x_k} v_k(t, x) = -v_k(t, x),$$
where $\mathbb{E}_{t_k, x_k} v_k(T, x(T)) = v_k(T, x(T)) = 0$ according to (14) and $\mathbb{E}_{t_k, x_k} v_k(t, x(t)) = v_k(t, x)$. Thus,
$$\mathbb{E}_{t_k, x_k} \int_{t_k}^T L v_k(s, x)\,ds = -v_k(t, x). \tag{17}$$
Substituting (17) into (16), we obtain the statement of Lemma 2. □
Theorem 1.
Let:
1) there exists a unique solution to the Cauchy problem (2), (3), whose second moment is finite for each t;
2) there exists a sequence of functions $v_k \in V$, $k \ge 0$, and an optimal control $u_k^0 \in U$, $k \ge 0$, that satisfy the equation
$$L v_k(t, x) + G(t, x, u_k^0(t, x)) = 0 \tag{18}$$
with the boundary condition
$$v_k(T, x) = F(x(T)); \tag{19}$$
3) $\forall t \in [0, T]$ and $\forall u_k \in U$, $k \ge 0$, the following inequality holds:
$$L v_k(t, x) + G(t, x, u_k(t)) \ge 0, \tag{20}$$
where $L = L(t, x, u)$ is the WIO (9) on the solutions of (2), (3).
Then the control $u_k^0$ is optimal and $\forall t \in [t_0, T]$
$$I_{u_k^0}(t, x) = \inf_{u \in U} I_u(t, x) = v_k(t, x). \tag{21}$$
The sequence of functions $v_k(t, x)$ is called the control cost or Bellman function, and equation (18) can be written as the Bellman equation
$$\inf_{u \in U} \left[ L(t, x_k, u) v_k(t, x_k) + G(t, x_k, u) \right] = 0. \tag{22}$$
Proof. 
An optimal control is also an admissible control. Therefore, there exists a solution $x(t, t_k, y, u_k^0)$ for which (18) takes the form
$$L v_k(t, x(t, t_k, y, u_k^0)) + G(t, x(t, t_k, y, u_k^0), u_k^0(t, x)) = 0, \tag{23}$$
where $u_k^0$ is taken at the point $(t, x(t, t_k, y, u_k^0))$, $t \in [t_k, t_{k+1})$.
We integrate (23) from $t$ to $T$, take the mathematical expectation, and, taking into account (19), obtain
$$v_k(t, x) = \mathbb{E}_{t_k, x_k} \left[ F(x(T)) - \int_t^T L v_k(\tau, x(\tau, t_0, y, u_k^0))\,d\tau \right].$$
Now let $u_k = u_k(t)$ be an arbitrary control from the class of admissible controls $U$. Then, according to condition 3) of the theorem, the following inequality holds $\forall u_k \in U$, $k \ge 0$:
$$L v_k(\tau, x(\tau, t_0, y, u_k)) + G(\tau, x(\tau, t_0, y, u_k), u_k(\tau)) \ge 0. \tag{24}$$
We integrate (24) over $\tau \in [t_0, T]$ and take the mathematical expectation at fixed $\tau$ and initial value $x$. Taking Lemmas 1 and 2 into account, we obtain
$$v_k(t, x(t, t_0, y, u_k^0)) \le \mathbb{E}_{t_k, x_k} \left[ F(x(T, u_k)) + \int_{t_k}^T G(\tau, x(\tau, t_0, y, u_k), u_k(\tau))\,d\tau \right] = v_k(t, x(t, t_0, y, u_k)).$$
And this, in fact, is the definition of the optimal control $u_k^0(t, x)$ in the sense of minimizing the quality functional $I_u(t, x)$. Theorem 1 is proved. □

4. General Solution of the Optimal Control Problem

The following theorem holds.
Theorem 2.
The weak infinitesimal operator on the solutions of the Cauchy problem (2), (3), applied to the functions $v_k(t, x) \in V$, is calculated by the formula
$$L v_k(t, x) = \frac{\partial v_k(t, x)}{\partial t} + \left( \nabla v_k(t, x),\, a(t, y, x, u) \right) + \frac{1}{2} \operatorname{Sp}\left( b^T(t, y, x, u) \cdot \nabla^2 v_k(t, x) \cdot b(t, y, x, u) \right)$$
$$+ \sum_{i \ne j}^{N} \left[ \int_{\mathbb{R}^m} v_k(t, z; j)\, p_{ij}(t, z \mid x)\,dz - v_k(t, x; i) \right] q_{ij}, \tag{25}$$
where $(\cdot, \cdot)$ is the scalar product, $\nabla v_k := (\partial v_k / \partial x_1, \ldots, \partial v_k / \partial x_m)^T$, $\nabla^2 v_k := [\partial^2 v_k / \partial x_i \partial x_j]_{i,j=1}^m$, $k \ge 0$, the superscript $T$ denotes transposition, $\operatorname{Sp}$ is the matrix trace,
$$q_{ij} = \lim_{t \to 0} \frac{Q(i, j, t)}{t} = P(i, j)\, F_i'(0),$$
and $p_{ij}(t, z \mid x)$ is the conditional density of the process $x(t)$ given $\xi(t-) = i$, $\xi(t) = j$:
$$P\left( x(t) \in [z, z + dz] \mid x(t-) = x,\ \xi(t-) = i,\ \xi(t) = j \right) = p_{ij}(t, z \mid x)\,dz + o(dz). \tag{26}$$
In the last term of formula (25), the last argument of $v_k(t, x; i)$ indicates the value of the semi-Markov process at the given time, i.e. $\xi(t) = y_i$.
Proof. 
The first three terms can be obtained in the same way as in [14]. Let us obtain the form of the last term, which is related to the semi-Markov parameter $\xi$. To do this, consider the hypotheses $H_{ij} = \{\xi(t) = i,\ \xi(t + \Delta) = j\}$, $i, j \in Y$. Conditioning on these hypotheses, we obtain
$$\mathbb{E}\, v_k(t + \Delta, x(t + \Delta, t_k, y, u)) = \sum_{i, j \in Y} \mathbb{E}\left[ v_k(t + \Delta, x(t + \Delta, t_k, y, u))\, I_{H_{ij}} \right]$$
$$= \sum_{i = j} \mathbb{E}\left[ v_k(t + \Delta, x(t + \Delta, t_k, y, u))\, I_{H_{ij}} \right] + \sum_{i \ne j} \mathbb{E}\left[ v_k(t + \Delta, x(t + \Delta, t_k, y, u))\, I_{H_{ij}} \right].$$
For the first term, corresponding to the absence of a change of state of the semi-Markov process, the conditions imposed on the coefficients of the initial equation allow us to write
$$\lim_{\Delta \to 0} \frac{1}{\Delta} \left\{ \sum_{i = j} \mathbb{E}\left[ v_k(t + \Delta, x(t + \Delta, t_k, y, u))\, I_{H_{ij}} \right] - v_k(t, x) \right\}$$
$$= \frac{\partial v_k(t, x)}{\partial t} + \left( \nabla v_k(t, x),\, a(t, y, x, u) \right) + \frac{1}{2} \operatorname{Sp}\left( b^T(t, y, x, u) \cdot \nabla^2 v_k(t, x) \cdot b(t, y, x, u) \right).$$
On the other hand, when the state of the semi-Markov process changes near time $t$, taking into account the form (26) of the conditional transition probability, we obtain
$$\lim_{\Delta \to 0} \frac{1}{\Delta} \sum_{i \ne j} \mathbb{E}\left[ v_k(t + \Delta, x(t + \Delta, t_k, y, u))\, I_{H_{ij}} \right] = \sum_{i \ne j}^{N} \left[ \int_{\mathbb{R}^m} v_k(t, z; j)\, p_{ij}(t, z \mid x)\,dz - v_k(t, x; i) \right] q_{ij}.$$
Theorem 2 is proved. □
The first equation for finding $v_k^0(t, x)$, $k \ge 0$, is obtained by substituting (25) into (18). We get
$$\frac{\partial v_k^0(t, x; i)}{\partial t} + \left( \frac{\partial v_k^0(t, x; i)}{\partial x} \right)^T a(t, y_i, x, u) + \frac{1}{2} \operatorname{Sp}\left( b^T(t, y_i, x, u) \cdot \frac{\partial^2 v_k^0(t, x; i)}{\partial x^2} \cdot b(t, y_i, x, u) \right)$$
$$+ \sum_{i \ne j}^{N} \left[ \int_{\mathbb{R}^m} v_k^0(t, z; j)\, p_{ij}(t, z \mid x)\,dz - v_k^0(t, x; i) \right] q_{ij} + G(t, x, u) = 0 \tag{27}$$
with the boundary condition
$$v_k^0(T, x) = F(x). \tag{28}$$
The second equation, for finding the optimal control $u_k^0(t, x)$, is obtained from (27) by differentiating with respect to $u$, since $u = u_k^0$, $k \ge 0$, delivers the minimum of the left-hand side of (27) (the arguments $(t, y_i, x, u)$ of $a$, $b$ and $(t, x; i)$ of $v_k^0$ are suppressed for brevity):
$$\left( \frac{\partial v_k^0}{\partial x} \right)^T \frac{\partial a}{\partial u} + \frac{1}{2} \operatorname{Sp}\left( \left( \frac{\partial b}{\partial u} \right)^T \frac{\partial^2 v_k^0}{\partial x^2}\, b + b^T \frac{\partial^2 v_k^0}{\partial x^2} \frac{\partial b}{\partial u} \right) + \left. \frac{\partial G(t, x, u)}{\partial u} \right|_{u = u_k^0} = 0, \tag{29}$$
where $\partial a / \partial u$ is the $m \times m$ Jacobian with elements $\{ \partial a_n / \partial u_s,\ n = \overline{1, m},\ s = \overline{1, m} \}$ ($\partial b / \partial u$ is defined similarly), and $\partial G / \partial u := (\partial G / \partial u_1, \ldots, \partial G / \partial u_r)$, $k \ge 0$.
Solving the system (27), (29) is a very difficult task even with modern computing technology. Therefore, it is advisable to consider a simplified version of the problem (2), (3), (10), namely a linear system with a quadratic quality functional.

5. Synthesis of Optimal Control for a Linear Stochastic System

Consider the problem of optimal control for a linear stochastic dynamical system given by the stochastic differential equation
$$dx(t) = \left[ A(t, \xi(t)) x(t) + B(t, \xi(t)) u(t) \right] dt + \sigma(t, \xi(t)) x(t)\,dw(t), \quad t \in \mathbb{R}_+, \tag{30}$$
with the initial conditions
$$x(0) = x_0 \in \mathbb{R}^m, \quad \xi(0) = y \in Y. \tag{31}$$
Here $A$, $B$, $\sigma$ are piecewise continuous, integrable matrix functions of appropriate dimensions.
The optimal control problem for the system (30), (31) is to find a control $u_k^0$, $k \ge 0$, from the set of admissible controls $U$ that minimizes the quadratic quality functional
$$I(u_k) = I_{u_k}(t, x) = \sum_{k=0}^{N} \mathbb{E}_{t_k, x_k} \left[ x^T(T) M_0(\xi(t)) x(T) + \int_t^T \left[ u^T(s) M_1(s, \xi(s)) u(s) + x^T(s) M_2(s, \xi(s)) x(s) \right] ds \right], \tag{32}$$
where $M_1(t, \xi(t))$ is an $m \times m$ matrix, uniformly positive definite with respect to $t \in [0, T]$, and $M_0(\xi(t))$ and $M_2(t, \xi(t))$ are nonnegative definite $m \times m$ matrices. To simplify, we introduce the notation
$$A_i(t) = A(t, y_i), \quad B_i(t) = B(t, y_i), \quad \sigma_i(t) = \sigma(t, y_i),$$
$$M_{0i} = M_0(y_i), \quad M_{1i}(t) = M_1(t, y_i), \quad M_{2i}(t) = M_2(t, y_i).$$
Theorem 3.
The optimal control for the problem (30)–(32) has the following form:
$$u_k^0(t, x; i) = -M_{1i}^{-1}(t) B_i^T(t) P_i(t) x(t), \tag{33}$$
where the nonnegative definite $m \times m$ matrix $P_i(t) = P(t, \xi(t))$ defines the Bellman functional
$$v_k^0(t, x; i) = x^T(t) P_i(t) x(t) + g(t), \quad v_k^0(T, x; i) = x^T(T) M_{0i} x(T). \tag{34}$$
Here g is a non-negative scalar function.
Proof. 
Bellman’s equation for (30)–(32) has the form
$$\inf_{u \in U} \left[ L v_k^0(t, x; i) + u_k^T(t, x; i) M_{1i}(t) u_k(t, x; i) + x^T(t) M_{2i}(t) x(t) \right] = 0, \tag{35}$$
where
$$L v_k^0(t, x; i) = \frac{\partial v_k^0(t, x; i)}{\partial t} + \dot g(t) + \left[ A_i(t) x(t) + B_i(t) u_k(t, x; i) \right]^T \nabla v_k^0(t, x; i)$$
$$+ \frac{1}{2} \operatorname{Sp}\left( \sigma_i^T(t) \cdot \nabla^2 v_k^0(t, x; i) \cdot \sigma_i(t) \right) + \sum_{i \ne j}^{N} \left[ v_k^0(t, x; j) - v_k^0(t, x; i) \right] q_{ij}. \tag{36}$$
Substitute (36) into (35):
$$\frac{\partial v_k^0(t, x; i)}{\partial t} + \dot g(t) + \left[ A_i(t) x(t) + B_i(t) u_k(t, x; i) \right]^T \nabla v_k^0(t, x; i) + \frac{1}{2} \operatorname{Sp}\left( \sigma_i^T(t) \cdot \nabla^2 v_k^0(t, x; i) \cdot \sigma_i(t) \right)$$
$$+ \sum_{i \ne j}^{N} \left[ v_k^0(t, x; j) - v_k^0(t, x; i) \right] q_{ij} + u_k^T(t, x; i) M_{1i}(t) u_k(t, x; i) + x^T(t) M_{2i}(t) x(t) = 0. \tag{37}$$
The form of the optimal control is obtained by differentiating (37) with respect to $u$, since $u_k(t, x; i) = u_k^0(t, x; i)$ minimizes the left-hand side of (37):
$$u_k^0(t, x; i) = -\frac{1}{2} M_{1i}^{-1}(t) B_i^T(t) \nabla v_k^0(t, x; i), \tag{38}$$
where
$$\nabla v_k^0(t, x; i) = 2 P_i(t) x(t).$$
Therefore,
$$u_k^0(t, x; i) = -M_{1i}^{-1}(t) B_i^T(t) P_i(t) x(t).$$
Theorem 3 is proved. □

6. Construction of the Bellman Equation

Substituting (33) and (34) into equation (35), we obtain the following equation for $t \in [t_k, t_{k+1})$:
$$x^T(t) \frac{dP_i(t)}{dt} x(t) + \dot g(t) + 2 \left[ A_i(t) x(t) - B_i(t) M_{1i}^{-1}(t) B_i^T(t) P_i(t) x(t) \right]^T P_i(t) x(t) + \operatorname{Sp}\left( \sigma_i^T(t) P_i(t) \sigma_i(t) \right)$$
$$+ \sum_{i \ne j}^{N} \left[ x^T(t) P_j(t) x(t) - x^T(t) P_i(t) x(t) \right] q_{ij} + \left[ M_{1i}^{-1}(t) B_i^T(t) P_i(t) x(t) \right]^T M_{1i}(t) M_{1i}^{-1}(t) B_i^T(t) P_i(t) x(t) + x^T(t) M_{2i}(t) x(t) = 0.$$
Equating to zero the quadratic form in $x$ and the terms that do not depend on $x$, and taking into account the matrix identity $2 x^T A_i^T P_i x = x^T (P_i A_i + A_i^T P_i) x$, we obtain a system of differential equations for finding the matrices $P_i(t)$, $t \in [t_k, t_{k+1})$, $k \ge 0$:
$$\frac{dP_i(t)}{dt} + A_i^T(t) P_i(t) + P_i(t) A_i(t) - 2 P_i(t) B_i(t) M_{1i}^{-1}(t) B_i^T(t) P_i(t) + \sum_{i \ne j}^{N} \left[ P_j(t) - P_i(t) \right] q_{ij}$$
$$+ \left[ M_{1i}^{-1}(t) B_i^T(t) P_i(t) \right]^T B_i^T(t) P_i(t) + M_{2i}(t) = 0, \tag{39}$$
$$\operatorname{Sp}\left( \sigma_i^T(t) P_i(t) \sigma_i(t) \right) + \dot g(t) = 0, \tag{40}$$
with the boundary condition
$$P_i(T) = M_{0i}. \tag{41}$$
Thus, we can formulate the following theorem.
Theorem 4.
If the quality functional for the system (30), (31) is (32) and the control cost is (34), then the system of differential equations for finding the matrices $P_i(t)$, $t \in [t_k, t_{k+1})$, $k \ge 0$, has the form (39)–(41).
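Numerically, (39)–(41) is a terminal-value problem, so the coupled matrix equations can be integrated backward from $t = T$. Below is a minimal sketch for a scalar two-state case; the simplified scalar form of (39), $2a_i P_i - b_i^2 M_{1i}^{-1} P_i^2 + (P_j - P_i) q_{ij} + M_{2i} = 0$, is our reduction, and the coefficient values are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative scalar two-state data (not the values of Section 7).
a  = {1: 1.0, 2: -0.5};  b  = {1: 1.0, 2: 1.0}
M1 = {1: 1.0, 2: 1.0};   M2 = {1: 1.0, 2: 1.0}
M0 = {1: 1.0, 2: 1.0};   q  = {(1, 2): 1.0, (2, 1): 1.0}
T = 1.0

def rhs(t, P):
    """Scalar form of (39): dP_i/dt = -(2 a_i P_i - b_i^2 P_i^2 / M_1i
    + (P_j - P_i) q_ij + M_2i)."""
    P1, P2 = P
    dP1 = -(2*a[1]*P1 - b[1]**2 / M1[1] * P1**2 + (P2 - P1)*q[(1, 2)] + M2[1])
    dP2 = -(2*a[2]*P2 - b[2]**2 / M1[2] * P2**2 + (P1 - P2)*q[(2, 1)] + M2[2])
    return [dP1, dP2]

# Integrate backward from the boundary condition (41): P_i(T) = M_0i.
sol = solve_ivp(rhs, [T, 0.0], [M0[1], M0[2]], max_step=1e-3)
P1, P2 = sol.y[:, -1]
# Feedback gains of the optimal control (33): u_i = -(b_i / M_1i) P_i x.
print(P1, P2, -b[1]/M1[1]*P1, -b[2]/M1[2]*P2)
```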
Next, we prove the solvability of the system (39)–(41) using the method of Bellman iterations [15]. To simplify the calculations, we consider an interval $[t_k, t_{k+1})$ on which $\xi(t) = y_i$ and omit the index $k$ for $v$, $u$, and $P$. Let us define the zero approximation
$$u_0(t, x) = -M_1^{-1}(t) B^T(t) P_0(t) x(t), \tag{42}$$
where $P_0(t) \ge 0$ is a bounded piecewise continuous matrix. Substitute (42) into (27) and find from the resulting equation the value $v_1(t, x)$ corresponding to the control (42).
Next, we substitute $v_1(t, x)$ into the Bellman equation (22) and find the control $u_1(t, x)$ that minimizes (22).
Continuing this process, we obtain sequences of controls $u_n(t, x)$ and functionals $v_n(t, x)$ of the form
$$u_n(t, x) = -M_1^{-1}(t) B^T(t) P_n(t) x(t), \quad v_n(t, x) = x^T(t) P_n(t) x(t), \quad v_n(T, x) = x^T(T) M_0 x(T), \tag{43}$$
where $P_n(t)$, $t \in [t_k, t_{k+1})$, is the solution of the boundary value problem (39)–(41) with $T := t_{k+1}$.
For $n \ge 0$ the following estimate holds:
$$v_{n+1}(t, x) \le v_n(t, x), \quad t \in [t_k, t_{k+1}). \tag{44}$$
The convergence of the functions $v_n(t, x)$ to $v^0(t, x)$, of the controls $u_n(t, x)$ to $u^0(t, x)$, and of the sequence of matrices $P_n(t)$ to $P(t)$ can be proved using (44) [12].
The following estimate also holds:
$$\max_{t \in [t_k, t_{k+1})} \left\| P(t) - P_n(t) \right\| \le \frac{C}{n!}, \quad C < +\infty, \; k \ge 1. \tag{45}$$
Thus, the following theorem holds.
Theorem 5.
The approximate solution of the optimal control synthesis problem for the problem (30)–(32) is obtained using successive Bellman approximations, where the $n$-th approximation of the optimal control and of the Bellman functional on each interval $[t_k, t_{k+1})$, $k \ge 0$, is given by formula (43). In this case, the error is estimated by inequality (45).
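For the stationary scalar two-state reduction used in the Riccati sketch after Theorem 4, each Bellman iteration amounts to freezing the feedback gain at the previous iterate and solving a linear system for the next $P$. The following sketch is our rearrangement of that scheme, with the same illustrative coefficients:

```python
import numpy as np

# Illustrative stationary scalar data (same as the Riccati sketch above).
a  = np.array([1.0, -0.5]); b  = np.array([1.0, 1.0])
M1 = np.array([1.0, 1.0]);  M2 = np.array([1.0, 1.0])
q  = np.array([[0.0, 1.0], [1.0, 0.0]])    # q_12 = q_21 = 1

def bellman_iterations(P, n_iter=10):
    """Successive Bellman approximations for the stationary scalar case.

    Step n freezes the gains k_i = b_i P_i / M_1i and solves the linear
    system in the next iterate:
      2 (a_i - b_i k_i) P_i + sum_j q_ij (P_j - P_i) + M_1i k_i^2 + M_2i = 0.
    """
    for _ in range(n_iter):
        k = b * P / M1                        # gains of the current control
        A = np.diag(2 * (a - b * k) - q.sum(axis=1)) + q
        P = np.linalg.solve(A, -(M1 * k**2 + M2))
    return P

P = bellman_iterations(np.array([1.0, 1.0]))
print(P, -b * P / M1)   # converges to P = (2, 1) for these coefficients
```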

7. Model Example

Consider the system
$$dx(t) = \left[ a(\xi(t)) x(t) + b(\xi(t)) u(t) \right] dt + \sigma(\xi(t)) x(t)\,dw(t), \quad t \in \mathbb{R}_+, \tag{46}$$
with the initial condition
$$x(0) = 1, \quad \xi(0) = y_1. \tag{47}$$
Here, $\xi$ is a semi-Markov process with two states $y_1 = 1$, $y_2 = 2$, with transition probabilities of the nested Markov chain $q_{12} = q_{21} = 1$ and residence times $\theta_n = 0.2 + \mathrm{Pois}(\lambda(y_i))$, where $\lambda(y_1) = 0.5$ and $\lambda(y_2) = 1.3$. The coefficients are $a(y_1) = a_1 = 1$, $a(y_2) = a_2 = 0.1$, $b(y_1) = b_1 = 2$, $b(y_2) = b_2 = 0.5$, $\sigma(y_1) = \sigma_1 = 0.5$, $\sigma(y_2) = \sigma_2 = 0.8$.
The matrices in the quality functional (32) are taken to be
$$M_{01} = 3, \quad M_{02} = 1, \quad M_{11} = M_{12} = 2, \quad M_{21} = 1, \quad M_{22} = 2.$$
The Bellman functional is sought in the form
$$v_k^0(t, x; i) = x^2(t) P_i + g(t).$$
In this case, the system (39)–(41) has the form
$$2 a_i P_i - 2 b_i^2 M_{1i}^{-1} P_i^2 + \left[ P_j(t) - P_i(t) \right] q_{ij} + M_{1i}^{-1} b_i^2(t) P_i^2 + M_{2i}(t) = 0, \quad i, j = 1, 2, \; i \ne j,$$
$$\sigma_i^2 P_i + \dot g(t) = 0,$$
with the boundary condition
$$x^2(T) P_i + g(T) = x^2(T) M_{0i}.$$
Substituting the given values yields
$$-2 P_1^2 - P_1 + P_2 + 1 = 0, \quad -0.125 P_2^2 - 0.925 P_2 + P_1 + 2 = 0,$$
whence $P_1 \approx 0.847$, $P_2 \approx 1.587$.
The optimal control is as follows:
$$u_k^0(t, x; 1) = -0.847\, x(t), \quad u_k^0(t, x; 2) = -0.397\, x(t).$$
The realization of the solution of the system (46), (47) without control and under the optimal control is shown in Figure 1.
Figure 1. Solution of the system (46), (47) with the given values of the coefficients.
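Figure 1 can be reproduced qualitatively with the following sketch (our illustration: an Euler–Maruyama discretization with the feedback gains given above; the seed and step size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)

# Data of the model example (46), (47).
a = {1: 1.0, 2: 0.1}; b = {1: 2.0, 2: 0.5}; sig = {1: 0.5, 2: 0.8}
lam = {1: 0.5, 2: 1.3}
gain = {1: -0.847, 2: -0.397}        # u0(t, x; i) = gain_i * x

def simulate(T=5.0, dt=1e-3, controlled=True):
    """One trajectory of (46), (47) under semi-Markov switching."""
    t, x, y = 0.0, 1.0, 1
    t_next = 0.2 + rng.poisson(lam[y])          # first switching moment
    xs = [x]
    while t < T:
        if t >= t_next:                          # switch y1 <-> y2
            y = 2 if y == 1 else 1
            t_next = t + 0.2 + rng.poisson(lam[y])
        u = gain[y] * x if controlled else 0.0
        dw = rng.normal(0.0, np.sqrt(dt))
        x += (a[y] * x + b[y] * u) * dt + sig[y] * x * dw
        xs.append(x)
        t += dt
    return np.array(xs)

x_free = simulate(controlled=False)
x_ctrl = simulate(controlled=True)
# The closed-loop drifts a_i + b_i * gain_i are negative in both states,
# so the controlled path is typically damped while the free one grows.
print(x_free[-1], x_ctrl[-1])
```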

8. Discussion

The main focus of this paper is on the theoretical derivation of the optimal control for stochastic differential equations in the presence of external perturbations described by semi-Markov processes. This generalization allows us to describe more accurately the dynamics of real processes under various kinds of restrictions on the residence time $\theta_n$ in the states, which is impossible in the case of a Markov process. In Theorem 2, we find an explicit form of the infinitesimal operator, determined by the coefficients of the original equation and the characteristics of the semi-Markov process. This representation allows us to synthesize the optimal control $u^0(t, x)$ based on the Bellman equation (18) with the boundary condition (19). For the linear case of a system of the form (30), the search for the optimal control reduces to solving the Riccati equation (39), which also arises in the presence of a Markovian external perturbation.
In subsequent work on dynamical systems with semi-Markovian external perturbations, the main focus will be on taking into account the ergodic properties of the semi-Markov process $\xi(t)$, $t \ge 0$, when analyzing the asymptotic behavior of the system. In contrast to systems with Markovian external switching, where the ergodic properties of $\xi(t)$ are described in terms of the intensities $\lambda(y)$, in the semi-Markov case conditions on the residence times and jumps will play an important role. Thus, parameter estimation for model (2) will involve not only the coefficients $a(\cdot)$, $b(\cdot)$ but also the distribution of the residence time in the states. Therefore, the following algorithm can be proposed for system analysis and parameter estimation:
  • Estimation of the switching moments
$$\tau_n = \sum_{i=1}^{n} \theta_i;$$
this can be realized using a generalized unit root test developed for time series [16] (a crude stand-in is sketched after this list);
  • Estimation of the state space of the semi-Markov process $\xi(t)$:
$$Y = \{1, 2, \ldots, N\};$$
  • Estimation of the coefficients $a(\cdot)$, $b(\cdot)$ of the SDE (2).
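The first step can be prototyped with a simple two-sample detector on the increments of an observed trajectory. The sketch below is a crude stand-in and not an implementation of the structural-break unit root test of [16]; the window size and threshold are ad hoc choices:

```python
import numpy as np

def detect_switches(x, window=200, z_thresh=4.0):
    """Flag indices where the mean increment shifts between adjacent
    windows -- a rough proxy for the switching moments tau_n."""
    dx = np.diff(x)
    idx = []
    for i in range(window, len(dx) - window, window // 2):
        left, right = dx[i - window:i], dx[i:i + window]
        pooled = np.sqrt((left.var() + right.var()) / window)  # std of mean difference
        if pooled > 0 and abs(right.mean() - left.mean()) / pooled > z_thresh:
            idx.append(i)
    return np.array(idx)

# Example: switches = detect_switches(x_ctrl) for a path simulated above.
```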
The presented framework of stochastic dynamic systems with semi-Markov parameters offers a promising tool for systems biology and systems medicine. In systems biology, it can help model complex molecular interactions, such as oxidative stress and mitochondrial dysfunction, influenced by stochastic perturbations. In systems medicine, this approach supports personalized treatment strategies by capturing patient-specific dynamics. For instance, it can predict disease progression and optimize therapies in conditions like Parkinson’s Disease. By integrating theoretical modeling with clinical data, this framework bridges the gap between understanding disease mechanisms and advancing precision medicine.

9. Conclusions

In this paper, we solve the problem of synthesis of optimal control for stochastic dynamical systems with semi-Markov parameters. In the linear case, an algorithm for finding the optimal control is obtained and its convergence is substantiated.

Funding

This research was supported by ELIXIR-LU https://elixir-luxembourg.org/, the Luxembourgish node of ELIXIR, with funding and infrastructure provided by the Luxembourg Centre for Systems Biomedicine (LCSB). LCSB’s support contributed to the computational analyses and methodological development presented in this study.

Acknowledgments

The authors would like to acknowledge the institutional support provided by the Luxembourg Centre for Systems Biomedicine (LCSB) at the University of Luxembourg and Yuriy Fedkovych Chernivtsi National University, which facilitated the completion of this work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Lindquist, A. Optimal control of linear stochastic systems with applications to time lag systems. Information Sciences 1973, 5, 81–124.
  2. Kumar, P. R.; van Schuppen, J. H. On the optimal control of stochastic systems with an exponential-of-integral performance index. Journal of Mathematical Analysis and Applications 1981, 80, 312–332.
  3. Buckdahn, R.; Labed, B.; Rainer, C.; Tamer, L. Existence of an optimal control for stochastic control systems with nonlinear cost functional. Stochastics 2010, 82(3), 241–256.
  4. Dragan, V.; Popa, I.-L. The Linear Quadratic Optimal Control Problem for Stochastic Systems Controlled by Impulses. Symmetry 2024, 16, 1170.
  5. Das, A.; Lukashiv, T. O.; Malyk, I. V. Optimal control synthesis for stochastic dynamical systems of random structure with the Markovian switchings. Journal of Automation and Information Sciences 2017, 4(49), 37–47.
  6. Antonyuk, S. V.; Byrka, M. F.; Gorbatenko, M. Y.; Lukashiv, T. O.; Malyk, I. V. Optimal Control of Stochastic Dynamic Systems of a Random Structure with Poisson Switches and Markov Switching. Journal of Mathematics 2020.
  7. Dynkin, E. B. Markov Processes; Academic Press: New York, USA, 1965.
  8. Koroliuk, V.; Limnios, N. Stochastic Systems in Merging Phase Space; World Scientific: Hackensack, NJ, USA, 2005.
  9. Ibe, O. Markov Processes for Stochastic Modelling, 2nd ed.; Elsevier: London, UK, 2013.
  10. Gikhman, I. I.; Skorokhod, A. V. Introduction to the Theory of Random Processes; W. B. Saunders: Philadelphia, PA, USA, 1969.
  11. Øksendal, B. Stochastic Differential Equations; Springer: New York, USA, 2013.
  12. Kolmanovskii, V. B.; Shaikhet, L. E. Control of Systems with Aftereffect; American Mathematical Society: Providence, RI, USA, 1996.
  13. Jacod, J.; Shiryaev, A. N. Limit Theorems for Stochastic Processes, Vols. 1 and 2; Fizmatlit: Moscow, Russia, 1994 (in Russian).
  14. Lukashiv, T. One Form of Lyapunov Operator for Stochastic Dynamic System with Markov Parameters. Journal of Mathematics 2016.
  15. Bellman, R. Dynamic Programming; Princeton University Press: Princeton, NJ, USA, 1972.
  16. Narayan, P.; Popp, S. A New Unit Root Test with Two Structural Breaks in Level and Slope at Unknown Time. Journal of Applied Statistics 2010, 37, 1425–1438.
