Preprint
Article

Operational Matrix of New Shifted Wavelet Functions for Solving Optimal Control Problem


A peer-reviewed article of this preprint also exists. This version is not peer-reviewed.

Submitted: 09 June 2023. Posted: 12 June 2023.

Abstract
The present work presents an explicit formula for new shifted wavelet (NSW) functions. Wavelet functions have many applications and advantages in both applied and theoretical fields; they are formulated with different orthogonal polynomials to construct new techniques for treating problems in science and engineering. A new and important differentiation property of the NSW functions, expressed in terms of the NSW functions themselves, is obtained and proved in this paper. It is then utilized, together with the state parameterization technique, to find an approximate solution of the optimal control problem (OCP). The suggested method converts the OCP into a quadratic programming problem, which can easily be solved on a computer. As a result, the approximate solution is close to the exact solution even when only a small number of NSW functions is used in the estimation. An error bound for the proposed method is also discussed. Several numerical test examples are solved to demonstrate the applicability of the suggested method. For comparison, the known exact solutions are listed against the obtained approximate results in tables.
Keywords: 
Subject: Computer Science and Mathematics  -   Mathematics

1. Introduction

Wavelet functions play an interesting role in many areas of mathematics. They have been applied in approximation theory and in the solution of differential and integral equations [1,2,3,4]. The study of optimal control problems is important, and their applications are found in different disciplines based on mathematical modeling, chemistry, and physics. In most applications, the solution of an optimal control problem can only be found approximately because of its complexity, and numerous studies have therefore focused on approximate solutions of optimal control problems arising in many fields [5,6,7,8,9]. Different algorithms have been used for solving optimal control problems, including an indirect modified pseudospectral method [10], a direct Chebyshev cardinal functions method [11], a Cauchy discretization technique [12], a synthesized optimal control technique [13], a Legendre functions method [14], and evolutionary-algorithm control input range estimation [15]. See [16,17,18,19,20] for other articles exploring various optimal control problems. Wavelet functions also play an important part in approximation theory, special functions, and the numerical analysis of optimal control problems. In particular, the Chebyshev wavelet families are widely applied in approximation theory. For example, the authors in [21] employed the Boubaker wavelets together with the operational matrix of the derivative to solve a singular initial value problem, and a collocation method based on second-kind Chebyshev wavelets is presented in [22] for solving calculus of variations problems. The use of operational matrices of derivatives and integrals has been highlighted in numerical analysis [23,24,25]; it yields special algorithms that obtain accurate approximate solutions of many types of differential and integral equations with flexible computations.
Extracting an operational matrix of derivatives rests on choosing suitable basis functions in terms of celebrated special functions and expressing the first derivative of these basis functions in terms of the functions themselves. Motivated by the above discussion, we are mainly interested in presenting new shifted wavelet functions with some important properties. A novel method is suggested in this work to solve the optimal control problem: it uses the NSW functions as a basis to parameterize the state variables, and it is constructed to achieve accuracy and efficiency simultaneously. Hence the first goal of this work is to introduce the NSW functions; the second is to use the proposed basis to parameterize the system state variables and solve problems in optimal control. The rest of the work is organized as follows. Section 2 provides the definition of the NSW functions. In Section 3, the convergence of the NSW expansion is studied. A general exact formula for the NSW differentiation operational matrix is derived in Section 4, and the suggested algorithm for solving the optimal control problem is illustrated in Section 5. Section 6 demonstrates the application of the NSW functions through various optimal control examples and their simulation results, and Section 7 summarizes the concluding remarks.

2. The New Shifted Wavelet Functions

Wavelet functions have been used successfully in scientific and engineering fields. The new shifted wavelet (NSW) functions are defined by

Q_{nm}(x) = \begin{cases} \dfrac{2^{(k-1)/2}}{\sqrt{\pi}}\, Ms_m(2^{k}x - 2n + 1), & \dfrac{n-1}{2^{k-1}} \le x \le \dfrac{n}{2^{k-1}}, \\ 0, & \text{otherwise}, \end{cases}   (1)

where n = 1, 2, \dots, 2^{k-1}, k is any positive integer, m = 0, 1, \dots, M is the degree of the polynomials Ms_m, and x denotes the time.
The polynomials Ms_m(x) are generated by the recursive relation

Ms_m(x) = 2x\, Ms_{m-1}(x) - Ms_{m-2}(x), \quad m = 2, 3, 4, \dots,   (2)

with initial values

Ms_0(x) = 2, \quad Ms_1(x) = 2x,   (3)

so that Ms_m(t) = 2\cos m\theta with t = \cos\theta, and the associated weight function is W_n^k(x) = W(2^{k}x - 2n + 1).
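As a quick numerical sanity check, the polynomials and the wavelet definition above can be sketched in a few lines of Python; the sketch assumes the Chebyshev-type recurrence Ms_m = 2x Ms_{m-1} - Ms_{m-2} with Ms_0 = 2 and Ms_1 = 2x, which is the form consistent with the trigonometric relation Ms_m(cos θ) = 2 cos mθ used later in the convergence proof.

```python
import math

# The polynomials Ms_m generated by the three-term recurrence, checked
# against the closed form Ms_m(cos(theta)) = 2*cos(m*theta).

def Ms(m, x):
    """Evaluate Ms_m(x) on [-1, 1] by the recurrence Ms_m = 2x Ms_{m-1} - Ms_{m-2}."""
    p0, p1 = 2.0, 2.0 * x          # Ms_0 = 2, Ms_1 = 2x
    if m == 0:
        return p0
    for _ in range(m - 1):
        p0, p1 = p1, 2.0 * x * p1 - p0
    return p1

def Q(n, m, k, x):
    """New shifted wavelet Q_{nm}(x) of Eq. (1): supported on [(n-1)/2^{k-1}, n/2^{k-1}]."""
    lo, hi = (n - 1) / 2 ** (k - 1), n / 2 ** (k - 1)
    if not (lo <= x <= hi):
        return 0.0
    return 2 ** ((k - 1) / 2) / math.sqrt(math.pi) * Ms(m, 2 ** k * x - 2 * n + 1)

theta = 0.7
print(Ms(3, math.cos(theta)), 2 * math.cos(3 * theta))  # the two values agree
```

The agreement of the two printed values illustrates that the recurrence reproduces Ms_m(t) = 2 cos mθ, t = cos θ.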

3. Convergence Analysis of New Wavelet Functions

A function f \in L^2_{\omega}[0,1), with |\ddot f(x)| \le L, L > 0, may be expanded in terms of the new shifted wavelets (here we write \varphi_{nm} for Q_{nm}) as

f(x) = \sum_{n=1}^{\infty} \sum_{m=0}^{\infty} c_{nm}\, \varphi_{nm}(x),   (4)

where

c_{nm} = ( f(x), \varphi_{nm}(x) )_{\omega_n}.   (5)

In (5), the symbol (\cdot,\cdot)_{\omega_n} denotes the inner product with respect to the weight function \omega_n(x) on the Hilbert space over the interval [0,1]. If the infinite series in (4) is truncated, then f(x) can be rewritten in matrix form as

f(x) \approx \sum_{n=1}^{2^{k-1}} \sum_{m=0}^{M} c_{nm}\, \varphi_{nm}(x) = c^{T} \varphi(x),   (6)

where c and \varphi(x) are vectors of dimension 2^{k-1}(M+1) \times 1, given by

c = [c_{1,0}, c_{1,1}, \dots, c_{1,M}, c_{2,0}, \dots, c_{2,M}, \dots, c_{2^{k-1},0}, \dots, c_{2^{k-1},M}]^{T},
\varphi(x) = [\varphi_{1,0}, \varphi_{1,1}, \dots, \varphi_{1,M}, \varphi_{2,0}, \dots, \varphi_{2,M}, \dots, \varphi_{2^{k-1},0}, \dots, \varphi_{2^{k-1},M}]^{T}.

Note that both k and n are integers and m is the degree of the polynomials. We now state and prove a theorem that ensures the convergence of the new shifted wavelet expansion of a function.
Theorem 1.
Assume that a function f(x) \in L^2_{\omega}([0,1]), where \omega(t) = \frac{1}{\sqrt{1-t^2}}, t \ne \pm 1, has a bounded second derivative, |\ddot f(x)| \le L, L > 0. Then f can be expanded as an infinite series of the new shifted wavelets (1), the series (4) converges uniformly to f, and the coefficients c_{nm} satisfy the inequality

|c_{nm}| \le \frac{\sqrt{\pi}\, L}{2^{3/2}\, n^{3/2}\,(m^{2}-1)}, \quad m > 1.   (7)
Proof. 
Let f(x) = \sum_{n=1}^{\infty}\sum_{m=0}^{\infty} c_{nm}\varphi_{nm}(x). It follows that, for k = 1, 2, 3, \dots; n = 1, 2, \dots, 2^{k-1}; m = 0, 1, \dots, M,

c_{nm} = ( f(x), \varphi_{nm}(x) ) = \int_0^1 f(x)\,\varphi_{nm}(x)\, W_n^k(x)\, dx
= \int_0^{\frac{n-1}{2^{k-1}}} f(x)\varphi_{nm}(x)W_n^k(x)\,dx + \int_{\frac{n-1}{2^{k-1}}}^{\frac{n}{2^{k-1}}} f(x)\varphi_{nm}(x)W_n^k(x)\,dx + \int_{\frac{n}{2^{k-1}}}^{1} f(x)\varphi_{nm}(x)W_n^k(x)\,dx.

Since \varphi_{nm} vanishes outside its support, using Eq. (1) one gets

c_{nm} = \int_{\frac{n-1}{2^{k-1}}}^{\frac{n}{2^{k-1}}} f(x)\, \frac{2^{(k-1)/2}}{\sqrt{\pi}}\, Ms_m(2^{k}x-2n+1)\, W(2^{k}x-2n+1)\, dx.

If m > 1, the substitution

2^{k}x - 2n + 1 = \cos\theta, \quad x = \frac{\cos\theta + 2n - 1}{2^{k}}, \quad dx = -\frac{\sin\theta}{2^{k}}\,d\theta

gives

c_{nm} = \frac{2^{(k-1)/2}}{\sqrt{\pi}} \int_0^{\pi} f\!\left(\frac{\cos\theta+2n-1}{2^{k}}\right) 2\cos m\theta\, \frac{1}{\sqrt{1-\cos^2\theta}}\, \frac{\sin\theta}{2^{k}}\,d\theta
= \frac{2^{(k+1)/2}}{2^{k}\sqrt{\pi}} \int_0^{\pi} f\!\left(\frac{\cos\theta+2n-1}{2^{k}}\right) \cos m\theta\, d\theta.
Integrating by parts with

u = f\!\left(\frac{\cos\theta+2n-1}{2^{k}}\right), \quad du = -\dot f\!\left(\frac{\cos\theta+2n-1}{2^{k}}\right)\frac{\sin\theta}{2^{k}}\,d\theta, \quad dv = \cos m\theta\,d\theta, \quad v = \frac{\sin m\theta}{m}, \ m \ge 1,

yields

c_{nm} = \frac{2^{(k+1)/2}}{2^{k}\sqrt{\pi}}\left[f\!\left(\frac{\cos\theta+2n-1}{2^{k}}\right)\frac{\sin m\theta}{m}\right]_0^{\pi} + \frac{2^{(k+1)/2}}{m\,2^{2k}\sqrt{\pi}} \int_0^{\pi} \dot f\!\left(\frac{\cos\theta+2n-1}{2^{k}}\right)\sin m\theta\,\sin\theta\,d\theta,

in which the boundary term vanishes. Integrating by parts again, with

u = \dot f\!\left(\frac{\cos\theta+2n-1}{2^{k}}\right), \quad du = -\ddot f\!\left(\frac{\cos\theta+2n-1}{2^{k}}\right)\frac{\sin\theta}{2^{k}}\,d\theta, \quad dv = \sin m\theta\,\sin\theta\,d\theta, \quad v = \frac{\sin(m-1)\theta}{m-1} - \frac{\sin(m+1)\theta}{m+1},

and noting that the boundary term again vanishes, we obtain

c_{nm} = \frac{2^{(k+1)/2}}{m\,2^{2k}\sqrt{\pi}} \int_0^{\pi} \ddot f\!\left(\frac{\cos\theta+2n-1}{2^{k}}\right)\sin\theta\left(\frac{\sin(m-1)\theta}{m-1} - \frac{\sin(m+1)\theta}{m+1}\right)d\theta.
Thus, we get

|c_{nm}| = \left| \frac{2^{(k+1)/2}}{m\,2^{2k}\sqrt{\pi}} \int_0^{\pi} \ddot f\!\left(\frac{\cos\theta+2n-1}{2^{k}}\right)\sin\theta\left(\frac{\sin(m-1)\theta}{m-1} - \frac{\sin(m+1)\theta}{m+1}\right)d\theta \right|
\le \frac{2^{(k+1)/2}\,L}{m\,2^{2k}\sqrt{\pi}} \int_0^{\pi} \left|\sin\theta\left(\frac{\sin(m-1)\theta}{m-1} - \frac{\sin(m+1)\theta}{m+1}\right)\right| d\theta.

However,

\int_0^{\pi} \left|\sin\theta\left(\frac{\sin(m-1)\theta}{m-1} - \frac{\sin(m+1)\theta}{m+1}\right)\right| d\theta \le \int_0^{\pi}\left( \left|\frac{\sin\theta\,\sin(m-1)\theta}{m-1}\right| + \left|\frac{\sin\theta\,\sin(m+1)\theta}{m+1}\right| \right) d\theta \le \frac{2m\pi}{m^{2}-1}.

Hence

|c_{nm}| \le \frac{2^{(k+1)/2}\,L}{m\,2^{2k}\sqrt{\pi}} \cdot \frac{2m\pi}{m^{2}-1} = \frac{2^{(k+1)/2}\cdot 2\sqrt{\pi}\,L}{2^{2k}\,(m^{2}-1)}.

Since n \le 2^{k-1}, the inequality becomes

|c_{nm}| \le \frac{\sqrt{\pi}\,L}{2^{3/2}\, n^{3/2}\,(m^{2}-1)},

which is (7).
Therefore, the wavelet expansion \sum_{n=1}^{\infty}\sum_{m=0}^{\infty} c_{nm}\varphi_{nm}(x) converges uniformly to f(x). □
Accuracy Analysis
If the function f(x) is expanded in terms of the new shifted wavelet functions as in Eqs. (4)-(5), that is,

f(x) = \sum_{n=1}^{\infty}\sum_{m=0}^{\infty} c_{nm}\varphi_{nm}(x),

then it is not possible to compute an infinite number of terms, and the series must be truncated as

f_M(x) = \sum_{n=1}^{2^{k-1}} \sum_{m=0}^{M-1} c_{nm}\varphi_{nm}(x),

so that f(x) - f_M(x) = r(x), where r(x) is the residual function

r(x) = \sum_{n=2^{k-1}+1}^{\infty} \sum_{m=M}^{\infty} c_{nm}\varphi_{nm}(x).

The coefficients must be selected such that r(x) is smaller than some convergence tolerance \epsilon, that is,

\left( \int_0^1 | f(x) - f_M(x) |^{2}\, \omega_n(x)\, dx \right)^{1/2} < \epsilon

for all M greater than some positive integer M_0.
Estimating the accuracy of a numerical method is crucial for describing its applicability and performance. Theorem 2 discusses the accuracy of the new shifted wavelet representation of a function.
Theorem 2.
Let f be a continuous function defined on [0, 1) with |\ddot f(x)| \le L. Then the accuracy estimate is given by

C_{n,M} \le \left( \frac{\sqrt{\pi}\,L}{2^{3/2}} \sum_{n=2^{k-1}+1}^{\infty} \sum_{m=M}^{\infty} \frac{1}{n^{3/2}\,(m^{2}-1)} \right)^{1/2},

where C_{n,M} = \left( \int_0^1 |r(x)|^{2}\, \omega_n(x)\, dx \right)^{1/2}.
Proof. 
Since C_{n,M} = \left( \int_0^1 |r(x)|^{2}\omega_n(x)\,dx \right)^{1/2}, we have

C_{n,M}^{2} = \int_0^1 |r(x)|^{2}\omega_n(x)\,dx
= \int_0^1 \Big| \sum_{n=2^{k-1}+1}^{\infty}\sum_{m=M}^{\infty} c_{nm}\varphi_{nm}(x) \Big|^{2} \omega_n(x)\,dx
= \sum_{n=2^{k-1}+1}^{\infty}\sum_{m=M}^{\infty} |c_{nm}|^{2} \int_0^1 |\varphi_{nm}(x)|^{2}\omega_n(x)\,dx.

From the orthonormality of \varphi_{nm}, one gets

C_{n,M}^{2} = \sum_{n=2^{k-1}+1}^{\infty}\sum_{m=M}^{\infty} |c_{nm}|^{2}.

Using the bound from Eq. (7),

C_{n,M}^{2} \le \frac{\sqrt{\pi}\,L}{2^{3/2}} \sum_{n=2^{k-1}+1}^{\infty}\sum_{m=M}^{\infty} \frac{1}{n^{3/2}\,(m^{2}-1)},

or C_{n,M} \le \left( \frac{\sqrt{\pi}\,L}{2^{3/2}} \sum_{n=2^{k-1}+1}^{\infty}\sum_{m=M}^{\infty} \frac{1}{n^{3/2}\,(m^{2}-1)} \right)^{1/2}. □

4. Operational Matrix of the NSW

This section derives the operational matrix of derivatives for the NSW functions. Based on the NSW vector Q(x) defined in (1), the operational matrix of the integer-order derivative can be determined as follows. The following theorem is needed hereafter.
Theorem 3.
Let Q(x) be the NSW vector defined in (1). Then the first derivative of the vector Q(x) can be expressed as

\frac{dQ(x)}{dx} = D_Q\, Q(x),   (8)

where D_Q is the 2^{k-1}(M+1) \times 2^{k-1}(M+1) operational matrix of differentiation, defined by the block-diagonal form

D_Q = \begin{pmatrix} D & O & \cdots & O \\ O & D & \cdots & O \\ \vdots & & \ddots & \vdots \\ O & O & \cdots & D \end{pmatrix},   (9)

in which D is an (M+1) \times (M+1) matrix whose elements are given explicitly by

D_{i,j} = 2^{k} \begin{cases} i, & i \text{ odd},\ j = 0, \\ 2i, & i > j,\ i - j \text{ odd}, \\ 0, & \text{otherwise}. \end{cases}   (10)
Proof. 
By the definition of the NSW functions, the r-th element of the vector Q(x) can be written as

Q_r(x) = Q_{n,m}(x) = \frac{2^{(k-1)/2}}{\sqrt{\pi}}\, Ms_m(2^{k}x - 2n + 1), \quad \frac{n-1}{2^{k-1}} \le x \le \frac{n}{2^{k-1}},

and Q_r(x) = 0 outside the interval [\frac{n-1}{2^{k-1}}, \frac{n}{2^{k-1}}], where r = (n-1)(M+1) + (m+1), m = 0, 1, \dots, M, n = 1, 2, \dots, 2^{k-1}. Equivalently,

Q_{n,m}(x) = \frac{2^{(k-1)/2}}{\sqrt{\pi}}\, Ms_m(2^{k}x - 2n + 1)\, \chi_{[\frac{n-1}{2^{k-1}},\, \frac{n}{2^{k-1}}]}(x),   (11)

where

\chi_{[\frac{n-1}{2^{k-1}},\, \frac{n}{2^{k-1}}]}(x) = \begin{cases} 1, & x \in [\frac{n-1}{2^{k-1}}, \frac{n}{2^{k-1}}], \\ 0, & \text{otherwise}. \end{cases}   (12)
Differentiating Eq. (11) with respect to x yields

\frac{dQ_{n,m}(x)}{dx} = \frac{2^{(k-1)/2}}{\sqrt{\pi}}\, 2^{k}\, \dot{Ms}_m(2^{k}x - 2n + 1), \quad x \in \left[\frac{n-1}{2^{k-1}}, \frac{n}{2^{k-1}}\right].   (13)

Hence the NSW expansion of this derivative involves only those elements of Q(x) that are non-zero in the interval [\frac{n-1}{2^{k-1}}, \frac{n}{2^{k-1}}], that is, Q_r(x) with r = (n-1)(M+1)+1, \dots, n(M+1). This enables us to expand dQ_{n,m}(x)/dx in terms of the NSW functions in the form

\frac{dQ_{n,m}(x)}{dx} = \sum_{r=(n-1)(M+1)+1}^{n(M+1)} a_r\, Q_r(x).   (14)

Since Ms_0 is constant, dQ_{n,0}(x)/dx = 0 for every n; consequently the elements of the first row of the matrix D given in Eq. (10) are zeros, and the operational matrix D_Q is the block matrix defined in Eq. (9).
Now, substituting the expansion of the derivative

\dot{Ms}_m(x) = 2m \begin{cases} \sum_{i=1}^{(m-1)/2} Ms_{m-2i+1}(x) + \frac{1}{2} Ms_0(x), & m \text{ odd}, \\ \sum_{i=1}^{m/2} Ms_{m-2i+1}(x), & m \text{ even}, \end{cases}   (15)

back into Eq. (13), and expressing the result in the NSW basis, gives

\frac{dQ_{n,m}(x)}{dx} = 2^{k}\, 2m \begin{cases} \sum_{i=1}^{(m-1)/2} Q_{(n-1)(M+1)+m-2i+2}(x) + \frac{1}{2}\, Q_{(n-1)(M+1)+1}(x), & m \text{ odd}, \\ \sum_{i=1}^{m/2} Q_{(n-1)(M+1)+m-2i+2}(x), & m \text{ even}. \end{cases}   (16)

Choosing D_{i,j} as in Eq. (10), the equation \frac{dQ(x)}{dx} = D_Q\, Q(x) holds, which completes the proof. □
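A minimal numerical sketch of one diagonal block of the matrix D, built directly from the case-by-case rule of Eq. (10) for k = 1 (so the wavelet argument is t = 2x - 1), and checked against the analytic derivative of the polynomials Ms_m:

```python
import numpy as np

# One block of the differentiation matrix D per Eq. (10), with row/column
# index equal to the polynomial degree.  For k = 1 the derivative
# d/dx Ms_m(2x - 1) equals 4 m sin(m*theta)/sin(theta), theta = arccos(2x-1),
# which the matrix-vector product must reproduce.

def D_matrix(M, k=1):
    D = np.zeros((M + 1, M + 1))
    for i in range(M + 1):
        for j in range(M + 1):
            if i % 2 == 1 and j == 0:
                D[i, j] = 2 ** k * i          # first case of Eq. (10)
            elif i > j and (i - j) % 2 == 1:
                D[i, j] = 2 ** k * 2 * i      # second case of Eq. (10)
    return D

def Ms_vec(M, t):
    """Values [Ms_0(t), ..., Ms_M(t)] via the three-term recurrence."""
    v = np.zeros(M + 1)
    v[0] = 2.0
    if M >= 1:
        v[1] = 2.0 * t
    for m in range(2, M + 1):
        v[m] = 2.0 * t * v[m - 1] - v[m - 2]
    return v

M, x = 5, 0.3
t = 2 * x - 1
theta = np.arccos(t)
lhs = D_matrix(M) @ Ms_vec(M, t)                       # operational-matrix form
rhs = np.array([4 * m * np.sin(m * theta) / np.sin(theta) for m in range(M + 1)])
print(np.allclose(lhs, rhs))                           # True
```

Note that in the first case (j = 0, i odd) both conditions of Eq. (10) overlap; the first listed case takes precedence, which the `elif` encodes.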

5. The NSW Algorithm for Solving Optimal Control Problem

In this section, the task of optimizing systems governed by ordinary differential equations, which leads to optimal control problems, is investigated. Such problems arise in many applications in astronautics and aeronautics.
Consider the following process on the fixed interval [0,1]: minimize

J = \int_0^1 F(t, u(t), x(t))\, dt,

subject to

u(t) = f(t, x(t), \dot x(t)),   (17)

together with the boundary conditions

x(0) = x_0, \quad x(1) = x_1,   (18)

where x(\cdot): [0,1] \to \mathbb{R} is the state variable, u(\cdot): [0,1] \to \mathbb{R} is the control variable, and the function f is assumed to be real-valued and continuously differentiable.
First, we approximate the state variable x(t) and its derivative \dot x(t) in terms of the NSW functions as

x(t) = \sum_{i=0}^{m} a_i\, Q_i(t),   (19)

\dot x(t) = \sum_{i=0}^{m} a_i\, (D_Q Q)_i(t),   (20)

where a = [a_0, a_1, \dots, a_m]^{T} is the vector of unknown parameters. The second step is to obtain the approximation for the control variable by substituting Eqs. (19) and (20) into Eq. (17):

u(t) = f\!\left(t, \sum_{i=0}^{m} a_i Q_i(t), \sum_{i=0}^{m} a_i (D_Q Q)_i(t)\right).
Finally, the performance index J is obtained as a function of the unknowns a_0, a_1, \dots, a_m:

J(a) = \int_0^1 F\!\left(t, \sum_{i=0}^{m} a_i Q_i(t), \sum_{i=0}^{m} a_i (D_Q Q)_i(t)\right) dt.

Since F is quadratic in the state and control, carrying out the integration reduces this to the quadratic programming problem: minimize

J = \frac{1}{2}\, a^{T} H\, a,

where H is the constant Hessian matrix of J(a), subject to the linear boundary constraints

G a - b = 0, \quad \text{where } G = \begin{bmatrix} Q^{T}(0) \\ Q^{T}(1) \end{bmatrix}, \quad b = \begin{bmatrix} x_0 \\ x_1 \end{bmatrix}.

Using the Lagrange multiplier technique, the optimal values of the unknown parameters are

a^{*} = H^{-1} G^{T} \left( G\, H^{-1} G^{T} \right)^{-1} b.
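The steps above can be sketched end-to-end in a few lines. For brevity the sketch below uses a plain monomial basis t^i in place of the NSW basis (an illustrative substitution, not the paper's basis); the quadratic-programming structure and the Lagrange-multiplier formula are exactly those of this section, applied to the model problem of Example 1 below (minimize J = ∫(u² + x²)dt with u = ẋ, x(0) = 0, x(1) = 0.5).

```python
import numpy as np

# State parameterization x(t) = sum_i a_i t^i for the model problem
#   min J = int_0^1 (u^2 + x^2) dt,  u = x',  x(0) = 0, x(1) = 0.5.
# J(a) = a^T (M + S) a with
#   M_ij = int_0^1 t^{i+j} dt         (from the x^2 term)
#   S_ij = i*j*int_0^1 t^{i+j-2} dt   (from the u^2 = (x')^2 term),
# so H = 2(M + S) and J = 0.5 a^T H a, subject to G a = b.

deg = 8                                   # illustrative basis size
idx = np.arange(deg + 1)
M = 1.0 / (idx[:, None] + idx[None, :] + 1)
S = np.zeros_like(M)
i, j = np.meshgrid(idx, idx, indexing="ij")
mask = (i >= 1) & (j >= 1)
S[mask] = (i * j)[mask] / (i + j - 1)[mask]
H = 2.0 * (M + S)

# Boundary constraints G a = b: basis values at t = 0 and t = 1.
G = np.vstack([np.where(idx == 0, 1.0, 0.0),
               np.ones(deg + 1)])
b = np.array([0.0, 0.5])

# Closed-form constrained minimizer a* = H^{-1} G^T (G H^{-1} G^T)^{-1} b.
Hinv = np.linalg.inv(H)
a_star = Hinv @ G.T @ np.linalg.inv(G @ Hinv @ G.T) @ b

J = 0.5 * a_star @ H @ a_star
print(J)   # close to the exact value 0.25*cosh(1)/sinh(1) = 0.3282588...
```

With the NSW vector Q(t) and the matrix D_Q in place of the monomials and their derivatives, the same assembly and the same closed-form solve apply unchanged.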

6. Test Examples

In this section, numerical simulation results for optimal control problems solved by the proposed new shifted wavelet method are presented. Different test cases for m, defined on the interval [0, 1], are considered, each with a single state function and a single control function; note that the proposed method can also solve problems with multiple controls. The test problems are continuous optimal control problems whose analytic solutions are known, which allows validation of the proposed algorithm by comparing its results with the exact solutions.
Example 1.
In the following example, we have one state function x(t) and one control function u(t). This problem is concerned with the minimization of

\min J = \int_0^1 \left( u^{2}(t) + x^{2}(t) \right) dt,

subject to u(t) = \dot x(t), with boundary conditions x(0) = 0, x(1) = 0.5. The exact solution is x(t) = \frac{\sinh t}{2\sinh 1}, u(t) = \frac{\cosh t}{2\sinh 1}, and the exact value of the performance index is J = 0.328258821379.
Table 1 shows the values of the coefficients, and Table 2 and Table 3 give the values of the state and the control, respectively.
Table 4 gives the absolute errors of the NSW method in comparison with the following methods:
  • Chebyshev method proposed in [24].
  • The method existing in [25].
Example 2.
Consider the second test problem
\min J = \int_0^1 \left( u^{2}(t) + 3x^{2}(t) \right) dt, \quad u(t) = \dot x(t) - x(t), \quad x(0) = 1, \ x(1) = 0.51314538.   (21)

The exact solution of (21) is

u(t) = \frac{3\left( e^{2t} - e^{4}e^{-2t} \right)}{e^{4}+3}, \qquad x(t) = \frac{3e^{2t} + e^{4}e^{-2t}}{e^{4}+3},

and J = \frac{3(e^{4}-1)}{e^{4}+3} = 2.791659975.
Table 5 shows the values of the coefficients, and Table 6 and Table 7 give the values of the state and the control, respectively, whereas Table 8 lists the absolute errors of the NSW method in comparison with the method presented in [28]. From these tables it can be seen that the state and control variables are accurately approximated by the proposed method. Table 8 also illustrates the fast convergence rate of the proposed method, since the errors decay rapidly as the number of NSW functions increases.
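The tabulated exact values of this example are consistent with the closed forms x(t) = (3e^{2t} + e⁴e^{-2t})/(e⁴+3) and u(t) = 3(e^{2t} - e⁴e^{-2t})/(e⁴+3) (the tables list |u|), for which J = 3(e⁴-1)/(e⁴+3). A quick Simpson-rule check of the performance index under this reading:

```python
import math

# Check of Example 2: with the closed-form pair
#   x(t) = (3e^{2t} + e^4 e^{-2t}) / (e^4 + 3)
#   u(t) = 3(e^{2t} - e^4 e^{-2t}) / (e^4 + 3)
# (a reconstruction consistent with the tabulated values), the index
# J = int_0^1 (u^2 + 3x^2) dt equals 3(e^4 - 1)/(e^4 + 3).

c = math.exp(4) + 3
x = lambda t: (3 * math.exp(2 * t) + math.exp(4) * math.exp(-2 * t)) / c
u = lambda t: 3 * (math.exp(2 * t) - math.exp(4) * math.exp(-2 * t)) / c

def simpson(g, a, b, n=2000):                     # composite Simpson rule, n even
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
    return s * h / 3

J = simpson(lambda t: u(t) ** 2 + 3 * x(t) ** 2, 0.0, 1.0)
closed = 3 * (math.exp(4) - 1) / (math.exp(4) + 3)
print(J)                                          # approx 2.791659975
```

Both the numerical integral and the closed form reproduce the stated value J = 2.791659975, and x(1) reproduces the boundary value 0.51314538.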
Example 3.
Consider the third test problem:

J = \frac{1}{2}\int_0^1 u^{2}(t)\, dt + x^{2}(1),

u(t) = \dot x(t) - x(t),

x(0) = 1, \quad x(1) = 0.3678794412, \quad J_{exact} = 1 \ \text{(the exact solution is } x(t) = e^{-t},\ u(t) = -2e^{-t}\text{)}.
Table 10 and Table 11 compare the exact and approximate solutions of x(t) and u(t), respectively, for m = 3, 4, 5. The absolute errors of J for various values of M are listed in Table 12. From these results it is worth noting that the approximate solutions obtained by the proposed method are in excellent agreement with the exact solutions.
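The tabulated exact values of this example correspond to x(t) = e^{-t} and u(t) = -2e^{-t} (the tables list |u|). Reading the performance index as J = ½∫u²dt + x²(1), these give J_exact = 1 exactly, which can be checked directly:

```python
import math

# Check of Example 3: x(t) = exp(-t), u(t) = -2 exp(-t) satisfy u = x' - x,
# and, reading the index as J = (1/2) int_0^1 u^2 dt + x(1)^2,
# J = (1 - e^{-2}) + e^{-2} = 1 exactly.

half_int_u2 = 0.5 * 4 * (1 - math.exp(-2)) / 2   # (1/2) int_0^1 4 e^{-2t} dt
J = half_int_u2 + math.exp(-1) ** 2              # terminal term x(1)^2 = e^{-2}
print(J)   # 1.0 (up to rounding)
```

The value u(0.2) = 2e^{-0.2} = 1.63746150615 also matches the entry of Table 11.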
Example 4.
Consider the fourth test problem:

\min J = \int_0^1 \left( 0.5u^{2}(t) + x^{2}(t) \right) dt,

u(t) = \dot x(t) - 0.5x(t),

x(0) = 1, \quad x(1) = 0.5018480732.   (22)

The exact solution of (22) is u(t) = 2e^{3t} - e^{3}a, x(t) = 2e^{3t} + e^{3}a, where a = \frac{2e^{3t}}{2(1+e^{3})}, and J = 0.8641644978.
Table 13 compares the absolute errors of the presented wavelet method with those of the existing method in [25] for different values of m. The absolute errors of the presented method compare favorably with the other method, decreasing as m increases, and the approximate performance index is in very good agreement with the exact one. Specifically, Table 13 reports the absolute errors of J obtained by the proposed method at m = 3, 4, 5 in comparison with the method in [25] at N = 2, 3, 4; the results show that the proposed method is more accurate. In addition, the fast convergence rate of the proposed method is illustrated by the absolute errors, which decay rapidly as the number of NSW functions increases.

7. Conclusions

The proposed new shifted wavelet functions method, in combination with the differentiation operational matrix, has been successfully applied to the approximate solution of optimal control problems. A mathematical technique has been established for solving the quadratic optimal control problem based on the NSW functions with a direct technique, and the proposed algorithm converges well: both the convergence analysis and the error analysis of the presented new shifted wavelets have been worked out, and the expansion is shown to converge uniformly. The obtained NSW-based approximate solutions have been compared with existing methods as well as with the analytical solutions, and the error analysis demonstrates the consistency and competence of the suggested method.

References

  1. Suman, S.; Kumar, A.; Singh, G.K. A new closed form method for design of variable bandwidth linear phase FIR filter using Bernstein multiwavelets. International Journal of Electronics 2015, 102, 635–650. [Google Scholar] [CrossRef]
  2. Lutfy, O.F. A wavelet functional link neural network controller trained by a modified sine cosine algorithm using the feedback error learning strategy. Journal of Engineering Science and Technology 2020, 15, 709–727. [Google Scholar]
  3. Keshavarz, E.; Ordokhani, Y.; Razzaghi, M. The Taylor wavelets method for solving the initial and boundary value problems of Bratu-type equations. Appl. Numer. Math. 2018, 128, 205–216. [Google Scholar] [CrossRef]
  4. Akram, K.; Asadollah, M.; Sohrab, E. Solving Optimal Control Problem Using Hermite Wavelet, Numerical Algebra. Control and Optimization 2019, 9. [Google Scholar]
  5. Diveev, A.; Sofronova, E.; Konstantinov, S. Approaches to Numerical Solution of Optimal Control Problem Using Evolutionary Computations. Applied Sciences 2021, 11, 7096. [Google Scholar] [CrossRef]
  6. Zhaohua, G.; Chongyang, L.; Kok, L.; Song, W.; Yonghong, W. Numerical solution of free final time fractional optimal control problems. Applied Mathematics and Computation 2021, 405. [Google Scholar]
  7. Hans, G.; Christian, K.; Andreas, M.; Andreas, P. Numerical solution of optimal control problems with explicit and implicit switches. Optimization Methods and Software 2018, 33, 450–474. [Google Scholar]
  8. Wang, Z.; Li, Y. An Indirect Method for Inequality Constrained Optimal Control Problems. IFAC Papers On Line 2017, 50, 4070–4075. [Google Scholar] [CrossRef]
  9. Yang, C.; Fabien, B. An adaptive mesh refinement method for indirectly solving optimal control problems. Numer Algor 2022. [Google Scholar] [CrossRef]
  10. Mohammad, A. A modified pseudospectral method for indirect solving a class of switching optimal control problems. 2022, 234, 1531–1542.
  11. Mohammad, H. A new direct method based on the Chebyshev cardinal functions for variable-order fractional optimal control problems. Journal of the Franklin Institute 2018, 355, 12–4970. [Google Scholar]
  12. Mohamed, A.; Mohand, B.; Nacima, M.; Philippe, M. Direct method to solve linear-quadratic optimal control problems. AIMS 2021, 645–663. [Google Scholar]
  13. Askhat, D.; Elena, S.; Sergey, K. Approaches to Numerical Solution of Optimal Control Problem Using Evolutionary Computations. Appl. Sci. 2021, 11, 7096. [Google Scholar]
  14. Mirvakili, M.; Allahviranloo, T.; Soltanian, F. A numerical method for approximating the solution of fuzzy fractional optimal control problems in Caputo sense using Legendre functions. Journal of Intelligent and Fuzzy System 2022, 43, 3827–3858. [Google Scholar] [CrossRef]
  15. Viorel, M.; Iulian, A. Optimal Control Systems Using Evolutionary Algorithm-Control Input Range Estimation. Automation 2022, 3, 95–115. [Google Scholar]
  16. Khamis, N.; Selamat, H.; Ismail, F.S.; Lutfy, O.F. Optimal exit configuration of factory layout for a safer emergency evacuation using crowd simulation model and multi-objective artificial bee colony optimization. International Journal of Integrated Engineering 2019, 11, 183–191. [Google Scholar]
  17. Behzad, K.; Delavarkhalafi, A.; Karbassi, M.; Boubaker, K. A Numerical Approach for Solving Optimal Control Problems Using the Boubaker Polynomials Expansion Scheme. Journal of Interpolation and Approximation in Scientific Computing 2014, 1–18. [Google Scholar]
  18. Ayat, O.; Mirkamal, M. Solving optimal control problems by using Hermite polynomials. Computational Methods for Differential Equations 2020, 8, 2–314. [Google Scholar]
  19. Abed, M.S.; Lutfy, O.F.; Al-Doori, Q.F. Online Path Planning of Mobile Robots Based on African Vultures Optimization Algorithm in Unknown Environments. Journal Europeen des Systemes Automatises 2022, 55, 405–412. [Google Scholar] [CrossRef]
  20. Sayevand, K.; Zarvan, Z.; Nikan, O. On Approximate Solution of Optimal Control Problems by Parabolic Equations. Int. J. Appl. Comput. Math 2022, 8, 248. [Google Scholar] [CrossRef]
  21. Rabiei, K.; Ordokhani, Y. A new operational matrix based on Boubaker wavelet for solving optimal control problems of arbitrary order. Transactions of the Institute of Measurement and Control 2020, 42, 858–1870. [Google Scholar] [CrossRef]
  22. Afari, H.; Nemati, S.; Ganji, R.M. Operational matrices based on the shifted fifth-kind Chebyshev polynomials for solving nonlinear variable order integro-differential equations. Adv Differ Equ. 2021, 1. [Google Scholar]
  23. Vellappandi, M.; Govindaraj, V. Operator theoretic approach to optimal control problems characterized by the Caputo fractional differential equations. Results in Control and Optimization 2023, 10. [Google Scholar] [CrossRef]
  24. Kafash, B.; Delavarkhalafi, A. Restarted State Parameterization Method for Optimal Control Problems. J. Math. Computer Sci. 2015, 14, 151–161. [Google Scholar] [CrossRef]
  25. Kafash, B.; Delavarkhalafi, A.; Karbassi, S.M. Application of Chebyshev polynomials to derive efficient algorithms for the solution of optimal control problems. Scientia Iranica 2012, 19, 795–805. [Google Scholar] [CrossRef]
Table 1. The unknown coefficients of Example 1.
a i   m = 3   m = 4   m = 5  
a 0 0.2305457113 0.23672730347 0.2561993228
a 1 0.1233668163 0.13001387707 0.1605089078
a 2 0.0062942253 0.01782237852 0.04711219512
  a 3 0.00386099897 0.02681474048
a 4 0.00578906657
Table 2. Approximate and exact values of x ( t ) for Example 1.
  t   m = 3     m = 4     m = 5     x e x a c t  
0.2 0.08181818 0.085725158 0.08566326 0.08566022
0.4 0.17272727 0.174680761 0.17476776 0.17475830
0.6 0.27272727 0.270773784 0.27086078 0.27087003
0.8 0.38181818 0.377911205 0.37784931 0.37785270
1 0.5000000 0.499999999 0.49999999 0.50000000
Table 3. Approximate and exact values of u ( t ) for Example 1.
  t   m = 3     m = 4     m = 5     u e x a c t  
0.2 0.431818181 0.4334460887 0.4341131879 0.4339966471
0.4 0.477272727 0.4593657505 0.4598878493 0.45995203956
0.6 0.522727272 0.5048202959 0.5042981972 0.50436692229
0.8 0.568181811 0.5698097251 0.5691426259 0.56902382057
1 0.613636363 0.6543340380 0.6562195299 0.65651764274
Table 4. A comparison of the absolute errors of the presented method and the methods in [24] and [25] for Example 1.
Table 5. The unknown coefficients of Example 2.
a i   m = 3   m = 4   m = 5  
a 0 0.3555075068713 0.346802467917 0.3533419782
a 1 0.0118653476259 0.003658158193 0.01146671901
a 2 0.0598656329410 0.047554848792 0.05385050095
a 3 0.00410359471 6.7914878 e 04
a 4 0.00119568588
Table 6. Approximate and exact values of x ( t ) for Example 2.
t m = 3 m = 4 m = 5 x e x a c t
0.2 0.72969817542 0.71547355348 0.7130374834 0.7131081208
0.4 0.54586180114 0.53874949017 0.5417269091 0.5418429752
0.6 0.44849087714 0.45560318811 0.4585806070 0.4584348199
0.8 0.43758540342 0.45181002536 0.4493739552 0.4493594610
1 0.5131453800 0.5131453800 0.51314537999 0.51314537665
Table 7. Approximate and exact values of u ( t ) for Example 2.
t m = 3 m = 4 m = 5 u e x a c t
0.2 1.8650436725 1.8567459764 1.8302875486 1.82851756831
0.4 1.2488800468 1.1765715519 1.160488978237 1.16185967374
0.6 0.7191818714 0.66109799850 0.68313541022 0.68359121816
0.8 0.27594914628 0.2961006940 0.31768698166 0.31616348542
1   0.08081812857 0.0673550166 0.00313312217 0.0031331221
Table 8. Estimated values of J for m = 3 , 4 , 5 for Example 2.
m J in Present Method Absolute Errors J in Method of [28] Absolute Errors
3 2.7971823353 5.5 e 03 2.7977436 6.0e-03
4 2.79237308337 7.1 e 04 2.79608386 4.4e-03
5 2.79166202469 2.0 e 06 2.79608386 4.4e-03
Table 9. The unknown coefficients of Example 3.
a i m = 3 m = 4 m = 5
a 0 0.355507506871 0.3468024679179 0.353341978261
a 1 0.0118653476259 0.0036581581933 0.011466719011
a 2 0.059865632941 0.047554848792 0.053850500952
a 3 0.0041035947162 6.79148785 e 04
a 4 6.79148785 e 04
Table 10. Approximate and exact values of x ( t ) for Example 3.
t m = 3 m = 4 m = 5 x e x a c t
0.2 0.8238348176 0.8188954570 0.8187261332 0.8187307530
0.4 0.6725401705 0.6700704903 0.6703085019 0.6703200460
0.6 0.5461160588 0.54858573916 0.5488237508 0.5488116360
0.8 0.4445624824 0.4495018430 0.4493325192 0.4493289641
1 0.3678794412 0.3678794412 0.3678794412 0.3678794412
Table 11. Approximate and exact values of u ( t ) for Example 3.
t m = 3 m = 4 m = 5 u e x a c t
0.2 1.6424843911 1.6396030974 1.6376087511 1.63746150615
0.4 1.36683706763 1.34172865101 1.34053832634 1.340640092071
0.6 1.11606027940 1.09589122343 1.09755757148 1.097623272188
0.8 0.89015402646 0.8971514540 0.89880715280 0.898657928234
1 0.68911830881 0.74056998220 0.73541173113 0.735758882342
Table 13. Estimated values of J for m = 3 , 4 , 5 of Example 4.
m J in Present Method Absolute Errors N J in Method of [25]
3 0.8647288093 5.6 e 04 2 0.8645390446
4 0.8642180723 5.3 e 05 3 0.8644550472
5 0.8641645689 7.1 e 08 4 0.8643546452
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.
Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.