Preprint
Article

Robust Tracking as Constrained Optimization by Uncertain Dynamic Plant: Mirror Descent and Average Sub-Gradient Methods -- Version of Integral Sliding Mode Control


A peer-reviewed article of this preprint also exists.

This version is not peer-reviewed

Submitted: 19 July 2023
Posted: 25 July 2023

Abstract
A class of controlled plants, whose dynamics are governed by a vector system of ordinary differential equations with a partially known right-hand side, is considered. The state variables and their velocities are assumed to be measurable. The aim is to design a controller which minimizes, under certain constraints, a loss function whose argument is the current state of the controlled plant. The designed control action is admitted to be a function of the current sub-gradient only, which is supposed to be measurable on-line. The control design is based on the ASG (Average Sub-Gradient) version of the Integral Sliding Mode (ISM) concept, aimed at minimizing on average a given convex (not necessarily strongly convex) cost function of the current state under a set of given constraints. An optimization-type algorithm is developed and analyzed using ideas of the SDM technique. The main results consist in proving the reachability of the "desired regime" (a nonstationary analogue of the sliding surface) from the very beginning of the process and in obtaining an explicit upper bound for the averaged loss function decrement; that is, the time-averaged functional convergence is proven and the rate of such convergence is estimated.
Keywords: 
Subject: Computer Science and Mathematics  -   Applied Mathematics

1. Introduction

1.1. Brief survey

Constrained optimization is the process of optimizing an objective function with respect to some variables in the presence of constraints on those variables. The objective function is either a cost function or energy function, which is to be minimized, or a reward function or utility function, which is to be maximized. Constraints can be either hard constraints, which set conditions for the variables that are required to be satisfied, or soft constraints, which have some variable values that are penalized in the objective function if, and based on the extent that, the conditions on the variables are not satisfied (see, for example [3,4,13,17,18] and [22]).
Most control strategies in the literature, treated as Static Optimization Methods (SOM) in continuous time, may be represented in the following form:
$$F(x_t) \underset{t\to\infty}{\longrightarrow} F^* := \min_{x \in X_{adm} \subseteq \mathbb{R}^n} F(x), \tag{1}$$
where $F:\mathbb{R}^n \to \mathbb{R}$ is a convex (not necessarily strongly convex) mapping, $X_{adm}$ is the admissible convex set of arguments, and the process $x_t$ is generated by the simple ordinary differential equation (ODE)
$$\dot{x}_t = u_t, \quad x_0 \text{ fixed}, \quad t \ge 0, \tag{2}$$
with any initial condition $x_0 \in \mathbb{R}^n$. The relation (2) is referred to hereafter as a static plant. All known SOM procedures differ only in the design of the control action $u_t$ (or an optimization algorithm) as a function of the current state $x_t$ (a Markov strategy) or of a more profound available history, namely, $u_t = u\left(t, \{x_\tau\}_{\tau \in [0,t]}\right)$.
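For intuition, a static-plant SOM of the form (1)-(2) can be sketched numerically: choose $u_t = -\nabla F(x_t)$ and Euler-integrate $\dot{x}_t = u_t$. The quadratic loss, its minimizer, and the step size below are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

# Minimal sketch of a continuous-time SOM: u_t = -grad F(x_t) applied to the
# static plant x_dot = u_t, integrated by forward Euler.
def grad_F(x):
    # illustrative loss F(x) = ||x - b||^2 with minimizer b = (1, -2)
    b = np.array([1.0, -2.0])
    return 2.0 * (x - b)

x, dt = np.zeros(2), 1e-2
for _ in range(2000):
    x -= dt * grad_F(x)   # x_dot = u_t = -grad F(x_t)

print(x)  # approaches the minimizer (1, -2)
```

Any convex $F$ with a computable (sub-)gradient can be substituted; for merely convex losses the decrement guarantees are weaker, which is exactly the setting the paper addresses.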
Here we will consider a more general, and hence more complex, situation in which the process $x_t$ is generated by the dynamic plant
$$\ddot{x}_t = f(t, x_t, \dot{x}_t) + u_t, \quad x_0, \dot{x}_0 \text{ fixed}, \quad t \ge 0, \quad x_t, u_t \in \mathbb{R}^n, \tag{3}$$
where the vector function $f$ in the right-hand side is supposed to be unknown but belonging to some class $C$ of nonlinearities. This problem is close to the so-called Extremum Seeking Problem [1,12,14,23], where the nonlinear dynamics includes only first-order derivatives. So, in [24], several optimization schemes are considered, and it is shown that under appropriate conditions these schemes reach the extremum point from an arbitrarily large domain of initial conditions if the parameters of the controller are appropriately adjusted. This approach was applied in [15] to two-level economic optimization of a plant. Many advanced process control systems use some form of model predictive control [5,26]. The paper [20] describes a new algorithm for extremum seeking using stochastic on-line gradient estimation. The paper [7] deals with the problem of constrained optimization in dynamic linear time-invariant (LTI) systems characterized by a control vector of dimension less than that of the system state vector; the finite-time convergence to a vicinity of order $\varepsilon$ of the optimal equilibrium point is proved there. In [8], a variable structure convex programming based control for a class of linear uncertain systems with accessible state is presented.
In this paper we consider a class of controlled plants with dynamics governed by a vector system of second-order ordinary differential equations (ODE) with an unknown right-hand side. All mechanical Lagrange models belong to this class. The state variables and their velocities are assumed to be measurable. We design a controller minimizing a loss function subject to a set of constraints on the state of the controlled plant. The designed control action is admitted to be a function of the current sub-gradients of the loss function and constraints only, which are also supposed to be measurable on-line. The control is designed based on the SDM (Subgradient Descent Method) version [19,21] of the Integral Sliding Mode (ISM) concept [9,25], aimed at minimizing "on average" a given convex (not necessarily strongly convex) cost function of the current state under a set of given constraints. An optimization-type algorithm is developed and analyzed using ideas of the SDM technique [3]. We prove the reachability of the "desired regime" (a nonstationary analogue of the sliding surface) [9] from the very beginning of the process and obtain an explicit upper bound for the cost function decrement; that is, the convergence is proven and the rate of convergence is estimated as $O(t^{-1})$. This paper generalizes the approach suggested in [11] for unconstrained dynamic optimization to the constrained optimization problem realized by an uncertain second-order dynamic plant.

1.2. Main contributions

  • The Robust Tracking problem is reformulated as Constrained Optimization realized by a dynamic plant with an unknown (but bounded) right-hand side.
  • The cost as well as the constraints are admitted to be convex, but not necessarily strictly or strongly convex.
  • The Mirror Descent Method (MDM) and the ASG-version of Sliding Mode Control are suggested and realized.
  • The convergence of the trajectories of the controlled uncertain plant to the corresponding admissible zone close to the minimal point is established.

2. Uncertain plant description and admitted dynamic zone

2.1. Dynamic model

The second order dynamic model (3) can be represented in the following extended format
$$\begin{pmatrix}\dot{x}_{1,t}\\ \dot{x}_{2,t}\end{pmatrix} = \begin{pmatrix} x_{2,t}\\ f(t, x_{1,t}, x_{2,t})\end{pmatrix} + \begin{pmatrix} 0_{n\times n}\\ I_{n\times n}\end{pmatrix} u_t, \quad x_{1,t_0} = \mathring{x}_1 \in \mathbb{R}^n, \; x_{2,t_0} = \mathring{x}_2 \in \mathbb{R}^n, \; u_t \in \mathbb{R}^n. \tag{4}$$
Here the extended state variables $x_{1,t} = x_t$, $x_{2,t} = \dot{x}_t$ are the current coordinates and their velocities at time $t \ge 0$. The function $f(t, x_{1,t}, x_{2,t})$ is piecewise continuous in all arguments and is allowed to be unknown but bounded as
$$\left\|f(t, x_1, x_2)\right\| \le k_x(x_1, x_2) := c_0 + c_1\|x_1\| + c_2\|x_2\| \tag{5}$$
with finite positive constants $c_0$, $c_1$, and $c_2$. Hereafter the symbol $\|\cdot\|$ denotes the Euclidean norm.

2.2. Reference trajectory, tracking error dynamics, and admissible zone

The aim of the controller (which will be formulated exactly below) is to realize the tracking of the state $x_t$ to the given reference trajectory $\{x_t^*\}_{t \ge 0}$. Define the tracking error $\delta_{1,t}$ as
$$\delta_{1,t} := x_{1,t} - x_{1,t}^*, \quad \delta_{2,t} = \dot{\delta}_{1,t} = x_{2,t} - x_{2,t}^*, \tag{6}$$
where x 1 , t * is the continuously differentiable trajectory to be tracked satisfying
$$\dot{x}_{1,t}^* = x_{2,t}^* = \varphi\left(t, x_{1,t}^*\right), \quad t \ge 0, \quad x_{1,0}^* \text{ known}. \tag{7}$$
In view of that, the error tracking dynamics can be represented as follows
$$\begin{pmatrix}\dot{\delta}_{1,t}\\ \dot{\delta}_{2,t}\end{pmatrix} = \begin{pmatrix}\delta_{2,t}\\ f_\delta(t, \delta_{1,t}, \delta_{2,t})\end{pmatrix} + \begin{pmatrix}0_{n\times n}\\ I_{n\times n}\end{pmatrix} u_t, \quad f_\delta(t, \delta_{1,t}, \delta_{2,t}) := f\left(t, \delta_{1,t} + x_{1,t}^*, \delta_{2,t} + x_{2,t}^*\right) - \dot{x}_{2,t}^*. \tag{8}$$
Let us require that the dynamics of δ 1 , t should be realized after time t 0 0 within a bounded admissible zone D a d m .
Let the loss function $F: \mathbb{R}^n \to \mathbb{R}$ be convex. For example, the following two functions belong to the considered class of convex loss functions to be optimized:
$$1)\; F(\delta_1) = \sum_{i=1}^n \left|\delta_{1,i}\right|, \qquad 2)\; F(\delta_1) = \sum_{i=1}^n \left[\delta_{1,i}\right]_\varepsilon^+, \quad [z]_\varepsilon^+ := \begin{cases} z - \varepsilon & \text{if } z \ge \varepsilon,\\ -z - \varepsilon & \text{if } z \le -\varepsilon,\\ 0 & \text{if } |z| < \varepsilon. \end{cases} \tag{9}$$
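Both example losses in (9) can be evaluated componentwise; a short sketch (our illustration, with $\varepsilon = 0.1$ as an arbitrary choice):

```python
import numpy as np

def F1(d):
    # 1) l1-type loss: convex, but neither strictly nor strongly convex
    return float(np.sum(np.abs(d)))

def deadzone(z, eps):
    # [z]_eps^+: z - eps above the band, -z - eps below it, 0 when |z| < eps
    return np.where(z >= eps, z - eps, np.where(z <= -eps, -z - eps, 0.0))

def F2(d, eps=0.1):
    # 2) sum of deadzone penalties: convex, flat (hence non-strict) near zero
    return float(np.sum(deadzone(np.asarray(d, dtype=float), eps)))

print(F1(np.array([1.0, -2.0])))   # 3.0
print(F2([0.05, -0.05]))           # 0.0  (both components inside the deadzone)
print(F2([0.25, -0.05]))           # 0.15
```

Note that both losses attain their minimum (zero) at the origin, consistent with Assumption A4 below.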

2.3. Basic assumptions

  • A1: The current states $x_t$, $\dot{x}_t$ of the plant (4) are supposed to be measurable (available) on-line for all $t \ge 0$.
  • A2: The function $f(t, x_t, \dot{x}_t)$, satisfying (5), is piecewise continuous in all arguments and is allowed to be unknown.
  • A3: The current states $x_t^*$, $\dot{x}_t^*$ of the reference trajectory are also supposed to be available on-line for any $t \ge 0$.
  • A4: A sub-gradient$^1$ $a(\delta_{1,t}) \in \partial F(\delta_{1,t})$ of the loss function $F(\delta_{1,t})$ is assumed to be available on-line at the current time $t \ge 0$, and the set of minimizers $\delta_1^*$ of $F(\cdot)$ on the set $D_{adm}$ includes the origin $\delta_1^* = 0$, that is,
$$0 \in \operatorname*{Arg\,min}_{\delta_1 \in D_{adm}} F(\delta_1).$$
  • A5: The admissible set $D_{adm}$ is a nonempty convex compact set, i.e., $D_{adm} \neq \emptyset$.

3. Desired dynamics

3.1. Mirror descent method in continuous time

Let us apply the mirror descent approach, using the Legendre–Fenchel transformation [16], as follows. For any $\zeta \in \mathbb{R}^n$ define
$$U^*(\zeta) = \max_{z \in D_{adm}} \left\{\zeta^\top z - U(z)\right\}, \quad U(z) = \frac{1}{2}\|z\|^2, \tag{10}$$
so that (see, for instance, [2,10])
$$\nabla U^*(\zeta) = \arg\max_{\delta_1 \in D_{adm}} \left\{\zeta^\top \delta_1 - U(\delta_1)\right\}. \tag{11}$$
Define the dynamics for the vector-function ζ t R n as
$$\begin{gathered}\dot{\zeta}_t = -a(\delta_{1,t}), \quad a(\delta_{1,t}) \in \partial F(\delta_{1,t}), \quad \zeta_{t_0} = 0,\\ (t+\theta)\,\dot{\delta}_{1,t} + \delta_{1,t} = \nabla U^*(\zeta_t - \eta), \quad t \ge t_0 \ge 0, \quad \eta \in \mathbb{R}^n. \end{gathered} \tag{12}$$
Remark 1.
The second differential equation in (12) can be integrated as follows:
$$(t+\theta)\,\delta_{1,t} - (t_0+\theta)\,\delta_{1,t_0} = \int_{t_0}^{t} \nabla U^*(\zeta_\tau - \eta)\, d\tau,$$
$$\delta_{1,t} = \lambda_t\, \delta_{1,t_0} + (1-\lambda_t)\, \frac{1}{t-t_0}\int_{t_0}^{t} \nabla U^*(\zeta_\tau - \eta)\, d\tau \in D_{adm}, \quad \lambda_t := \frac{t_0+\theta}{t+\theta}.$$
Therefore, δ 1 , t D a d m for all t t 0 because of convexity and due to (10)–(11).

3.2. Why the dynamics of $\delta_{1,t}$ may be considered as desired

The following theorem explains why the dynamics δ 1 , t may be considered as a desired one.
Theorem 1.
Under Assumptions A1–A5, for the trajectories $\delta_{1,t}$ generated by (12) and all $t \ge t_0 \ge 0$, the following property holds:
$$F(\delta_{1,t}) \le F\left(\delta_1^*(\eta)\right) + \frac{t_0+\theta}{t+\theta}\left[F(\delta_{1,t_0}) - F\left(\delta_1^*(\eta)\right)\right], \tag{13}$$
where
$$\delta_1^*(\eta) = \arg\min_{\delta_1 \in D_{adm}} \left\{\eta^\top \delta_1 + U(\delta_1)\right\}. \tag{14}$$
Proof. 
Defining $\mu_t := t+\theta$ and $\delta_1^* := \delta_1^*(\eta)$, we have from (12)
$$\frac{d}{dt}\left[U^*(\zeta_t - \eta) - (\zeta_t - \eta)^\top \delta_1^*\right] = \dot{\zeta}_t^\top\left[\nabla U^*(\zeta_t - \eta) - \delta_1^*\right] = -a(\delta_{1,t})^\top\left[\mu_t\,\dot{\delta}_{1,t} + \delta_{1,t} - \delta_1^*\right] = -a(\delta_{1,t})^\top\left(\delta_{1,t} - \delta_1^*\right) - \mu_t\, a(\delta_{1,t})^\top \dot{\delta}_{1,t}.$$
Due to the convexity property for F ( δ 1 ) , we have
$$a(\delta_{1,t})^\top\left(\delta_{1,t} - \delta_1^*\right) \ge F(\delta_{1,t}) - F(\delta_1^*),$$
and, in view of the relation
$$a(\delta_{1,t})^\top \dot{\delta}_{1,t} = \frac{d}{dt}F(\delta_{1,t}),$$
it follows
$$\frac{d}{dt}\left[U^*(\zeta_t - \eta) - (\zeta_t - \eta)^\top \delta_1^*\right] \le -\left[F(\delta_{1,t}) - F(\delta_1^*)\right] - \mu_t\, a(\delta_{1,t})^\top \dot{\delta}_{1,t},$$
or equivalently,
$$\frac{d}{dt}\left[U^*(\zeta_t - \eta) - (\zeta_t - \eta)^\top \delta_1^*\right] \le -\left[F(\delta_{1,t}) - F(\delta_1^*)\right] - \mu_t \frac{d}{dt}F(\delta_{1,t}).$$
After integration we get
$$\begin{gathered}\int_{t_0}^{t}\left[F(\delta_{1,\tau}) - F(\delta_1^*)\right]d\tau \le -\left[U^*(\zeta_\tau - \eta) - (\zeta_\tau - \eta)^\top \delta_1^*\right]\Big|_{\tau=t_0}^{\tau=t} - \int_{t_0}^{t} \mu_\tau \frac{d}{d\tau}\left[F(\delta_{1,\tau}) - F(\delta_1^*)\right]d\tau\\ = -U^*(\zeta_t - \eta) + (\zeta_t - \eta)^\top \delta_1^* + U^*(-\eta) + \eta^\top \delta_1^* - \mu_\tau\left[F(\delta_{1,\tau}) - F(\delta_1^*)\right]\Big|_{\tau=t_0}^{\tau=t} + \int_{t_0}^{t}\left[F(\delta_{1,\tau}) - F(\delta_1^*)\right]d\tau,\end{gathered}$$
which implies
$$\mu_t\left[F(\delta_{1,t}) - F(\delta_1^*)\right] \le -U^*(\zeta_t - \eta) + (\zeta_t - \eta)^\top \delta_1^* + U^*(-\eta) + \eta^\top \delta_1^* + \mu_{t_0}\left[F(\delta_{1,t_0}) - F(\delta_1^*)\right].$$
Using (10), we get
$$U^*(\zeta_t - \eta) \ge (\zeta_t - \eta)^\top \delta_1^* - U(\delta_1^*), \quad \text{i.e.,} \quad -U^*(\zeta_t - \eta) + (\zeta_t - \eta)^\top \delta_1^* \le U(\delta_1^*) = \frac{1}{2}\|\delta_1^*\|^2,$$
and
$$\mu_t\left[F(\delta_{1,t}) - F(\delta_1^*)\right] \le \frac{1}{2}\|\delta_1^*\|^2 + U^*(-\eta) + \eta^\top \delta_1^* + \mu_{t_0}\left[F(\delta_{1,t_0}) - F(\delta_1^*)\right].$$
Since by (10) and (11)
$$\nabla U^*(-\eta) = \arg\max_{\delta_1 \in D_{adm}} \left\{-\eta^\top \delta_1 - U(\delta_1)\right\}, \quad U(\delta_1) = \frac{1}{2}\|\delta_1\|^2,$$
and defining
$$\delta_1^*(\eta) := \arg\max_{\delta_1 \in D_{adm}} \left\{-\eta^\top \delta_1 - U(\delta_1)\right\} = \nabla U^*(-\eta),$$
we get
$$U^*(-\eta) + \eta^\top \delta_1^* = -U(\delta_1^*) = -\frac{1}{2}\|\delta_1^*\|^2.$$
Therefore, we get
$$\mu_t\left[F(\delta_{1,t}) - F(\delta_1^*)\right] \le \frac{1}{2}\|\delta_1^*\|^2 - \frac{1}{2}\|\delta_1^*\|^2 + \mu_{t_0}\left[F(\delta_{1,t_0}) - F(\delta_1^*)\right] = \mu_{t_0}\left[F(\delta_{1,t_0}) - F(\delta_1^*)\right],$$
and finally
$$F(\delta_{1,t}) \le F\left(\delta_1^*(\eta)\right) + \frac{\mu_{t_0}}{\mu_t}\left[F(\delta_{1,t_0}) - F\left(\delta_1^*(\eta)\right)\right]. \qquad \square$$
Example 1.
Assume that
$$D_{adm} := \left\{\delta_1 \in \mathbb{R}^n : \|\delta_1\| \le r\right\}.$$
To calculate $\delta_1^*(\eta)$ according to (14), it is sufficient to note that the solution of the problem
$$2\eta^\top \delta_1 + \|\delta_1\|^2 = \|\delta_1 + \eta\|^2 - \|\eta\|^2 \to \min_{\|\delta_1\| \le r}$$
is
$$\delta_1^*(\eta) = \begin{cases} -\eta & \text{if } \|\eta\| \le r,\\ -\dfrac{\eta}{\|\eta\|}\, r & \text{if } \|\eta\| > r. \end{cases} \tag{15}$$
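Formula (15) identifies $\delta_1^*(\eta)$ with the Euclidean projection of $-\eta$ onto the $r$-ball. A quick numerical cross-check of this closed form against random feasible candidates (our construction; $r$ and $\eta$ are arbitrary test values):

```python
import numpy as np

rng = np.random.default_rng(0)
r = 1.0

def delta1_star(eta):
    # closed form from Example 1: projection of -eta onto the r-ball
    n = np.linalg.norm(eta)
    return -eta if n <= r else -(r / n) * eta

def obj(eta, d):
    # objective of (14): eta^T d + 0.5*||d||^2
    return eta @ d + 0.5 * (d @ d)

eta = np.array([2.0, -1.0])
d_star = delta1_star(eta)
# random points of the ball: none should beat the closed-form minimizer
u = rng.normal(size=(20000, 2))
cand = (r * rng.random((20000, 1)) ** 0.5) * u / np.linalg.norm(u, axis=1, keepdims=True)
best = min(obj(eta, d) for d in cand)
print(obj(eta, d_star) <= best + 1e-9)   # True
```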

4. Robust controller design

4.1. Auxiliary sliding variable and its dynamics

Introduce a new auxiliary variable (sliding variable)
$$s_t = (t+\theta)\,\delta_{2,t} + \delta_{1,t} - \nabla U^*(\zeta_t - \eta), \quad t \ge t_0 \ge 0. \tag{16}$$
Notice that the function s t is measurable on-line, and that the situation when
$$s_t = 0 \quad \text{for all } t \ge t_0$$
corresponds exactly to the desired regime (12), starting from the moment $t_0$. Then for $V(s_t) = \frac{1}{2}\|s_t\|^2$, in view of (8) and the first equation in (12), we have
$$\begin{aligned}\frac{d}{dt}V(s_t) &= s_t^\top \dot{s}_t = s_t^\top\left[2\dot{\delta}_{1,t} + (t+\theta)\,\dot{\delta}_{2,t} - \frac{d}{dt}\nabla U^*(\zeta_t - \eta)\right]\\ &= s_t^\top\left[2\delta_{2,t} + (t+\theta)\left(f\left(t, \delta_{1,t}+x_{1,t}^*, \delta_{2,t}+x_{2,t}^*\right) - \dot{x}_{2,t}^* + u_t\right) + \nabla^2 U^*(\zeta_t - \eta)\, a(\delta_{1,t})\right]\\ &= (t+\theta)\, s_t^\top f\left(t, \delta_{1,t}+x_{1,t}^*, \delta_{2,t}+x_{2,t}^*\right)\\ &\quad + (t+\theta)\, s_t^\top\left[\frac{2}{t+\theta}\delta_{2,t} - \dot{x}_{2,t}^* + u_t + \frac{1}{t+\theta}\nabla^2 U^*(\zeta_t - \eta)\, a(\delta_{1,t}) + k_t\,\mathrm{Sign}(s_t)\right] - (t+\theta)\, k_t\, s_t^\top \mathrm{Sign}(s_t)\\ &\le (t+\theta)\,\|s_t\|\left(c_0 + c_1\left\|\delta_{1,t}+x_{1,t}^*\right\| + c_2\left\|\delta_{2,t}+x_{2,t}^*\right\|\right) - (t+\theta)\, k_t\, s_t^\top \mathrm{Sign}(s_t)\\ &= (t+\theta)\, k_{x,t}\,\|s_t\| - (t+\theta)\, k_t\, s_t^\top \mathrm{Sign}(s_t), \qquad k_{x,t} := k_x\left(\delta_{1,t}+x_{1,t}^*,\; \delta_{2,t}+x_{2,t}^*\right), \end{aligned} \tag{17}$$
where the bracketed term vanishes under the robust control designed below in (18).
Here
$$\mathrm{Sign}(s_t) = \left(\mathrm{sign}(s_{1,t}), \ldots, \mathrm{sign}(s_{n,t})\right)^\top, \quad \mathrm{sign}(s_{i,t}) \begin{cases} = +1 & \text{if } s_{i,t} > 0,\\ = -1 & \text{if } s_{i,t} < 0,\\ \in [-1, +1] & \text{if } s_{i,t} = 0. \end{cases}$$

4.2. Robust control structure

Since
$$s_t^\top \mathrm{Sign}(s_t) = \sum_{i=1}^n |s_{i,t}| \ge \|s_t\|$$
and taking
$$k_t = k_{x,t} + \rho, \quad \rho > 0,$$
we get
$$\frac{d}{dt}V(s_t) \le (t+\theta)\,\|s_t\|\left(k_{x,t} - k_t\right) = -(t+\theta)\,\rho\,\sqrt{2V(s_t)},$$
which implies
$$\frac{dV(s_t)}{\sqrt{V(s_t)}} \le -(t+\theta)\sqrt{2}\,\rho\, dt, \quad \sqrt{2V(s_t)} - \sqrt{2V(s_{t_0})} \le -\frac{\rho}{2}\left[(t+\theta)^2 - (t_0+\theta)^2\right],$$
$$0 \le \sqrt{V(s_t)} \le \sqrt{V(s_{t_0})} - \frac{\sqrt{2}}{4}\rho\left[(t+\theta)^2 - (t_0+\theta)^2\right].$$
This means that $s_t = 0$ for all $t \ge t_{reach}$, where
$$t_{reach} := \left\{t : \sqrt{V(s_{t_0})} - \frac{\sqrt{2}}{4}\rho\left[(t+\theta)^2 - (t_0+\theta)^2\right] = 0\right\} = \sqrt{\frac{2}{\rho}\|s_{t_0}\| + (t_0+\theta)^2} - \theta.$$
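The reaching-time formula can be checked on a scalar worst case: under the gain choice $k_t = k_{x,t} + \rho$, the norm $\|s_t\|$ decays at least at rate $\rho(t+\theta)$, hitting zero no later than $t_{reach}$. The numbers $\rho = 1$, $\theta = 1$, $s_0 = 4$, $t_0 = 0$ are arbitrary illustrative choices of ours:

```python
import math

# Scalar illustration of the reaching bound: simulate the worst-case decay
# d|s|/dt = -rho*(t+theta) and compare the crossing time with t_reach.
rho, theta, s0 = 1.0, 1.0, 4.0
t_reach = math.sqrt(2.0 * abs(s0) / rho + theta**2) - theta   # here t_0 = 0

dt, t, s = 1e-4, 0.0, s0
while s > 0.0:
    s -= rho * (t + theta) * dt   # worst-case decay rate of |s_t|
    t += dt

print(f"predicted t_reach = {t_reach:.3f}, simulated crossing = {t:.3f}")
```

With these numbers $t_{reach} = \sqrt{8 + 1} - 1 = 2$, and the Euler simulation crosses zero at essentially the same time.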
Finally, the robust control is
$$u_t = -\frac{2}{t+\theta}\delta_{2,t} + \dot{x}_{2,t}^* - \frac{1}{t+\theta}\nabla^2 U^*(\zeta_t - \eta)\, a(\delta_{1,t}) - k_t\,\mathrm{Sign}(s_t) = u_{comp,t} + u_{disc,t}, \tag{18}$$
where
$$u_{comp,t} := -\frac{2}{t+\theta}\delta_{2,t} + \dot{x}_{2,t}^* - \frac{1}{t+\theta}\nabla^2 U^*(\zeta_t - \eta)\, a(\delta_{1,t}), \quad u_{disc,t} := -k_t\,\mathrm{Sign}(s_t). \tag{19}$$
Remark 2.
If we wish to get $t_{reach} = t_0 = 0$, we need to fulfill the identity
$$s_0 = \theta\,\delta_{2,0} + \delta_{1,0} - \nabla U^*(-\eta) = \theta\,\delta_{2,0} + \delta_{1,0} - \delta_1^*(\eta) = 0.$$
Since $\delta_1^*(\eta) \in D_{adm}$, we may conclude that the parameters $\theta > 0$, $\eta$ and the initial conditions $\delta_{1,0}$, $\delta_{2,0}$ should be consistent in the sense that
$$\theta\,\delta_{2,0} + \delta_{1,0} \in D_{adm}. \tag{20}$$
Remark 3.
For example, for the Euclidean $r$-ball in $\mathbb{R}^n$ being the admissible set $D_{adm}$, from (10)–(11) one has
$$\nabla U^*(\zeta) = \arg\max_{\delta_1 \in D_{adm}} \left\{\zeta^\top \delta_1 - U(\delta_1)\right\} = \begin{cases}\zeta & \text{if } \|\zeta\| \le r,\\ \dfrac{r}{\|\zeta\|}\zeta & \text{if } \|\zeta\| > r,\end{cases} \tag{21}$$
$$\delta_1^*(\eta) = \arg\min_{\delta_1 \in D_{adm}} \left\{\eta^\top \delta_1 + U(\delta_1)\right\} = \arg\min_{\|\delta_1\| \le r} \left\{\eta^\top \delta_1 + \frac{1}{2}\|\delta_1\|^2\right\} = -\eta \quad \text{if } \|\eta\| \le r. \tag{22}$$
From (19) it follows
$$\theta\,\delta_{2,0} + \delta_{1,0} = -\eta, \quad \|\eta\| \le r, \tag{23}$$
and
$$\nabla^2 U^*(\zeta) = \begin{cases} I_{n\times n} & \text{if } \|\zeta\| < r,\\ \dfrac{r}{\|\zeta\|}\left[I_{n\times n} - \dfrac{\zeta\zeta^\top}{\|\zeta\|^2}\right] & \text{if } \|\zeta\| > r. \end{cases} \tag{24}$$
Notice that the function $\nabla U^*$ in (11) is nondifferentiable at the points of the $r$-sphere (the boundary of the ball) and is continuously differentiable at all other points of $\mathbb{R}^n$. The formulas (21), (24) are presented as their continuous extensions to the closed ball including the $r$-sphere.
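The closed forms (21) and (24) for the $r$-ball can be verified numerically: outside the ball, the Jacobian of the projection $\zeta \mapsto r\zeta/\|\zeta\|$ should match (24). A finite-difference cross-check (our construction, with arbitrary $r$ and test point):

```python
import numpy as np

r = 1.0

def grad_U_star(z):
    # (21): Euclidean projection of z onto the r-ball
    n = np.linalg.norm(z)
    return z if n <= r else z * (r / n)

def hess_U_star(z):
    # (24): identity inside the ball, tangential projector scaled by r/||z|| outside
    n = np.linalg.norm(z)
    if n <= r:
        return np.eye(len(z))
    return (r / n) * (np.eye(len(z)) - np.outer(z, z) / n**2)

z = np.array([1.5, -0.7, 0.4])           # a point strictly outside the ball
H = hess_U_star(z)
eps = 1e-6
H_fd = np.column_stack([
    (grad_U_star(z + eps * e) - grad_U_star(z - eps * e)) / (2 * eps)
    for e in np.eye(3)])
print(np.max(np.abs(H - H_fd)))          # small finite-difference residual
```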

4.3. Main result

We are ready to formulate the main result.
Theorem 2.
Under Assumptions A1–A5, the robust control (18)–(19) with parameter $\eta$ satisfying (20) provides the property
$$F(\delta_{1,t}) \le F\left(\delta_1^*(\eta)\right) + \frac{\theta}{t+\theta}\left[F(\delta_{1,0}) - F\left(\delta_1^*(\eta)\right)\right] \tag{25}$$
for all t 0 and any regularizing parameter θ > 0 .
Proof. 
In view of the relation (20) between the parameter $\eta$ and the initial conditions $\delta_{1,0}$, $\dot{\delta}_{1,0}$, the auxiliary variable satisfies $s_t = 0$ for all $t \ge 0$, that is, from the very beginning of the control process. Applying formula (13) with $t_0 = 0$, we obtain (25). □
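As a numerical illustration of the bound (25), one can Euler-integrate the desired dynamics (12) directly. The setup below is our own illustrative choice, not the paper's example: $D_{adm}$ is the unit ball, the smooth convex loss is $F(\delta_1) = \frac{1}{2}\|\delta_1\|^2$ (so $a(\delta_1) = \delta_1$), and $\eta = 0$, hence $\delta_1^*(\eta) = 0$ and $F(\delta_1^*(\eta)) = 0$.

```python
import numpy as np

# Euler simulation of the desired regime (12); with t0 = 0 the theorem gives
# F(delta_{1,t}) <= (theta/(t+theta)) * F(delta_{1,0}).
theta, r = 1.0, 1.0
dt, T = 1e-3, 10.0

def grad_U_star(z):
    # (11) for the r-ball: Euclidean projection of z onto the ball
    n = np.linalg.norm(z)
    return z if n <= r else (r / n) * z

d = np.array([0.8, -0.3])        # delta_{1,0}, inside D_adm
zeta = np.zeros(2)
F0 = 0.5 * float(d @ d)
t = 0.0
while t < T:
    a = d.copy()                 # gradient of F(d) = 0.5*||d||^2
    d = d + dt * (grad_U_star(zeta) - d) / (t + theta)   # eta = 0
    zeta = zeta - dt * a         # zeta_dot = -a(delta_{1,t})
    t += dt

F_T = 0.5 * float(d @ d)
bound = (theta / (T + theta)) * F0
print(F_T, bound)                # F_T stays below the bound, up to O(dt) error
```

Consistently with Remark 1, each Euler step is a convex combination of points of $D_{adm}$, so the simulated $\delta_{1,t}$ never leaves the ball.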

5. Discussion

Equations (15), (20) hold for $\theta > 0$ and $\eta \in \mathbb{R}^n$ in the following cases:
  • Zero initial conditions $\delta_{1,0} = 0$, $\delta_{2,0} = 0$. Then $\eta = 0$ works for an arbitrary $\theta > 0$ (see, as an example, the 1st loss function in (9)).
  • Non-zero initial conditions $\delta_{1,0}$, $\delta_{2,0}$ that are collinear, oppositely directed vectors. Then suitable $\theta > 0$ and $\eta = 0$ exist (see, as an example, the 1st loss function in (9)).
  • Equation (23) holds for a non-zero vector $\eta$ with sufficiently small $\|\eta\| \le \epsilon$ and some $\theta > 0$ (see, as an example, the 2nd loss function in (9)).

6. Conclusion

- The constrained optimization problem is addressed in this study for a second-order differential controlled plant with an unknown (but bounded) right-hand side of the model.
- The desired dynamics in the tracking-error variables is designed based on the Mirror Descent Method.
- The continuous-time convergence to the set of minimizing points is established, and the associated rate of convergence is analytically evaluated.
- The robust controller, containing both the continuous (compensating) part $u_{comp}$ and the discontinuous part $u_{disc}$, is proposed as the ASG-version of the Integral Sliding Mode approach.
- The suggested controller, under special relations between its parameters and the initial conditions, is proved to provide the desired regime from the very beginning of the control process.
- This method may have several applications in the development of robust control for mechanical systems, including soft robotics and moving dynamic plants.

Author Contributions

Conceptualization, A.V. and A.P.; methodology, A.V. and A.P.; formal analysis, A.V. and A.P.; writing—original draft preparation, A.V. and A.P.; writing—review and editing, A.V.; supervision, A.P.; project administration, A.P.; funding acquisition, A.V. All authors have read and agreed to the published version of the manuscript.

Funding

A.V. is entitled to a 100 percent discount for publication in this special issue.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ASG Average Sub-Gradient
SDM Subgradient Descent Method
ISM Integral Sliding Mode
SOM Static Optimization Methods
ODE Ordinary Differential Equation

References

  1. Ariyur, K. B. and Krstic, M., Real-Time Optimization by Extremum-Seeking Control, John Wiley & Sons, 2003.
  2. Ben-Tal, A. and Nemirovski, A., The Conjugate Barrier Mirror Descent Method for Non-Smooth Convex Optimization, Preprint of the Faculty of Industr. Eng. Manag., Technion – Israel Inst. Technol., Haifa, 1999.
  3. Bertsekas, D. P., Constrained Optimization and Lagrange Multiplier Methods, New York: Academic Press. ISBN 0-12-093480-9, 1982.
  4. Dechter, R., Constraint Processing, Morgan Kaufmann. ISBN 1-55860-890-7, 2003.
  5. Dehaan, D. and Guay, M., "Extremum-seeking control of state-constrained nonlinear systems", Automatica, 2005, 41(9), 1567-1574.
  6. Feijer, D. and Paganini, F., "Stability of primal–dual gradient dynamics and applications to network optimization", Automatica, 2010, 46(12), 1974–1981.
  7. Ferrara, A. and Utkin, V.I., "Sliding Mode Optimization in Dynamic LTI Systems",Journal of Optimization Theory and Applications, 2002, 115(3), 727–740.
  8. Ferrara, A., A variable structure convex programming based control approach for a class of uncertain linear systems, Systems & Control Letters, 2005, 54(6), 529-538.
  9. Fridman, L., Poznyak, A. and Bejarano, F. J., Robust Output LQ Optimal Control via Integral Sliding Modes. Birkhäuser, Springer Science and Business Media, New York, 2014.
  10. Juditsky, A. B., Nazin, A. V., Tsybakov, A. B., Vayatis, N. Recursive aggregation of estimators by the mirror descent algorithm with averaging. // Probl. Inf. Transm., 2005, 41(4), 368–384; translation from Probl. Peredachi Inf., 2005, No. 4, 78–96.
  11. Poznyak, A.S., Nazin A.V., and Alazki H. Integral Sliding Mode Convex Optimization in Uncertain Lagrangian Systems Driven by PMDC Motors: Averaged Subgradient Approach // IEEE Transactions on Automatic Control. 2021. Vol. 66, No. 9. P. 4267-4273 (1-8).
  12. Krstic, M. and Wang, H. H., "Stability of extremum seeking feedback for general nonlinear dynamic systems", Automatica, 2000, 36(4), 595-601.
  13. Prosser, M., Constrained Optimization by Substitution. Basic Mathematics for Economists, New York: Routledge. pp. 338–346. ISBN 0-415-08424-5, 1993.
  14. Rastrigin, L.A. Systems of extremal control // Nauka, Moscow, 1974 (in Russian).
  15. Rawlings, J. B., and Amrit, R., Optimizing process economic performance using model predictive control, in Nonlinear Model Predictive Control, Springer, Berlin, Heidelberg, 2009, 119-138.
  16. Rockafellar, R.T. Convex analysis, Princeton University Press, Princeton, 1970.
  17. Rossi, F., van Beek, P. and Walsh, T. (eds.), Chapter 1 - Introduction, Foundations of Artificial Intelligence, Handbook of Constraint Programming, Elsevier, 2, pp. 3–12, doi:10.1016/s1574-6526(06)80005-2. [CrossRef]
  18. Leader, J. J., Numerical Analysis and Scientific Computation, Addison Wesley. ISBN 0-201-73499-0. 2004.
  19. Nazin, A.V., "Algorithms of Inertial Mirror Descent in Convex Problems of Stochastic Optimization", Automation and Remote Control, January 2018, 79(1), 78–88.
  20. Solis, C.U., Clempner, J.B. and Poznyak, A.S., Extremum seeking by a dynamic plant using mixed integral sliding mode controller with synchronous detection gradient estimation, International journal of Robust and Nonlinear Control, 2018, 29(3), 702-714.
  21. Simpson-Porco, J. W., Input/output analysis of primal-dual gradient algorithms. In Communication, Control, and Computing (Allerton), 2016, 54th Annual Allerton Conference, IEEE, 2016, 219-224.
  22. Sun, Wenyu and Yuan, Ya-Xiang, Optimization Theory and Methods: Nonlinear Programming, Springer, ISBN 978-1441937650, 2010.
  23. Tan, Y., Moase, W. H., Manzie, C., Nešić, D., & Mareels, I. M. Y. (2010, July). Extremum seeking from 1922 to 2010. In Control Conference (CCC), IEEE, 29th Chinese,2010, 14-26.
  24. Tan, Y., Nešić, D. and Mareels, I., "On non-local stability properties of extremum seeking control", Automatica,2006, 42(6), 889-903.
  25. Utkin, V., Sliding Modes in Control Optimization, Springer Verlag, Berlin, 1992.
  26. Chunlei, Z. and Ordóñez, R., "Robust and adaptive design of numerical optimization-based extremum seeking control." Automatica 45.3 (2009): 634-646.
1. Recall that a vector $a(x) \in \mathbb{R}^n$, satisfying the inequality $F(x+y) \ge F(x) + a(x)^\top y$ for all $y \in \mathbb{R}^n$, is called a sub-gradient of the function $F(x)$ at the point $x \in \mathbb{R}^n$; one writes $a(x) \in \partial F(x)$, where $\partial F(x)$ is the set of all sub-gradients of $F$ at the point $x$. If $F(x)$ is differentiable at a point $x$, then $a(x) = \nabla F(x)$. At the minimal point $x^*$ we have $0 \in \partial F(x^*)$.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.