Preprint
Article

RBF Neural Networks-Based Near-Optimal Tracking Control of Partially Unknown Discrete-Time Nonlinear Systems

A peer-reviewed article of this preprint also exists.

This version is not peer-reviewed

Submitted: 29 February 2024

Posted: 01 March 2024

Abstract
This paper proposes an optimal tracking control scheme based on radial basis function neural networks (RBF-NNs) and adaptive dynamic programming (ADP) for a class of partially unknown discrete-time nonlinear systems. To acquire the unknown system dynamics, two RBF-NNs are used: one constructs the identifier, and the other directly approximates the steady-state control input, for which a novel adaptive law is proposed to update the neural network weights. The optimal feedback control and the cost function, which are introduced to regulate the tracking error, are approximated by feedforward neural networks: the critic network and the actor network are trained online to obtain the solution of the associated Hamilton–Jacobi–Bellman (HJB) equation built under the ADP framework. Simulations verify the effectiveness of the neural-network-based optimal tracking control technique.
Keywords: 
Subject: Computer Science and Mathematics - Mathematics

1. Introduction

Nonlinear system control is an important topic in the control field, especially for nonlinear systems with unknown uncertainties, which are difficult to handle with traditional control methods. Radial basis function neural networks were proposed in 1988 [1]. Shortly afterwards, in 1990, Narendra and Parthasarathy first proposed an artificial-neural-network adaptive control method for nonlinear dynamical systems [2]. Since then, multilayer neural networks (MNNs) and radial basis function (RBF) neural networks have been successfully applied in pattern recognition and control systems [3]. Compared with multilayer feedforward networks (MFNs), RBF neural networks have attracted much attention due to their good generalization ability, simple network structure, and avoidance of unnecessary and lengthy computations. Studies on RBF-NNs have shown that these networks can approximate any nonlinear function on a compact set with arbitrary accuracy [4,5], and many results have been published on neural network control for nonlinear systems [6,7].
On the other hand, optimal tracking control, as one of the effective optimal control methods for nonlinear systems, has received many practical engineering applications [8,9,10]. Exploring optimal tracking control for nonlinear systems therefore has significant theoretical importance and practical value. The difficulty of optimal control for nonlinear systems lies in solving the nonlinear Hamilton–Jacobi–Bellman (HJB) equation, which is usually intractable analytically. Although dynamic programming is an effective method for solving optimal control problems, it suffers from the "curse of dimensionality" when facing relatively complex systems [11,12].
Faced with the difficulty of solving nonlinear Hamilton–Jacobi–Bellman partial differential equations exactly, several methods have been proposed to approximate their solutions, including reinforcement learning (RL) [8,13,14,15,16,17,18,19] and back-propagation through time [20]. Among these classical RL methods, the ADP algorithm, which combines the advantages of adaptive control and optimal control, is considered one of the core methods for realizing optimal control strategies across diverse optimal control problems, and it has been successfully applied to both continuous-time systems [21,22,23] and discrete-time systems [24,25,26,27,28] to search online for solutions of the HJB equation. Numerous ADP and RL approaches have emerged, such as robust ADP [29,30], iterative/invariant ADP [31,32,33], spiking/Hamiltonian-driven ADP [34,35], integral RL [36,37], and off-policy RL [38,39,40]. Several works have attempted to solve the discrete-time nonlinear optimal regulation problem in a near-optimal sense using adaptive dynamic programming with neural networks (NNs) trained offline.
In the past decades, many relevant studies have been conducted on the optimal tracking control of discrete-time nonlinear systems, such as generalized policy iteration adaptive dynamic programming [41], actor-critic algorithms [42], heuristic dynamic programming (HDP) [43], the greedy HDP iteration algorithm [44], and Q-learning algorithms [45]. However, in the existing literature, optimal tracking control methods that apply RBF-NNs within the ADP algorithm are rarely used.
In this paper, an RBF-NN-based optimal tracking control method for partially unknown discrete-time nonlinear systems is proposed. Two RBF neural networks are used to approximate the unknown system dynamics and the steady-state control, respectively. After transforming the tracking problem into a regulation problem, two feedforward neural networks are used as the critic network and the actor network to obtain the error feedback control, so that the online learning process requires only current and past system data rather than the exact system dynamics.
The contributions of this article are as follows. (1) Unlike the classical NN approximation techniques [42,44,45,46], we propose a near-optimal tracking control scheme for a class of partially unknown discrete-time nonlinear systems based on RBF-NNs, and the stability of the system is proved by Lyapunov theory. (2) Compared with [41,44], we additionally use an RBF-NN to directly approximate the steady-state controller of the unknown system, which removes the requirement for a priori knowledge of the controlled system dynamics and the reference system dynamics. (3) For the inverse-dynamics NN that directly approximates the steady-state controller, we propose a novel adaptive law to update the RBF-NN weights, and convergence is obtained through the selection of the design constants.
The organization of this paper is as follows. The problem statement is given in Section II. The control technique for the system with partially unknown nonlinear dynamics is designed in Section III, which includes the RBF-NN identifier, the RBF-NN steady-state controller, the near-optimal feedback controller, and the stability analysis. Simulation results are provided in Section IV to validate the proposed control method. Section V draws some conclusions.

2. Problem Statement

In this paper, we consider the following discrete-time nonlinear system:
\[ x(k+1) = f[x(k)] + g[x(k)]\,u(k) \tag{1} \]
where x(k) ∈ R^n is the measurable system state and u(k) ∈ R^m is the control input. Assume that the nonlinear smooth function f[x(k)] ∈ R^n is an unknown drift function, g[x(k)] ∈ R^{n×m} is a known function, and ‖g[x(k)]‖_F ≤ g_1, where ‖·‖_F denotes the Frobenius norm. In addition, assume that there exists a matrix g[x(k)]^+ ∈ R^{m×n} such that g[x(k)]^+ g[x(k)] = I ∈ R^{m×m}, where I is the identity matrix. Let x(0) be the initial state.
The reference trajectory is generated by the following bounded command:
\[ x_d(k+1) = \varphi(x_d(k)) \tag{2} \]
where x_d(k) ∈ R^n and φ(x_d(k)) ∈ R^n; x_d(k) is the reference trajectory, which needs only to be stable in the sense of Lyapunov, not necessarily asymptotically stable.
Let u(k) denote an arbitrary control sequence from time k to infinity. The goal of this paper is to design a controller u(k) that not only ensures the state of system (1) tracks the reference trajectory, but also minimizes the cost function
\[ J(e(k), u(k)) = \sum_{k=0}^{\infty}\left[ e^{T}(k) Q e(k) + u^{T}(k) R u(k) \right] \tag{3} \]
where Q ∈ R^{n×n} and R ∈ R^{m×m} are symmetric positive definite and e(k) = x(k) − x_d(k) is the tracking error. As is common in the solution of tracking problems [47], the control input consists of two parts, a steady-state input u_d and a feedback input u_e. Next, we discuss how to obtain each part.
The steady-state part of the control input is used to ensure perfect tracking, which is achieved under the condition x(k) = x_d(k). For this condition to be fulfilled, the steady-state part u_d(k) of the control must exist so as to keep x(k) equal to x_d(k). Substituting x_d(k) and u_d(k) into system (1), the reference state satisfies
\[ x_d(k+1) = f[x_d(k)] + g[x_d(k)]\,u_d(k) \tag{4} \]
If the system dynamics (1) are known, u_d(k) is obtained as
\[ u_d(k) = g[x_d(k)]^{+}\left( x_d(k+1) - f[x_d(k)] \right) \tag{5} \]
where g[x_d(k)]^+ = (g[x_d(k)]^T g[x_d(k)])^{-1} g[x_d(k)]^T is the generalized inverse of g[x_d(k)], with g[x_d(k)]^+ g[x_d(k)] = I.
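As a minimal numerical sketch of (5), the snippet below computes the steady-state control with NumPy's pseudo-inverse. The functions f and g are the example dynamics used later in the simulation section and are included here only so the fragment is self-contained; in the proposed scheme f is unknown and is replaced by the RBF-NN approximations developed in Section III.

```python
import numpy as np

# Example dynamics from the simulation section (for illustration only;
# in the proposed scheme f is unknown and is approximated by an RBF-NN).
def f(x):
    return np.array([np.sin(0.5 * x[1]) * x[0] ** 2,
                     np.cos(1.4 * x[1]) * np.sin(0.9 * x[0])])

def g(x):
    return np.array([[x[0] ** 2 + 1.5, 0.1],
                     [0.0, 0.2 * ((x[0] + x[1]) ** 2 + 1.0)]])

def steady_state_control(x_d, x_d_next):
    """u_d(k) = g[x_d(k)]^+ (x_d(k+1) - f[x_d(k)]), as in (5)."""
    g_pinv = np.linalg.pinv(g(x_d))   # generalized inverse (g^T g)^{-1} g^T
    return g_pinv @ (x_d_next - f(x_d))

# One step of a sinusoidal reference similar to the one used in the simulation
k, ts = 100, 1e-3
x_d      = 0.25 * np.array([np.sin(ts * k), np.cos(ts * k)])
x_d_next = 0.25 * np.array([np.sin(ts * (k + 1)), np.cos(ts * (k + 1))])
print(steady_state_control(x_d, x_d_next))
```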
By using (1) and (4), the tracking error dynamics are given by
\[ e(k+1) = f[x(k)] + g[x(k)]\,u(k) - x_d(k+1) = f_e(k) + g_e(k)\,u_e(k) \tag{6} \]
where f_e(k) = g(e(k)+x_d(k)) g(x_d(k))^+ (φ(x_d(k)) − f(x_d(k))) + f(e(k)+x_d(k)) − φ(x_d(k)), u_e(k) = u(k) − u_d(k), and g_e(k) = g[x_d(k)+e(k)]. Here u_e(k) ∈ R^m is the feedback control input, which is designed to stabilize the tracking error dynamics by minimizing the cost function. For e(k) under the control sequence, the cost function is defined as
\[ J_e(e(k), u_e(k)) = \sum_{j=k}^{\infty}\left[ e^{T}(j) Q e(j) + u_e^{T}(j) R u_e(j) \right] = e^{T}(k) Q e(k) + u_e^{T}(k) R u_e(k) + J_e(e(k+1), u_e(k+1)) = r(k) + J_e(e(k+1), u_e(k+1)) \tag{7} \]
where r(k) = e^T(k) Q e(k) + u_e^T(k) R u_e(k), and J_e(e(k), u_e(k)) > 0 for e(k), u_e(k) ≠ 0. Q ∈ R^{n×n} and R ∈ R^{m×m} are symmetric positive definite, and x_d(k) and x_d(k+1) belong to the bounded reference trajectory to be tracked. The tracking error e(k) is thus the argument of the cost function of the optimal tracking control problem. The feedback control u_e(k) is found by minimizing (7) through the stationarity condition of the optimal control framework [8], which gives
\[ u_e^{*}(k) = -\frac{1}{2} R^{-1} g_e^{T}(k)\, \frac{\partial J(e(k+1))}{\partial e(k+1)} \tag{8} \]
Then the overall control input is obtained as
\[ u^{*}(k) = u_d(k) + u_e^{*}(k) \tag{9} \]
where u_d(k) is obtained from (5) and u_e^*(k) from (8).
Remark 1. 
In order to acquire the unknown dynamics information of system (1), we use an RBF neural network to reconstruct the system dynamics. We can then use (5) to obtain the steady-state control u_d(k).
The main results of this paper are based on the following definitions and assumptions.
Assumption A1. 
System (1) is controllable, and the system state x(k) = 0 is an equilibrium under the control u(k) = 0. The control input u(k) = u(x(k)) satisfies u(x(k)) = 0 for x(k) = 0, and the cost function is a positive definite function of x(k) and u(k).
Definition 1. 
A control law u_e is admissible with respect to (7) on the set Ω if u_e is continuous on a compact set Ω_u ⊆ R^m for e(k) ∈ Ω, u_e(0) = 0, and J(e(0), u_e(·)) is finite.
Lemma 1. 
For the tracking error system (6), assume that u_e(k) is an admissible control and that the internal dynamics f_e(k) are bounded such that
\[ \| f_e(k) \|^{2} \le \Gamma \lambda_{\min}(Q) \| e(k) \|^{2}/2 + \left( \Gamma \lambda_{\min}(R) - 2 g_1^{2} \right) \| u_e(k) \|^{2}/2 \tag{10} \]
where λ_min(R) is the minimum eigenvalue of R, λ_min(Q) is the minimum eigenvalue of Q, and Γ > 2 g_1^2 / λ_min(R) is a known positive constant. Then the tracking error system (6) is asymptotically stable.
Proof. 
Consider the following positive definite Lyapunov function,
\[ V(k) = e^{T}(k) e(k) + \Gamma J_e(k) \tag{11} \]
where J_e(k) = J_e(e(k), u_e(k)) is defined in (7). Taking the first difference of the Lyapunov function yields
\[ \Delta V(k) = e^{T}(k+1) e(k+1) - e^{T}(k) e(k) + \Gamma \left( J_e(k+1) - J_e(k) \right) \tag{12} \]
Using (6) and (7), we obtain
\[ \Delta V(k) = \left( f_e(k) + g_e(k) u_e(k) \right)^{T}\left( f_e(k) + g_e(k) u_e(k) \right) - e^{T}(k) e(k) - \Gamma\left( e^{T}(k) Q e(k) + u_e^{T}(k) R u_e(k) \right) \tag{13} \]
Applying the Cauchy–Schwarz inequality yields
\[ \Delta V(k) \le 2\| f_e(k) \|^{2} - \left( \Gamma \lambda_{\min}(R) - 2 g_1^{2} \right)\| u_e(k) \|^{2} - \Gamma \lambda_{\min}(Q) \| e(k) \|^{2} - \| e(k) \|^{2} \tag{14} \]
For the tracking error system (6) to be asymptotically stable, i.e., ΔV(k) < 0, we require
\[ 2\| f_e(k) \|^{2} \le \Gamma \lambda_{\min}(Q) \| e(k) \|^{2} + \left( \Gamma \lambda_{\min}(R) - 2 g_1^{2} \right)\| u_e(k) \|^{2} \tag{15} \]
Therefore, if the bound in (10) is satisfied, then ΔV(k) < 0 and the asymptotic stability of the tracking error system (6) is proved. □
Remark 2. 
Lemma 1 shows that, under the condition that the internal dynamics f_e(k) are bounded so as to satisfy (10), there exists for the nonlinear system (6) an admissible control u_e(k) that not only stabilizes the system (6) on Ω but also guarantees that J_e(k) is finite.

3. Optimal Tracking Controller Design with Partially Unknown Dynamic

In this section, firstly, we use an RBF-NN to approximate the unknown system dynamics f[x(k)] and another RBF-NN to approximate the steady-state controller u_d(k). Secondly, two feedforward neural networks are introduced to approximate the cost function and the optimal feedback control u_e(k). Finally, the system stability is proved by selecting an appropriate Lyapunov function.

3.1. RBF-NN Identifier Design

In this subsection, in order to capture the unknown dynamics of system (1), an RBF-NN-based identifier is proposed. Without loss of generality, the unknown dynamics is assumed to be a smooth function on a compact set. The identifier approximates the unknown dynamics of (1) by the RBF-NN
\[ \hat{f}(x(k)) = \hat{w}_f^{T}(k)\, h[x(k)] \tag{16} \]
where ŵ_f(k) is the estimated output weight matrix of the neural network and h[x(k)] is the vector of radial basis functions.
For a bounded approximation error Δf(x) with ‖Δf(x)‖ < ε_f, where ε_f is a positive constant, there exists an optimal weight matrix w_f^* such that
\[ f(x(k)) = \hat{f}(x, w_f^{*}) + \Delta f(x) \tag{17} \]
where w_f^* is the optimal weight of the identifier and f̂(x, w_f^*) = w_f^{*T}(k) h[x(k)]. The output weights are updated during training while the hidden-layer parameters remain unchanged, so the identification error of the neural network model is
\[ \tilde{f}(x(k)) = f[x(k)] - \hat{f}[x(k)] = \hat{f}(x, w_f^{*}) + \Delta f(x) - \hat{w}_f^{T}(k) h[x(k)] = \tilde{w}_f^{T}(k) h[x(k)] + \Delta f[x(k)] \tag{18} \]
where w̃_f(k) = w_f^*(k) − ŵ_f(k).
The weights are adjusted to minimize the error
\[ E(k+1) = \frac{1}{2}\,\tilde{f}^{T}(x(k))\,\tilde{f}(x(k)) \tag{19} \]
Using the gradient descent method, the weights are updated by
\[ \Delta w_{fj}(k+1) = -\eta \frac{\partial E}{\partial w_{fj}} = \eta\left( f(x(k)) - \hat{f}(x(k)) \right) h[x(k)] = \eta\, \tilde{f}(x(k))\, h[x(k)] \tag{20} \]
and
\[ w_{fj}(k) = w_{fj}(k-1) + \Delta w_{fj}(k) \tag{21} \]
where η > 0 is the learning rate of the identifier.
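The following is a minimal sketch of the identifier, assuming the target value f(x(k)) is reconstructed from measured data as x(k+1) − g[x(k)]u(k); the class name and array shapes are illustrative, not taken from the paper.

```python
import numpy as np

class RBFIdentifier:
    """Sketch of the RBF-NN identifier (16) with the gradient update (19)-(21).
    Centers and widths are fixed; only the output weights are trained."""

    def __init__(self, centers, b, n_out, eta=0.1, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        self.c = np.asarray(centers, dtype=float)              # (l_f, n_in) centers
        self.b = float(b)                                      # common RBF width
        self.w = rng.uniform(0.0, 1.0, (len(self.c), n_out))   # estimated weights
        self.eta = eta                                         # 0 < eta <= 1/l_f

    def hidden(self, x):
        # Gaussian radial basis functions h_j(x) = exp(-||x - c_j||^2 / (2 b^2))
        d = np.linalg.norm(self.c - x, axis=1)
        return np.exp(-d ** 2 / (2.0 * self.b ** 2))

    def predict(self, x):
        # \hat f(x(k)) = \hat w_f^T h[x(k)], as in (16)
        return self.w.T @ self.hidden(x)

    def update(self, x, f_target):
        # One gradient step on E = 0.5 * ||f - \hat f||^2, cf. (19)-(21)
        h = self.hidden(x)
        f_tilde = f_target - self.w.T @ h      # identification error
        self.w += self.eta * np.outer(h, f_tilde)
        return f_tilde
```

In a data-driven run, f_target at step k would be taken as x(k+1) − g[x(k)]u(k), so that only measured states and inputs are required.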
Assumption A2. 
The neural network approximation error is assumed to have an upper bound, namely
\[ \Delta f(x)^{T} \Delta f(x) \le \tilde{w}_f^{T}(k)\,\tilde{w}_f(k)\; h^{T}[x(k)]\, h[x(k)] \tag{22} \]

3.2. RBF-NN Steady-State Controller Design

We use an RBF-NN to approximate the steady-state control u_d(k) directly; that is, an inverse-dynamics NN is established for the approximation [48,49].
We design the steady-state control u_d(k) through the RBF-NN approximation
\[ u_d(k) = \hat{w}_d^{T}(k)\, h[x_d(k)] \tag{23} \]
where ŵ_d is the actual neural network weight matrix, h[x_d(k)] is the output of the hidden layer, and u_d(k) is the output of the RBF-NN.
Let the ideal steady-state control u_d^*(k) be
\[ u_d^{*}(k) = w_d^{*T} h[x_d(k)] + \varepsilon_u \tag{24} \]
where w_d^* is the optimal neural network weight matrix and ε_u is the error vector. Assuming that x_d(k+1) is the desired output of the system at step k+1 and neglecting external disturbances, the control input u_d^*(k) satisfies
\[ L[x_d(k), u_d^{*}(k)] - x_d(k+1) = 0 \tag{25} \]
where L[x_d(k), u_d^*(k)] = f[x_d(k)] + g[x_d(k)] u_d^*(k).
Thus, we can define the error e_m(k) of the approximated state as
\[ e_m(k+1) = L[x_d(k), u_d(k)] - x_d(k+1) \tag{26} \]
where L[x_d(k), u_d(k)] = f[x_d(k)] + g[x_d(k)] u_d(k).
Subtracting (24) from (23) yields
\[ u_d(k) - u_d^{*}(k) = \hat{w}_d^{T}(k) h[x_d(k)] - w_d^{*T}(k) h[x_d(k)] - \varepsilon_u = \tilde{w}_d^{T}(k) h[x_d(k)] - \varepsilon_u \tag{27} \]
where w̃_d(k) = ŵ_d(k) − w_d^*(k) is the weight approximation error.
The weights are updated by the following adaptive law:
\[ \hat{w}_d(k+1) = \hat{w}_d(k) - \gamma\left[ h[x_d(k)]\, e_m(k+1) + \sigma \hat{w}_d(k) \right] \tag{28} \]
where γ > 0 and σ > 0 are positive constants.
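A minimal sketch of the adaptive law (28) is given below. In practice the modelling error e_m(k+1) cannot be evaluated with the unknown f, so it is assumed here to be computed with the identifier output f̂ in its place; the function names and array shapes are illustrative.

```python
import numpy as np

def update_steady_state_weights(w_d, h_xd, e_m_next, gamma=0.01, sigma=0.001):
    """One application of the adaptive law (28):
    w_d(k+1) = w_d(k) - gamma * (h[x_d(k)] e_m(k+1)^T + sigma * w_d(k))."""
    return w_d - gamma * (np.outer(h_xd, e_m_next) + sigma * w_d)

def modelling_error(f_hat, g, x_d, u_d, x_d_next):
    """e_m(k+1) = f[x_d(k)] + g[x_d(k)] u_d(k) - x_d(k+1), cf. (26),
    with the unknown f replaced by the identifier estimate f_hat."""
    return f_hat(x_d) + g(x_d) @ u_d - x_d_next
```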
Assumption A3. 
Within the set Ω_ε, the ideal neural network weights w_d^* and the approximation error are bounded by
\[ \| w_d^{*} \| \le w_m, \qquad \| \varepsilon_u \| \le \varepsilon_l \tag{29} \]

3.3. Near Optimal Feedback Controller Design

In this subsection, we present an adaptive dynamic programming (ADP) algorithm based on the Bellman optimality principle. The objective is to find the feedback control policy that minimizes the approximated cost function.
First, let the initial cost function be V_0(e(k)) = 0, which is not necessarily the optimal value function; the initial control vector u_e^0(k) is then solved from
\[ u_e^{0}(k) = \arg\min_{u_e(k)}\left\{ e^{T}(k) Q e(k) + u_e^{T}(k) R u_e(k) + V_0(e(k+1)) \right\} \tag{30} \]
After that, we update the cost function,
\[ V_1(e(k)) = e^{T}(k) Q e(k) + \left( u_e^{0}(k) \right)^{T} R u_e^{0}(k) + V_0(e(k+1)) \tag{31} \]
Hence, for i = 1, 2, ..., the adaptive dynamic programming algorithm can be realized as a continued iteration between the control update
\[ u_e^{i}(k) = \arg\min_{u_e(k)}\left\{ e^{T}(k) Q e(k) + u_e^{T}(k) R u_e(k) + V_i(e(k+1)) \right\} = -\frac{1}{2} R^{-1} g_e^{T}(k)\, \frac{\partial V_i(e(k+1))}{\partial e(k+1)} \tag{32} \]
and the cost update
\[ V_{i+1}(e(k)) = \min_{u_e(k)}\left\{ e^{T}(k) Q e(k) + u_e^{T}(k) R u_e(k) + V_i(e(k+1)) \right\} = e^{T}(k) Q e(k) + \left( u_e^{i}(k) \right)^{T} R u_e^{i}(k) + V_i(e(k+1)) \tag{33} \]
where the index i denotes the iteration number of the control law and the cost function, while k denotes the time index along the system state trajectory. Moreover, it is worth noting that in the iterative ADP process the iteration index of the cost function and the control law increases from zero to infinity.
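To make the recursion (30)-(33) concrete, the sketch below runs it on a linear special case, where the value function takes the quadratic form V_i(e) = e^T P_i e and the minimization in (33) can be carried out in closed form. The matrices A and B are arbitrary illustrative values, not the error dynamics of the paper.

```python
import numpy as np

# Value iteration (30)-(33) on a linear error model e(k+1) = A e(k) + B u_e(k),
# for which V_i(e) = e^T P_i e and the minimizing control is linear in e.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.eye(1)

P = np.zeros((2, 2))                                   # V_0 = 0
for i in range(200):
    # u_e^i(k) = -K e(k), the minimizer in (32) for the quadratic V_i
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    # V_{i+1}(e) = e^T (Q + K^T R K + (A - B K)^T P_i (A - B K)) e, cf. (33)
    P = Q + K.T @ R @ K + (A - B @ K).T @ P @ (A - B @ K)

print("Converged value matrix P:\n", P)
print("Feedback gain K:\n", K)
```

For the nonlinear error dynamics (6) no such closed form exists, which is why the critic and actor networks introduced next carry out the same iteration approximately.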
To begin the development of the feedback control policy, we use neural networks to construct the critic network and the actor network.
The critic network is used to approximate the cost function V_i(e(k)). The output of the critic network is denoted as
\[ \hat{V}_i(e(k)) = w_{ci}^{T}\, z(\nu_{ci}^{T} e(k)) + \varepsilon_c(k) \tag{34} \]
where z(ν_ci^T e(k)) is the hidden-layer activation function, w_ci is the hidden-layer weight of the critic network, ν_ci is the input-layer weight of the critic network, and ε_c(k) is the approximation error.
We then define the prediction error of the critic network as
\[ e_{ci}(k) = \hat{V}_i(e(k)) - V_i(e(k)) \tag{35} \]
The objective function to be minimized for the critic network is
\[ E_{ci}(k) = \frac{1}{2}\, e_{ci}^{T}(k)\, e_{ci}(k) \tag{36} \]
The weights of the critic network are updated using the gradient descent method through
\[ w_{ci}(k+1) = w_{ci}(k) - \alpha_c\left[ \frac{\partial E_{ci}(k)}{\partial w_{ci}(k)} \right] \tag{37} \]
where α_c > 0 is the learning rate of the critic network and i is the iteration index at which the weight parameters are updated.
The input of the actor network is the tracking error e(k), and the output of the actor network is the optimal feedback control u_e(k). The output can be formulated as
\[ \hat{u}_{ei}(k) = w_{ai}^{T}\, z(\nu_{ai}^{T} e(k)) + \varepsilon_a(k) \tag{38} \]
where z(ν_ai^T e(k)) is the hidden-layer activation function, w_ai is the hidden-layer weight of the actor network, ν_ai is the input-layer weight of the actor network, and ε_a(k) is the approximation error.
Therefore, we define the prediction error of the actor network as
\[ e_{ai}(k) = \hat{u}_{ei}(k) - u_{ei}(k) \tag{39} \]
where û_ei(k) is the approximated optimal feedback control and u_ei(k) is the optimal feedback control at iteration i.
The objective function to be minimized for the actor network is
\[ E_{ai}(k) = \frac{1}{2}\, e_{ai}^{T}(k)\, e_{ai}(k) \tag{40} \]
The weights of the actor network are updated in the same way as those of the critic network, using the gradient descent method:
\[ w_{ai}(k+1) = w_{ai}(k) - \beta_a\left[ \frac{\partial E_{ai}(k)}{\partial w_{ai}(k)} \right] \tag{41} \]
where β_a > 0 is the learning rate of the actor network and i is the iteration index at which the weight parameters are updated.
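Below is a minimal sketch of one training step of each network under the updates (37) and (41). It assumes a tanh hidden layer, fixed input-layer weights, and externally supplied targets: V_target from the right-hand side of (33) and u_target from (32); all names, dimensions, and initializations are illustrative.

```python
import numpy as np

def hidden(e, v):
    # z(v^T e): tanh hidden layer shared by the critic (34) and the actor (38)
    return np.tanh(v.T @ e)

def critic_step(e, v_c, w_c, V_target, alpha_c=0.1):
    """Gradient step (37) on E_ci = 0.5 * e_ci^2; only w_c is updated."""
    z = hidden(e, v_c)
    e_ci = w_c.T @ z - V_target            # prediction error (35)
    return w_c - alpha_c * np.outer(z, e_ci)

def actor_step(e, v_a, w_a, u_target, beta_a=0.1):
    """Gradient step (41) on E_ai = 0.5 * ||e_ai||^2; only w_a is updated."""
    z = hidden(e, v_a)
    e_ai = w_a.T @ z - u_target            # prediction error (39)
    return w_a - beta_a * np.outer(z, e_ai)

# Tiny usage example (illustrative dimensions: n = m = 2, 15 hidden neurons)
rng = np.random.default_rng(0)
e = np.array([0.1, -0.05])
v_c, w_c = rng.normal(size=(2, 15)), rng.normal(size=(15, 1))
v_a, w_a = rng.normal(size=(2, 15)), rng.normal(size=(15, 2))
w_c = critic_step(e, v_c, w_c, V_target=np.array([0.02]))
w_a = actor_step(e, v_a, w_a, u_target=np.zeros(2))
```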

3.4. Stability Analysis

In this subsection, the stability proof of the system is obtained by Lyapunov stability theory.
Assumption A4. 
The radial basis function h(t) = exp(−‖x(t) − c(t)‖²/(2b²)) has a maximum value of h_max = 1, where c(t) is the center and b is the width of the radial basis function. Assuming the number of neurons is l ∈ {l_f, l_d} for any radial basis function vector h ∈ {h[x(k)], h[x_d(k)]}, then
\[ h_i \le 1, \qquad \| h \| \le \sqrt{l}, \qquad h^{T} h = \| h \|^{2} \le l \tag{42} \]
so the maximum value of ‖h‖² for a hidden layer with l neurons is l ∈ {l_f, l_d}. We then take the maximum value of ‖h[x(k)]‖² for the hidden layer of the identifier f̂(x(k)) to be l_f, and the maximum value of ‖h[x_d(k)]‖² for the hidden layer of the steady-state controller u_d(k) to be l_d.
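The bound (42) is easy to confirm numerically for the Gaussian basis; the short check below evaluates h on an arbitrary illustrative input and verifies h^T h ≤ l.

```python
import numpy as np

def gaussian_rbf(x, centers, b):
    """h_j(x) = exp(-||x - c_j||^2 / (2 b^2)); each component lies in (0, 1]."""
    d = np.linalg.norm(centers - x, axis=1)
    return np.exp(-d ** 2 / (2.0 * b ** 2))

centers = np.stack([np.linspace(-2.0, 2.0, 9)] * 2, axis=1)   # l = 9 centers in R^2
h = gaussian_rbf(np.array([0.3, -0.1]), centers, b=2.0)
print(h @ h, "<=", len(centers))   # h^T h <= l, the bound used in Assumption A4
```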
Lemma 2. 
The relationship between (25)-(26) and the weight approximation error (27) satisfies the following equation:
\[ \tilde{w}_d^{T}(k)\, h[x_d(k)] = \frac{e_m(k+1)}{L_u} + \varepsilon_u \tag{43} \]
where e_m(k) is the error of the approximated state x_d(k), L_u = ∂L/∂u|_{u=ξ}, ξ ∈ [u_d^*(k), u_d(k)], g_1 ≥ L_u > ϵ > 0, and g_1 and ϵ are positive constants.
Proof. 
Subtracting w_d^* from both sides of (28), we get
\[ \tilde{w}_d(k+1) = \tilde{w}_d(k) - \gamma\left[ h[x_d(k)]\, e_m(k+1) + \sigma \hat{w}_d(k) \right] \tag{44} \]
Combining (25) and (27) with the mean value theorem, we obtain
\[ L[x_d(k), u_d(k)] = L[x_d(k), u_d^{*}(k)] + \left( \tilde{w}_d^{T}(k) h[x_d(k)] - \varepsilon_u \right) L_u = x_d(k+1) + \left( \tilde{w}_d^{T}(k) h[x_d(k)] - \varepsilon_u \right) L_u \tag{45} \]
Further combining (45) with (26), we obtain
\[ e_m(k+1) = L[x_d(k), u_d(k)] - x_d(k+1) = \left[ \tilde{w}_d^{T}(k) h[x_d(k)] - \varepsilon_u \right] L_u \tag{46} \]
Rearranging, we obtain
\[ \tilde{w}_d^{T}(k)\, h[x_d(k)] = \frac{e_m(k+1)}{L_u} + \varepsilon_u \tag{47} \]
and the proof is completed. □
Lemma 3. 
For simplicity of analysis, ε_u and e_m(k+1) satisfy the following inequality, obtained by Young's inequality:
\[ 2 \varepsilon_u\, e_m(k+1) \le k_0 \varepsilon_l^{2} + \frac{1}{k_0}\, e_m^{2}(k+1) \tag{48} \]
where k_0 is a positive constant.
From Figure 1, it can be seen that, with e(k), x_d(k), and u_e^i(k), the estimated error e(k+1) can be obtained with the aid of the RBF-NN identifier and the steady-state controller. Using the steady-state controller, we obtain the steady-state control u_d(k) corresponding to the reference trajectory x_d(k). Using the ADP algorithm, we obtain the optimal feedback controller u_e^i(k). Then the actual control u(k) = u_e^i(k) + u_d(k) and the system state x(k+1) can be obtained. Furthermore, with x_d(k) and x(k) we get the estimated tracking error e(k), and further e(k+1). Finally, we can reconstruct the system dynamics to track the reference trajectory.
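The composition of one control step in Figure 1 can be sketched as follows, with the steady-state RBF-NN (23) and the trained actor (38) passed in as callables (for instance, the sketches given earlier); the function name and the placeholder controllers in the example call are illustrative only.

```python
import numpy as np

def control_step(x, x_d, steady_state, actor):
    """One pass of the Figure 1 loop: u(k) = u_d(k) + u_e^i(k), as in (9).
    `steady_state` stands for the RBF-NN (23) and `actor` for the actor network (38)."""
    e = x - x_d                  # tracking error e(k)
    u_d = steady_state(x_d)      # steady-state control from the RBF-NN
    u_e = actor(e)               # feedback control from the actor network
    return u_d + u_e

# Illustrative call with placeholder controllers
u = control_step(x=np.array([0.2, 0.1]), x_d=np.array([0.25, 0.0]),
                 steady_state=lambda xd: np.zeros(2), actor=lambda e: -0.5 * e)
```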
Theorem 1. 
For the optimal tracking problem (1)-(3), the RBF-NN identifier (16) is used to approximate f(x(k)), the steady-state controller u_d(k) is approximated by the RBF-NN (23), and the feedforward networks (34) and (38) are used to approximate the cost function J(e(k), u(k)) and the feedback controller u_e(k), respectively. Assume that the parameters satisfy the following inequalities:
\[ (a)\ 0 < \eta \le \frac{1}{l_f}, \quad (b)\ 0 < g_1 \le k_0, \quad (c)\ 0 < (1+\sigma) l_d \gamma \le \frac{1}{g_1} - \frac{1}{k_0}, \quad (d)\ 0 < (l_d + \sigma)\gamma \le 1, \quad (e)\ \alpha_c \le \frac{2}{\| z(\nu_{ci}^{T} e(k)) \|^{2}}, \quad (f)\ \beta_a \le \frac{2}{\| z(\nu_{ai}^{T} e(k)) \|^{2}} \tag{49} \]
where η is the learning rate of the RBF-NN identifier, σ and γ are the update parameters of the steady-state controller weights, α_c is the learning rate of the critic network, β_a is the learning rate of the actor network, and z(ν_ci^T e(k)) and z(ν_ai^T e(k)) are the hidden-layer functions of the critic network and the actor network, respectively. Then the approximation-error dynamics of the closed-loop system (6) are asymptotically stable when the parameter estimation errors are bounded.
Proof. 
Consider the following positive definite Lyapunov function candidate:
\[ J(k) = J_1(k) + J_2(k) + J_3(k) + J_4(k) = \frac{1}{\eta}\tilde{w}_f^{T}(k)\tilde{w}_f(k) + \frac{1}{g_1} e_m^{2}(k) + \frac{1}{\gamma}\tilde{w}_d^{T}(k)\tilde{w}_d(k) + w_{ci}^{T}(k) w_{ci}(k) + w_{ai}^{T}(k) w_{ai}(k) \tag{50} \]
where J_1(k) = (1/η) w̃_f^T(k) w̃_f(k), J_2(k) = (1/g_1) e_m²(k) + (1/γ) w̃_d^T(k) w̃_d(k), J_3(k) = w_ci^T(k) w_ci(k), and J_4(k) = w_ai^T(k) w_ai(k).
Firstly, taking the difference of the term J_1(k) = (1/η) w̃_f^T(k) w̃_f(k) yields
\[
\begin{aligned}
\Delta J_1(k) &= J_1(k+1) - J_1(k) = \frac{1}{\eta}\tilde{w}_f^{T}(k+1)\tilde{w}_f(k+1) - \frac{1}{\eta}\tilde{w}_f^{T}(k)\tilde{w}_f(k) \\
&= \frac{1}{\eta}\left[\tilde{w}_f(k) - \eta\, h[x(k)]\,\tilde{f}^{T}(x(k))\right]^{T}\left[\tilde{w}_f(k) - \eta\, h[x(k)]\,\tilde{f}^{T}(x(k))\right] - \frac{1}{\eta}\tilde{w}_f^{T}(k)\tilde{w}_f(k) \\
&= \eta\,\tilde{f}^{T}(x(k))\tilde{f}(x(k))\; h^{T}[x(k)] h[x(k)] - 2\,\tilde{f}^{T}(x(k))\,\tilde{w}_f^{T}(k) h[x(k)] \\
&= \eta\, h^{T}[x(k)] h[x(k)]\left[\left\|\tilde{w}_f^{T}(k) h[x(k)]\right\|^{2} + 2\,\Delta f^{T}[x]\,\tilde{w}_f^{T}(k) h[x(k)] + \Delta f^{T}[x]\Delta f[x]\right] \\
&\quad - 2\left\|\tilde{w}_f^{T}(k) h[x(k)]\right\|^{2} - 2\,\Delta f^{T}[x]\,\tilde{w}_f^{T}(k) h[x(k)]
\end{aligned}
\tag{51}
\]
According to Assumptions A2 and A4 and (42), (51) can be bounded as
\[
\begin{aligned}
\Delta J_1(k) &\le \eta l_f^{2}\|\tilde{w}_f(k)\|^{2} - 2 l_f\|\tilde{w}_f(k)\|^{2} + \eta l_f^{2}\|\tilde{w}_f(k)\|^{2} + 2\eta l_f\,\tilde{w}_f^{T}(k) h[x(k)]\Delta f[x] - 2\,\tilde{w}_f^{T}(k) h[x(k)]\Delta f[x] \\
&\le \|\tilde{w}_f(k)\|^{2}\left(2\eta l_f^{2} - 2 l_f\right) + \left(2 l_f\eta - 2\right)\tilde{w}_f^{T}(k) h[x(k)]\Delta f[x] \\
&\le \|\tilde{w}_f(k)\|^{2}\left(4 l_f^{2}\eta - 4 l_f\right)
\end{aligned}
\tag{52}
\]
Next, taking the difference of the term J_2(k) = (1/g_1) e_m²(k) + (1/γ) w̃_d^T(k) w̃_d(k) yields
\[
\begin{aligned}
\Delta J_2(k) &= J_2(k+1) - J_2(k) = \frac{1}{g_1}\left[e_m^{2}(k+1) - e_m^{2}(k)\right] - \frac{1}{\gamma}\tilde{w}_d^{T}(k)\tilde{w}_d(k) + \frac{1}{\gamma}\tilde{w}_d^{T}(k+1)\tilde{w}_d(k+1) \\
&= \frac{1}{\gamma}\left[\tilde{w}_d(k) - \gamma\left(h[x_d(k)] e_m(k+1) + \sigma\hat{w}_d(k)\right)\right]^{T}\left[\tilde{w}_d(k) - \gamma\left(h[x_d(k)] e_m(k+1) + \sigma\hat{w}_d(k)\right)\right] \\
&\quad - \frac{1}{\gamma}\tilde{w}_d^{T}(k)\tilde{w}_d(k) + \frac{1}{g_1}\left[e_m^{2}(k+1) - e_m^{2}(k)\right] \\
&= \frac{1}{g_1}\left[e_m^{2}(k+1) - e_m^{2}(k)\right] - 2\,\tilde{w}_d^{T}(k) h[x_d(k)] e_m(k+1) + \gamma\sigma^{2}\hat{w}_d^{T}(k)\hat{w}_d(k) - 2\sigma\,\tilde{w}_d^{T}(k)\hat{w}_d(k) \\
&\quad + \gamma\, h^{T}[x_d(k)] h[x_d(k)]\, e_m^{2}(k+1) + 2\gamma\sigma\,\hat{w}_d^{T}(k) h[x_d(k)] e_m(k+1)
\end{aligned}
\tag{53}
\]
where
\[
\begin{aligned}
2\sigma\,\tilde{w}_d^{T}(k)\hat{w}_d(k) &= \sigma\,\tilde{w}_d^{T}(k)\left[\tilde{w}_d(k) + w_d^{*}\right] + \sigma\left[\hat{w}_d(k) - w_d^{*}\right]^{T}\hat{w}_d(k) \\
&= \sigma\left[\|\tilde{w}_d(k)\|^{2} + \|\hat{w}_d(k)\|^{2} + \tilde{w}_d^{T}(k) w_d^{*} - w_d^{*T}\hat{w}_d(k)\right] = \sigma\left[\|\tilde{w}_d(k)\|^{2} + \|\hat{w}_d(k)\|^{2} - \|w_d^{*}\|^{2}\right], \\
\gamma\, h^{T}[x_d(k)] h[x_d(k)]\, e_m^{2}(k+1) &\le \gamma\, l_d\, e_m^{2}(k+1), \\
2\gamma\sigma\,\hat{w}_d^{T}(k) h[x_d(k)] e_m(k+1) &\le \gamma\sigma\, l_d\left[\|\hat{w}_d(k)\|^{2} + e_m^{2}(k+1)\right], \\
\gamma\sigma^{2}\hat{w}_d^{T}(k)\hat{w}_d(k) &= \gamma\sigma^{2}\|\hat{w}_d(k)\|^{2}
\end{aligned}
\tag{54}
\]
Substituting (54) into (53) yields
\[
\begin{aligned}
\Delta J_2(k) &\le \frac{1}{g_1}\left[e_m^{2}(k+1) - e_m^{2}(k)\right] - 2\left[\frac{e_m(k+1)}{L_u} + \varepsilon_u\right] e_m(k+1) - \sigma\left[\|\tilde{w}_d(k)\|^{2} + \|\hat{w}_d(k)\|^{2} - \|w_d^{*}\|^{2}\right] \\
&\quad + \gamma\sigma^{2}\|\hat{w}_d(k)\|^{2} + \gamma l_d\, e_m^{2}(k+1) + \gamma\sigma l_d\left[\|\hat{w}_d(k)\|^{2} + e_m^{2}(k+1)\right] \\
&= \left[\frac{1}{g_1} - \frac{2}{L_u} + \gamma(1+\sigma) l_d\right] e_m^{2}(k+1) - \frac{1}{g_1} e_m^{2}(k) - 2\varepsilon_u e_m(k+1) - \sigma\|\tilde{w}_d(k)\|^{2} + \sigma\|w_d^{*}\|^{2} \\
&\quad + \sigma\left(\gamma l_d + \gamma\sigma - 1\right)\|\hat{w}_d(k)\|^{2}
\end{aligned}
\tag{55}
\]
Considering (26) and g_1 ≥ L_u > ϵ > 0, we can deduce
\[ \frac{1}{g_1} - \frac{2}{L_u} \le \frac{1}{g_1} - \frac{2}{g_1} = -\frac{1}{g_1} < 0 \tag{56} \]
With Lemma 2 and Lemma 3, we can further deduce
\[
\begin{aligned}
\Delta J_2(k) &\le \left[-\frac{1}{g_1} + \gamma(1+\sigma) l_d + \frac{1}{k_0}\right] e_m^{2}(k+1) + \sigma\left(\gamma l_d + \gamma\sigma - 1\right)\|\hat{w}_d(k)\|^{2} - \frac{1}{g_1} e_m^{2}(k) - \sigma\|\tilde{w}_d(k)\|^{2} + \sigma w_m^{2} + k_0\varepsilon_l^{2} \\
&= -\left[\frac{1}{g_1} - (1+\sigma) l_d\gamma - \frac{1}{k_0}\right] e_m^{2}(k+1) + \sigma\left[(l_d + \sigma)\gamma - 1\right]\|\hat{w}_d(k)\|^{2} - \frac{1}{g_1}\left[e_m^{2}(k) - \beta\right] - \sigma\|\tilde{w}_d(k)\|^{2}
\end{aligned}
\tag{57}
\]
where β = g_1(σ w_m² + k_0 ε_l²) is a positive constant.
Next, we consider the Lyapunov terms
\[ J_3(k) + J_4(k) = w_{ci}^{T}(k) w_{ci}(k) + w_{ai}^{T}(k) w_{ai}(k) \tag{58} \]
Taking the difference of (58) yields
\[
\begin{aligned}
\Delta J_3(k) + \Delta J_4(k) &= \left\{ w_{ci}^{T}(k+1) w_{ci}(k+1) + w_{ai}^{T}(k+1) w_{ai}(k+1) \right\} - \left\{ w_{ci}^{T}(k) w_{ci}(k) + w_{ai}^{T}(k) w_{ai}(k) \right\} \\
&\le -\alpha_c\|e_{ci}(k)\|^{2}\left(2 - \alpha_c\|z(\nu_{ci}^{T} e(k))\|^{2}\right) - \beta_a\|e_{ai}(k)\|^{2}\left(2 - \beta_a\|z(\nu_{ai}^{T} e(k))\|^{2}\right)
\end{aligned}
\tag{59}
\]
Finally, ΔJ(k) is obtained from (52), (57) and (59):
\[
\begin{aligned}
\Delta J(k) &= \Delta J_1(k) + \Delta J_2(k) + \Delta J_3(k) + \Delta J_4(k) \\
&\le 4\left(l_f^{2}\eta - l_f\right)\|\tilde{w}_f(k)\|^{2} - \sigma\|\tilde{w}_d(k)\|^{2} - \left[\frac{1}{g_1} - (1+\sigma) l_d\gamma - \frac{1}{k_0}\right] e_m^{2}(k+1) + \sigma\left[(l_d+\sigma)\gamma - 1\right]\|\hat{w}_d(k)\|^{2} \\
&\quad - \frac{1}{g_1}\left[e_m^{2}(k) - \beta\right] - \alpha_c\|e_{ci}(k)\|^{2}\left(2 - \alpha_c\|z(\nu_{ci}^{T} e(k))\|^{2}\right) - \beta_a\|e_{ai}(k)\|^{2}\left(2 - \beta_a\|z(\nu_{ai}^{T} e(k))\|^{2}\right)
\end{aligned}
\tag{60}
\]
Based on the above analysis, when the parameters are selected to fulfill the following conditions, with e_m²(k) ≥ β,
\[ 0 < \eta \le \frac{1}{l_f}, \quad 0 < g_1 \le k_0, \quad 0 < (1+\sigma) l_d\gamma \le \frac{1}{g_1} - \frac{1}{k_0}, \quad 0 < (l_d+\sigma)\gamma \le 1, \quad \alpha_c \le \frac{2}{\|z(\nu_{ci}^{T} e(k))\|^{2}}, \quad \beta_a \le \frac{2}{\|z(\nu_{ai}^{T} e(k))\|^{2}} \tag{61} \]
we obtain ΔJ(k) ≤ 0. This completes the proof. □

4. Simulation

In this section, in order to demonstrate the effectiveness of the proposed tracking control method, a discrete-time nonlinear system is introduced. The example is taken from [47]. We assume that the nonlinear smooth function f ∈ R^n is an unknown drift function and g ∈ R^{n×m} is a known function. The corresponding f[x(k)] and g(k) are given as
\[ f[x(k)] = \begin{bmatrix} \sin(0.5 x_2(k))\, x_1^{2}(k) \\ \cos(1.4 x_2(k)) \sin(0.9 x_1(k)) \end{bmatrix}, \qquad g(k) = \begin{bmatrix} x_1^{2}(k) + 1.5 & 0.1 \\ 0 & 0.2\left((x_1(k)+x_2(k))^{2} + 1\right) \end{bmatrix} \tag{62} \]
The reference trajectory x_d(k) for the above system is defined as
\[ x_d(k) = \begin{bmatrix} 0.25\sin(10^{-3} k) \\ 0.25\cos(10^{-3} k) \end{bmatrix} \tag{63} \]
where the time in seconds shown on the axes is obtained as the step index k (k = 1, ..., 10000) multiplied by the sampling period t_s = 0.001 in the simulation.
The RBF networks have a three-layer structure with 2 input neurons, 9 hidden neurons, and 2 output neurons. The parameters c_i and b_j of the radial basis functions are chosen as c_i (i = 1, 2, ..., 9) = [−2, −1.5, −1.0, −0.5, 0, 0.5, 1.0, 1.5, 2] for each of the two input channels and b_j = [2, 2], and the initial weights w_0 are chosen as random numbers in (0, 1). The inputs to the RBF-NN identifier are x(k) and the inputs to the RBF-NN steady-state controller u_d are x_d(k). The weights ŵ_d and ŵ_f are updated using (21) and (28). Because g_1 ≥ L_u ≥ 1, we can select g_1 = 5. According to 0 < g_1 ≤ k_0 in Theorem 1, we can select k_0 = 10. For the control parameter η, because the hidden layer has 9 neurons, l = 9 and 0 < η ≤ 1/l = 1/9, so we select η = 0.1. For the control parameters γ and σ, from Theorem 1 we know that 0 < (1+σ)·9·γ ≤ 1/5 − 1/10 = 0.10 and 0 < (9+σ)γ ≤ 1, so we select γ = 0.01 and σ = 0.001. The initial state is set as x(0) = 0. We trained the RBF networks with 10,000 steps of acquired data, and Figure 2 and Figure 3 show the tracking curves of the RBF-NN identifier approximating the unknown dynamics f.
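The following short sketch restates these simulation parameters and checks them against the conditions of Theorem 1; the center grid and weight shapes follow the description above, while the random seed and variable names are arbitrary.

```python
import numpy as np

# Simulation parameters, checked against the conditions of Theorem 1
l_f = l_d = 9                 # hidden neurons of both RBF networks
g_1, k_0 = 5.0, 10.0          # bound g_1 and Young's-inequality constant k_0
eta, gamma, sigma = 0.1, 0.01, 0.001

assert 0 < eta <= 1 / l_f                                   # condition (a)
assert 0 < g_1 <= k_0                                       # condition (b)
assert 0 < (1 + sigma) * l_d * gamma <= 1 / g_1 - 1 / k_0   # condition (c)
assert 0 < (l_d + sigma) * gamma <= 1                       # condition (d)

# RBF centers: nine points in [-2, 2] for each of the two input channels, width b = 2
centers = np.stack([np.linspace(-2.0, 2.0, 9)] * 2, axis=1)
b = 2.0
rng = np.random.default_rng(0)
w0_f = rng.uniform(0.0, 1.0, (9, 2))   # initial identifier weights in (0, 1)
w0_d = rng.uniform(0.0, 1.0, (9, 2))   # initial steady-state controller weights
```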
The performance index is selected as Q = I and R = I, where I is the identity matrix with appropriate dimensions. The actor network and the critic network use the same parameter settings. The initial weights of the critic network and the actor network are chosen as random numbers in (−10, 10). The input layer has 2 neurons, the hidden layer has 15 neurons, the output layer has 2 neurons, and the learning rate is 0.1. The hidden layer uses the tansig activation function, the output layer uses the purelin activation function, and the networks are trained with the trainlm algorithm. With these parameter settings, we train the actor network and the critic network for 5000 training steps to reach the given accuracy of 1e-9. Figure 4 shows the curves of the control input u. In Figure 5 and Figure 6, we can see the curves of the state trajectory x and the reference trajectory x_d.
Based on the above results, the simulation shows that the proposed tracking technique achieves a relatively satisfactory tracking performance for partially unknown discrete-time nonlinear systems.

5. Conclusion

This paper proposed an optimal tracking control scheme based on RBF-NNs and adaptive dynamic programming for a class of partially unknown discrete-time nonlinear systems. To deal with the unknown dynamics, two RBF-NNs are used to approximate the unknown function and the steady-state controller, respectively. Moreover, an ADP algorithm is introduced to obtain the optimal feedback control for the tracking error dynamics, with two feedforward neural networks utilized to approximate the cost function and the feedback control input, respectively. Finally, simulation results show a relatively satisfactory tracking performance, which verifies the effectiveness of the optimal tracking control technique. In future work, we will consider event-triggered control as well as completely unknown dynamics.

Author Contributions

All the authors contributed equally to the development of the research. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant No. 61463002, the Guizhou Province Natural Science Foundation of China under Grant No. Qiankehe Fundamentals-ZK[2021] General 322 and the Doctoral Foundation of Guangxi University of Science and Technology Grant No. Xiaokebo 22z04.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Acknowledgments

The authors thank the journal editors and the reviewers for their helpful suggestions and comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Broomhead, D.S.; Lowe, D. Radial basis functions, multi-variable functional interpolation and adaptive networks. 1988. [Google Scholar]
  2. Narendra, K.S.; Parthasarathy, K. Identification and control of dynamical systems using neural networks. IEEE Transactions on Neural Networks 1990, 1. [Google Scholar] [CrossRef] [PubMed]
  3. Narendra, K.S.; Mukhopadhyay, S. Adaptive control of nonlinear multivariable systems using neural networks. 1994; 737–752. [Google Scholar]
  4. Hartman, E.J.; Keeler, J.D.; Kowalski, J.M. Layered Neural Networks with Gaussian Hidden Units as Universal Approximations. Neural Computation 1990, 210–215. [Google Scholar] [CrossRef]
  5. Park, J. Universal approximation using radial basis function networks. Neural Comput. 1993. [Google Scholar] [CrossRef]
  6. Lewis, F.L.; Yesildirek, A.; Liu, K. Multilayer neural-net robot controller with guaranteed tracking performance. IEEE Transactions on Neural Networks 1996, 7. [Google Scholar] [CrossRef] [PubMed]
  7. Kobayashi, H.; Ozawa, R. Adaptive neural network control of tendon-driven mechanisms with elastic tendons. Automatica 2003, 1509–1519. [Google Scholar] [CrossRef]
  8. Lewis, F.L.; et al. Optimal Control, 3rd ed.; John Wiley & Sons, Inc.: New Jersey, 2012. [Google Scholar]
  9. Mannava, A.; et al. Optimal tracking control of motion systems. IEEE Trans. Control Syst. Technol. 2012, 1548–1558. [Google Scholar] [CrossRef]
  10. Sharma, R.; Tewari, A. Optimal nonlinear tracking of spacecraft attitude maneuvers. IEEE Trans. Control Syst. Technol. 2013, 12, 677–682. [Google Scholar] [CrossRef]
  11. Bellman, R.E. Dynamic Programming. Princeton University Press: Princeton, NJ, 1957. [Google Scholar]
  12. Lewis, F.L.; Syrmos, V.L. Optimal Control; Wiley: New York, 1995. [Google Scholar]
  13. Powell, W.B. Approximate Dynamic Programming: Solving the Curses of Dimensionality; Wiley: New York, NY, USA, 2009. [Google Scholar]
  14. Bertsekas, D.P.; Tsitsiklis, J.N. Neuro-Dynamic Programming; Athena Scientific: Belmont, MA, USA, 1996. [Google Scholar]
  15. Si, J.; Barto, A.G.; Powell, W.B.; Wunsch, D. Handbook of Learning and Approximate Dynamic Programming; Wiley: New York, NY, USA, 2004. [Google Scholar]
  16. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction; MIT Press: Cambridge, MA, USA, 1998. [Google Scholar]
  17. Lewis, F.L.; Vrabie, D. Reinforcement learning and adaptive dynamic programming for feedback control. IEEE Circuits Syst. 2009, 8, 32–50. [Google Scholar] [CrossRef]
  18. Lewis, F.L.; Vrabie, D.; Vamvoudakis, K.G. Reinforcement learning and feedback control: Using natural decision methods to design optimal adaptive controllers. IEEE Control Syst. 2012, 11, 76–105. [Google Scholar] [CrossRef]
  19. Lewis, F.L.; Liu, D. Reinforcement Learning and Approximate Dynamic Programming for Feedback Control; Wiley: Hoboken, NJ, USA, 2013. [Google Scholar]
  20. Fairbank, M.; Li, S.; Fu, X.; Alonso, E.; Wunsch, D. An adaptive recurrent neural-network controller using a stabilization matrix and predictive inputs to solve a tracking problem under disturbances. Neural Netw. 2014, 1, 74–86. [Google Scholar] [CrossRef]
  21. Vrabie, D.; Lewis, F. Neural network approach to continuous-time direct adaptive optimal control for partially unknown nonlinear systems. Neural Netw. 2009, 4, 237–246. [Google Scholar] [CrossRef]
  22. Liu, D.; Yang, X.; Li, H. Adaptive optimal control for a class of continuous-time affine nonlinear systems with unknown internal dynamics. Neural Comput. Appl. 2013, 11, 1843–1850. [Google Scholar] [CrossRef]
  23. Bhasin, S.; Kamalapurkar, R.; Johnson, M.; Vamvoudakis, K.G.; Lewis, F.L.; Dixon, W.E. A novel actor–critic–identifier architecture for approximate optimal control of uncertain nonlinear systems. Automatica 2013, 1, 82–92. [Google Scholar] [CrossRef]
  24. Al-Tamimi, A.; Lewis, F.L.; Abu-Khalaf, M. Discrete-time nonlinear HJB solution using approximate dynamic programming: Convergence proof. IEEE Trans. Syst. Man Cybern. B Cybern. 2008, 8, 943–949. [Google Scholar] [CrossRef]
  25. Prokhorov, D.V.; Wunsch, D.C. Adaptive critic designs. IEEE Trans. Neural Netw. 1997, 9, 997–1007. [Google Scholar] [CrossRef] [PubMed]
  26. Luo, Y.; Zhang, H. Approximate optimal control for a class of nonlinear discrete-time systems with saturating actuators. Prog. Natural Sci. 2008, 1023–1029. [Google Scholar] [CrossRef]
  27. Dierks, T.; Jagannathan, S. Online optimal control of nonlinear discrete-time systems using approximate dynamic programming. Control Theory Appl. 2011, 361–369. [Google Scholar] [CrossRef]
  28. Si, J.; Wang, Y.-T. Online learning control by association and reinforcement. IEEE Trans. Neural Netw. 2001, 5, 264–276. [Google Scholar] [CrossRef] [PubMed]
  29. Ren, L.; Zhang, G.; Mu, C. Data-based H control for the constrained-input nonlinear systems and its applications in chaotic circuit systems. IEEE Trans. Circuits Syst. 2020, 8, 2791–2802. [Google Scholar] [CrossRef]
  30. Zhao, F.; Gao, W.; Liu, T.; Jiang, Z.P. Event-triggered robust adaptive dynamic programming with output-feedback for large-scale systems. IEEE Trans. Control Netw. Syst. 2023, 8, 63–74. [Google Scholar] [CrossRef]
  31. Wei, Q.; Li, H.; Yang, X.; He, H. Continuous-time distributed policy iteration for multicontroller nonlinear systems. IEEE Trans. Cybern. 2021, 5, 2372–2383. [Google Scholar] [CrossRef] [PubMed]
  32. Zhang, Y.; Zhao, B.; Liu, D.; Zhang, S. Event-triggered control of discrete-time zero-sum games via deterministic policy gradient adaptive dynamic programming. IEEE Trans. Syst., Man, Cybern., Syst. 2022, 8, 4823–4835. [Google Scholar] [CrossRef]
  33. Zhu, Y.; Zhao, D.; He, H. Invariant adaptive dynamic programming for discrete-time optimal control. IEEE Trans. Syst., Man, Cybern., Syst. 2020, 11, 3959–3971. [Google Scholar] [CrossRef]
  34. Wei, Q.; Han, L.; Zhang, T. Spiking adaptive dynamic programming based on Poisson process for discrete-time nonlinear systems. IEEE Trans. Neural Netw. Learn. Syst. 2022, 5, 1846–1856. [Google Scholar] [CrossRef]
  35. Yang, Y.; Wunsch, D.; Yin, Y. Hamiltonian-driven adaptive dynamic programming for continuous nonlinear dynamical systems. IEEE Trans. Neural Netw. Learn. Syst. 2017, 8, 1929–1940. [Google Scholar] [CrossRef]
  36. Li, M.; Qin, J.; Freris, N.M.; Ho, D.W. Multiplayer Stackelberg–Nash game for nonlinear system via value iteration-based integral reinforcement learning. IEEE Trans. Neural Netw. Learn. Syst. 2022, 4, 1429–1440. [Google Scholar] [CrossRef] [PubMed]
  37. Guo, X.; Yan, W.; Cui, R. Integral reinforcement learning-based adaptive NN control for continuous-time nonlinear MIMO systems with unknown control directions. IEEE Trans. Syst., Man, Cybern., Syst. 2020, 11, 4068–4077. [Google Scholar] [CrossRef]
  38. Xue, W.; Fan, J.; Lopez, V.G.; Jiang, Y.; Chai, T.; Lewis, F.L. Off-policy reinforcement learning for tracking in continuous-time systems on two time scales. IEEE Trans. Neural Netw. Learn. Syst. 2021, 10, 4334–4346. [Google Scholar] [CrossRef]
  39. Sun, C.; Li, X.; Sun, Y. A parallel framework of adaptive dynamic programming algorithm with off-policy learning. IEEE Trans. Neural Netw. Learn. Syst. 2021, 8, 3578–3587. [Google Scholar] [CrossRef]
  40. Duan, J.; Guan, Y.; Li, S.E.; Ren, Y.; Sun, Q.; Cheng, B. Distributional soft actor-critic: Off-policy reinforcement learning for addressing value estimation errors. IEEE Trans. Neural Netw. Learn. Syst. 2022, 11, 6584–6598. [Google Scholar] [CrossRef]
  41. Qiao, L.; et al. A novel optimal tracking control scheme for a class of discrete-time nonlinear systems using generalised policy iteration adaptive dynamic programming algorithm. Syst. Sci. 2017, 525–534. [Google Scholar] [CrossRef]
  42. Kiumarsi, B.; Lewis, F.L. Actor-critic-based optimal tracking for partially unknown nonlinear discrete-time systems. IEEE Trans. Neural Networks Learn. Syst. 2017, 140–151. [Google Scholar] [CrossRef] [PubMed]
  43. Zhang, H.; et al. Optimal tracking control for a class of nonlinear discrete-time systems with time delays based on heuristic dynamic programming. IEEE Trans. Neural Networks 2011, 1851–1862. [Google Scholar] [CrossRef] [PubMed]
  44. Zhang, H.; Wei, Q.; Luo, Y. A Novel Infinite-Time Optimal Tracking Control Scheme for a Class of Discrete-Time Nonlinear Systems via the Greedy HDP Iteration Algorithm. IEEE Trans. Syst., Man, Cybern., Syst. 2008, 937–942. [Google Scholar] [CrossRef] [PubMed]
  45. Song, S.; Zhu, M.; Dai, X.; Gong, D. Model-Free Optimal Tracking Control of Nonlinear Input-Affine Discrete-Time Systems via an Iterative Deterministic Q-Learning Algorithm. IEEE Trans. Neural Networks Learn. Syst. 2024, 1, 999–1012. [Google Scholar] [CrossRef]
  46. Huang, Y.; Liu, D. Neural-network-based optimal tracking control scheme for a class of unknown discrete-time nonlinear systems using iterative ADP algorithm. Neurocomputing 2014, 46–56. [Google Scholar] [CrossRef]
  47. Dierks, T.; Jagannathan, S. Optimal tracking control of affine nonlinear discrete-time systems with unknown internal dynamics. In Proceedings of the IEEE Conference on Decision & Control IEEE; 2010. [Google Scholar] [CrossRef]
  48. Zhang, J.; Ge, S.S.; Lee, T.H. Direct RBF neural network control of a class of discrete-time non-affine nonlinear systems. In Proceedings of the American Control Conference; 2002. [Google Scholar]
  49. Ge, S.S.; Zhang, J.; Lee, T.H. Adaptive MNN control for a class of non-affine NARMAX systems with disturbances. Systems & Control Letters 2004, 53, 1–12. [Google Scholar] [CrossRef]
Figure 1. The structure schematic of the proposed technique.
Figure 2. The unknown function f(x_1) and the approximated unknown function f̃(x_1).
Figure 3. The unknown function f(x_2) and the approximated unknown function f̃(x_2).
Figure 4. Control input u.
Figure 5. The state trajectory x_1 and the reference trajectory x_{1d}.
Figure 6. The state trajectory x_2 and the reference trajectory x_{2d}.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.