Preprint
Review

Structural Identification of Control Objects—A Review


This version is not peer-reviewed

Submitted: 05 September 2023
Posted: 06 September 2023

Abstract
The structural identification (SI) problem for control objects remains unsolved. The main difficulty is the complexity of formalizing and interpreting the concept of structure. In most identification systems, the choice of the model form (its structure) is intuitive and based on the experience and knowledge of the researcher. The parametric identification task is often presented as SI, which introduces confusion into the understanding of the task and into decision-making: these are two different areas of research. The structural identification problem is multifaceted and includes many subtasks whose solutions together give the final result; some of these subtasks have been solved. The purpose of this work is to review existing approaches and methods for the structural identification of control objects from a system perspective. The SI problem statement is given at the multiplicity-information level to reflect the difficulties of formalizing SI. New directions of analysis that until now were not considered SI areas are also discussed.
Keywords: 
Subject: Physical Sciences - Mathematical Physics

1. Introduction

The identification problem occupies one of the central places in the theory of control systems. Parametric estimation (PE) and structure selection are the main directions of identification theory. PE is the basis of identification theory and has by now received the most complete development [1,2,3,4,5,6]. Structural identification (SI) has remained practically undeveloped. This has several explanations. First, there are the difficulties of formalizing the SI problem. They cause further problems, which many authors resolve at an intuitive level. Such an approach does not provide a general method for solving a wide class of SI problems. Below, we consider the problems that complicate the solution of SI.
Despite the complexity of SI, many publications are devoted to the consideration and study of the problem. One of the first reviews containing a method for model structure selection was given in [7]. Some aspects of and approaches to structural identification are considered in [3,8,9,10]. The proposed methods are based on a selective parametric approach. There are also approaches to choosing the model structure that have not received proper coverage in the literature but are effective.
The purpose of this work is to consider some SI tasks from a systemic perspective. We consider the structure concept, which is interpreted widely by various authors. These interpretations do not always reflect the essence of structure when solving applied identification problems. Next, the formulation of the structural identification problem is considered, along with the difficulties arising in solving the SI problem for static systems. We analyse the methods and approaches used to solve structural identification in various problems.

2. Structure Concept in Identification Tasks

The structure concept does not have a clear definition and is interpreted very widely in modern science; each knowledge area gives its own interpretation. B. Green [11] uses the mathematical structure (ST) concept and interprets ST as any mathematical theory. In relation to living systems, ST [12] is a set of stable connections in an object. Structure (from the Latin structūra), or building, is the internal organization of something; the internal building relates to the categories of the whole and its parts.
In automatic control theory (TAS) [13,14], the structure is the set of mathematical equations describing processes in the system. TAS uses the concept of a system with a variable structure [14]; here, the structure is interpreted as a restructuring (change) of system parameters. In control systems, the block diagram concept is widely used, which reflects the system's building (composition). The data structure in computer science is an analog of a block diagram in TAS.
The ST concept is widely used in mathematics [11,13]. This term is synonymous with various abstract concepts and categories; often, the structure in the mathematical sense is a system of equations. In systems theory [16], ST is understood as the system's building, the organization of its elements.
Two features of the interpretation of the term structure can be highlighted. First, the structure is treated as a complex system; as a rule, the structure covers many technical, social, and organizational systems. Second, ST is understood as an abstract category or a set of objects used to describe and analyse complex processes and phenomena. The second interpretation of ST corresponds most adequately to identification theory, and we will adhere to it.
Identification theory is considered in the wide and narrow sense [17]. In the wide sense, ST is a research problem, i.e., the choice of the form of the operators or equations describing the processes in the system. Note that the problem of choosing the mathematical model structure under uncertainty is far from its final solution. The complexity of the SI problem lies in the absence of regular methods for synthesizing the model structure. Mathematical objects describing the structural indicators of a system do not exist: it is intuitively clear that there is a structure, but SI cannot be described in a formalized language. The SI difficulties considered in this paper are analysed in [18].
Besides the above interpretation, ST can be understood as a mathematical object described by a functional mapping. Such an ST has a graphical representation and describes the processes in the system in a generalized form. This object is called a virtual portrait defined in some space. Many authors interpret the SI concept from the perspective of applied problems (see, e.g., [19,20,21]).

3. Structural Identification Problem Statement

As noted in the introduction, the structural identification problem is difficult to formalize. We give one of the possible SI problem formulations, based on the multiplicity-information approach [22].
Consider the system described by the equation

\dot{X}(t) = F(X, A, t) + B u(t), \quad y(t) = C^T X(t) + \xi(t),   (1)

where X \in R^m is the state vector; F: R^m \times R^k \times J \to R^m is a smooth, continuously differentiable m-dimensional vector function; t \in J; y is the output; u is the input; A \in R^k is the parameter vector; B \in R^m; \xi \in R is a piecewise continuous bounded perturbation; C \in R^m.
The a priori information

I_a(X, S_S, G_S, u, \xi) = \{S_S, G_S, I_a^X, I_a^u, I_a^{\xi}\}   (2)

is a set containing the available information about the structure S_S of the vector function F, the parameters A, B \in G_S, and the characteristics of the input, output, and perturbation.
The set S_S contains information about the class of operators describing the dynamics of system (1), as well as some structural parameters A_S. The level of I_a determines the cardinality of A_S. In identification tasks, the formation of S_S and A_S is based on the experience and intuition of the researcher. Given the non-formalizability of S_S in (2), the cardinality of the subsets S_S, A_S can be set only fuzzily and, most often, is uncertain. This complicates the solution of the SI problem.
The experimental information is the set

I_o = \{u(t), y(t), \; t \in J = [t_0, t_k]\}.

The perturbation \xi may have a different nature [23,24]; \xi is assumed bounded.
Consider an operator \hat{F}_i(\cdot) \in R^m that is a candidate for forming the structure of the vector function F(X, A, t) in (1). Let \hat{F}_i(\cdot) \in S_S be parametrized up to a pair (\hat{A}_i, \hat{B}_i) \in A_S \subset S_S. Apply the model

\dot{\hat{X}}(t) = \hat{F}(\hat{X}, \hat{A}, t) + \hat{B} u(t).
Problem: using the a priori information I_a and the experimental information I_o, estimate the structure of the vector function F in (1) so as to minimize the cardinality of the set S_S:

\arg\min_{\hat{F}} \# S_S = F^*.   (3)

The fulfillment of (3) is equivalent to the condition

\arg\min_{\hat{A}, \hat{B}} \# A_S = \{A^*, B^*\}.

We do not specify the class of parametric identification methods, since the elements of the set S_S determine their type. The choice of the identification criterion for \# S_S reflects the non-standard nature and complexity of the problem.
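To make the set-theoretic statement concrete, here is a minimal numerical sketch (not the method of [22]): the candidate set S_S is modeled as polynomial structures F̂ of increasing degree, each fitted by least squares, and the decision rule is an assumed BIC-type complexity criterion. The data-generating system, the candidate class, and the criterion are all illustrative assumptions.

```python
import math
import random

def solve(G, b):
    # Gauss-Jordan elimination with partial pivoting (small dense systems)
    n = len(G)
    M = [row[:] + [b[i]] for i, row in enumerate(G)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def rss_poly(x, y, deg):
    # least-squares polynomial fit of a given degree; returns residual sum of squares
    G = [[sum(xi ** (i + j) for xi in x) for j in range(deg + 1)] for i in range(deg + 1)]
    rhs = [sum(yi * xi ** i for xi, yi in zip(x, y)) for i in range(deg + 1)]
    c = solve(G, rhs)
    return sum((yi - sum(ck * xi ** k for k, ck in enumerate(c))) ** 2
               for xi, yi in zip(x, y))

random.seed(0)
N = 200
x = [i / N for i in range(N)]
# "true" structure is quadratic; coefficients and noise level are assumed
y = [1.0 + 0.5 * xi + 2.0 * xi ** 2 + random.gauss(0, 0.05) for xi in x]

def criterion(deg):
    # BIC-type penalty: data-fit term plus a complexity charge per parameter
    return N * math.log(rss_poly(x, y, deg) / N) + (deg + 1) * math.log(N)

best = min(range(1, 5), key=criterion)  # candidate structures: degrees 1..4
print(best)
```

The criterion selects the quadratic structure: richer candidates reduce the residual only marginally, and the complexity penalty removes them from the "minimal-cardinality" solution.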
The structural identification issue is complex and requires solving many subtasks. An overview of the solution methods is given below. Other statements of the SI problem are known [7].

4. Requirements for Model Structure

The following groups of factors are decisive for the model structure choice [7]: (1) ensuring the maximum quality of the recovered model; (2) minimizing the amount of computation in model synthesis.

5. On SI Difficulties of Static Systems

Static frameworks (SF) are proposed in a special structural space [25,26]. SF analysis allows deciding about the nonlinearity class in static systems (SS). The need to allocate a structural space for nonlinear SS is explained by the following reasons [27]:
(i) the system output is an integrated quantity reflecting the influence of the set of input variables;
(ii) the absence of a method for identifying input-output relationships in the nonlinearity-type classification problem. Approaches to assessing the degree of nonlinearity of a system are described in [1,4]. Estimating the nonlinearity degree is the key to choosing the class of nonlinear static models. The solution is based on the use of a complex mathematical apparatus, which does not always offer an approach to SI;
(iii) a priori assignment of the nonlinear structure [5,28,29], based on a set of existing inputs and a parametric approach, requires solving the multicollinearity problem [30,31]. The problem is complex, while the applied solution is simple and results from the practical implementation of the model. The simple solution is the exclusion of dependent variables, which is used very widely in real control systems, but it closes the way to considering the SI problem. This relationship is not always understood and is not considered in existing methods;
(iv) the application of parametric methods based on a given class of polynomials [28,29,32]. The effectiveness of this approach depends on the experience and intuition of the researcher; it requires preliminary labor-intensive research and does not allow determining the nonlinearity structure explicitly.
These problems explain the current state of SI for static systems. Despite the apparent simplicity of the SS mathematical description, its analysis requires the design and application of non-trivial approaches and methods [26,27]. The designed methods should resolve the existing contradictions. These contradictions have a structural form and are associated with multicollinearity, correlation, lag, etc. The SI problem is insoluble while the parametrization paradigm dominates. The parametric approach is a powerful tool for solving control tasks, but the parametrization problem is secondary in SI tasks. This should be understood when designing SI procedures under uncertainty. Only the correct choice of the model structure allows passing to the solution of the parametric estimation problem. Such a relationship is not always understood and requires the use of various procedures to compensate for emerging uncertainties. Note that collinearity (COL) is a problem that cannot be solved by parametric methods. But this does not mean that COL is a "parasite" that should be excluded from system structure assessment: COL is a structural indicator. Only the analysis of the system's information set is a condition for obtaining structural indicators. This is a non-trivial task, but it has a solution, obtained in a structural space [25,26] that does not always contain the system variables. A static framework [26], defined on a set of generalized variables and parameters, is constructed in this space. The analysis of framework changes and the evaluation of its parameters allow deciding about nonlinear processes in the system.
Below, we consider methods for evaluating the structural parameters of control objects. The evaluation of structural parameters is based on the use of indirect methods.

6. Model Order Estimation

Different approaches are used to estimate the model order [1]. The methods can be classified as follows.
1. Spectral characteristics study of the transfer function (TF).
2. Checking the ranks of sample covariance matrices.
3. Correlation of variables.
4. Analysis of the information matrix.
1. Spectral-analytical estimates. If the TF contains information about the magnitude of the resonant peak, high-frequency slopes, or phase shifts, then this information is the basis for choosing the model order [33,34,35].
2. Checking the ranks of sample covariance matrices. This approach is based on the fact [1] that a regression model with a vector of variables is synthesized. Form the matrix

P_q(N) = \frac{1}{N} \sum_{i=1}^{N} \Phi_{q,i} \Phi_{q,i}^T,   (4)

where N is the data sample length and q is the estimate of the system order. If the system input is constantly excited, then the matrix P_q(N) is non-degenerate for q \le m and degenerate for q > m. Therefore, \det P_q(N) [36] can be used to test statistical hypotheses and decide about the system order. The relationship between the degeneracy of matrix (4) and the model order is studied in [37], and an algorithm implementing the estimation method is proposed in [36]. If there is interference, a threshold is introduced for applying the matrix P_q(N) to minimize the effect of interference with a large amplitude. A modification of matrix (4) is proposed: if the signal-to-noise ratio is small, then [36]
\hat{P}_q(N) = P_q(N) - \sigma_{\xi}^2 P_{\xi},   (5)

where the term \sigma_{\xi}^2 P_{\xi} accounts for the effect of the interference \xi on P_q(N).
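A minimal sketch of the rank test based on matrix (4), under assumptions: a noise-free second-order example, with regressors formed from q lagged outputs and q lagged inputs (the exact regressor layout in [1,36] may differ).

```python
import random

def mat_rank(M, tol=1e-8):
    # numerical rank via Gaussian elimination with partial pivoting
    A = [row[:] for row in M]
    n, rank = len(A), 0
    scale = max(abs(v) for row in A for v in row) or 1.0
    for c in range(n):
        p = max(range(rank, n), key=lambda r: abs(A[r][c]))
        if abs(A[p][c]) < tol * scale:
            continue  # (near-)zero pivot column => rank deficiency
        A[rank], A[p] = A[p], A[rank]
        for r in range(rank + 1, n):
            f = A[r][c] / A[rank][c]
            A[r] = [a - f * b for a, b in zip(A[r], A[rank])]
        rank += 1
    return rank

# noise-free second-order system (assumed example): y_n = 0.5 y_{n-1} - 0.3 y_{n-2} + u_{n-1}
random.seed(1)
T = 400
u = [random.gauss(0, 1) for _ in range(T)]
y = [0.0, 0.0]
for n in range(2, T):
    y.append(0.5 * y[n - 1] - 0.3 * y[n - 2] + u[n - 1])

def P(q):
    # sample covariance matrix (4) built from regressors [y_{i-1..i-q}, u_{i-1..i-q}]
    phi = [[*(y[i - j] for j in range(1, q + 1)), *(u[i - j] for j in range(1, q + 1))]
           for i in range(q + 1, T)]
    N = len(phi)
    return [[sum(p[a] * p[b] for p in phi) / N for b in range(2 * q)] for a in range(2 * q)]

ranks = {q: mat_rank(P(q)) for q in (1, 2, 3)}
print(ranks)
```

P_q stays full-rank while q does not exceed the true order, and an exact linear dependence among the lagged variables makes it rank-deficient for q = 3, which signals the order m = 2.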
Models of various orders are used to estimate the object order under uncertainty. An error criterion [38] is then introduced, which allows selecting the model of the required order. This approach is time-consuming and imposes certain requirements on the researcher's qualifications. If the chosen model order is too small, smoothed spectral estimates are obtained. If the order is too large, the spectrum resolution increases, which leads to the appearance of false peaks in the spectrum. Therefore, when evaluating the autoregressive model order, a compromise must be observed between the resolution and the variance for classical spectral estimation methods. The residual variance influences the model order choice [38] as well. The Akaike criterion [38,39] is often used to identify the model order, and various modifications of it exist. The first criterion estimates the final prediction error: the autoregressive model order is chosen by minimizing the average error variance at each prediction step. For an autoregressive process, the criterion has the form [38]

Q_1(m) = \hat{\sigma}_m \frac{N + m + 1}{N - m + 1},   (6)

where N is the number of data, m is the model order, and \hat{\sigma}_m is the white-noise variance estimate. Centered values of the variables are used to estimate the error variance in (6). The second factor on the right-hand side of (6) grows as the model order increases, which increases the criterion value. Therefore, the model order m should deliver the minimum of Q_1(m). The application of Q_1(m) gives an underestimated value of the model order [40,41].
The second Akaike criterion, or Akaike Information Criterion (AIC), is based on the maximum likelihood method: the model order is determined by minimizing an information-theoretic function. If the data have a normal distribution, then the AIC has the form [38,42]

Q_2(m) = N \ln \hat{\sigma}_m + 2m.   (7)

The second term in (7) characterizes the penalty for using additional AR coefficients; this representation of the criterion does not reduce the prediction error variance. The model order is determined from the condition of minimizing Q_2(m). Criteria (6) and (7) are asymptotically equivalent as N \to \infty. Q_2(m) gives good results for ideal AR processes but has the same problems as Q_1(m). The AIC is statistically inconsistent [43]: it overestimates the model order as N \to \infty. The following criteria are also used to evaluate the model order:
  • the Bayesian information criterion, or Schwarz criterion [46],
    BIC = SC = m \ln N - 2 \ln L,   (8)
    where L is the maximum value of the likelihood function for the estimated model;
  • the Hannan-Quinn information criterion [47],
    HQC = N \ln \frac{ECC}{N} + 2 m \ln \ln N,   (9)
    where ECC is the sum of squared deviations;
  • modifications of criteria (6)-(9), which are used for the synthesis of various models.
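As an illustration of how such criteria behave, the following sketch fits AR models of increasing order to a synthetic AR(2) series and compares FPE-type, AIC-type, and BIC-type values. The data-generating coefficients are assumed for the example, and the BIC is written here as N ln σ̂_m + m ln N, which is equivalent to (8) up to an additive constant under Gaussian errors.

```python
import math
import random

def solve(G, b):
    # Gauss-Jordan elimination with partial pivoting (small dense systems)
    n = len(G)
    M = [row[:] + [b[i]] for i, row in enumerate(G)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ar_residual_var(y, m):
    # least-squares AR(m) fit; returns the white-noise variance estimate sigma_m
    rows = range(m, len(y))
    G = [[sum(y[t - i - 1] * y[t - j - 1] for t in rows) for j in range(m)] for i in range(m)]
    b = [sum(y[t - i - 1] * y[t] for t in rows) for i in range(m)]
    a = solve(G, b)
    return sum((y[t] - sum(a[j] * y[t - j - 1] for j in range(m))) ** 2 for t in rows) / len(rows)

# synthetic AR(2) data (assumed coefficients): y_n = 0.6 y_{n-1} - 0.28 y_{n-2} + e_n
random.seed(5)
N = 500
y = [0.0, 0.0]
for _ in range(N + 100):
    y.append(0.6 * y[-1] - 0.28 * y[-2] + random.gauss(0, 1))
y = y[-N:]  # drop the transient

orders = range(1, 6)
sig = {m: ar_residual_var(y, m) for m in orders}
fpe = {m: sig[m] * (N + m + 1) / (N - m + 1) for m in orders}        # criterion (6)
aic = {m: N * math.log(sig[m]) + 2 * m for m in orders}             # criterion (7)
bic = {m: N * math.log(sig[m]) + m * math.log(N) for m in orders}   # BIC-type form
picks = {name: min(orders, key=c.get) for name, c in
         [("FPE", fpe), ("AIC", aic), ("BIC", bic)]}
print(picks)
```

With a long sample and a clearly second-order process, all three criteria avoid underfitting; the heavier BIC penalty makes it the least prone to the order overestimation noted above for the AIC.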
We should also mention the interesting direction of choosing the model order for systems with lag variables (LPS). The main LPS problem is to minimize the number of model parameters and, consequently, the model order. Various schemes for approximating the vector of model parameters have been proposed for LPS under a priori information. The parametric schemes of I. Fischer [50], L.M. Koyck [30], and S. Almon [51] (a modified Fischer model) are widely used. Durbin-Watson and von Neumann statistics [30] are used to verify the model order estimate. Under uncertainty, the use of these schemes is associated with solving a series of problems. A functionally multiple approach to the structural identification of discrete LPS is proposed in [52]. The decision on the LPS structure is based on the analysis of special frameworks (SF); it is shown that the distributed lag can be interpreted as a nonlinearity, and the second-order secant for the SF is a criterion for deciding on the lag length. The approach of [52] applies to choosing the order of models containing lags in the input variables, the output variables, and their combinations.
3. Correlation of variables (VC). VC relates to the choice of variables included in the model, which indirectly affects the model order. The problem is solved by determining the correlation between the output and a candidate variable for inclusion in the model. Correlation indicators also serve for including lag variables in an autoregressive model. The method of canonical correlations, or partial correlation [31,48,49], is the basis for indirect order estimation.
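A small sketch of correlation-based variable screening: the raw correlation suggests including a candidate variable x, while the partial correlation given another regressor z reveals that x is only a proxy for z. The data-generating model is assumed for illustration.

```python
import math
import random

def corr(a, b):
    # sample Pearson correlation coefficient
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    return num / math.sqrt(sum((u - ma) ** 2 for u in a) * sum((v - mb) ** 2 for v in b))

def partial_corr(y, x, z):
    # first-order partial correlation of y and x, controlling for z
    ryx, ryz, rxz = corr(y, x), corr(y, z), corr(x, z)
    return (ryx - ryz * rxz) / math.sqrt((1 - ryz ** 2) * (1 - rxz ** 2))

random.seed(4)
z = [random.gauss(0, 1) for _ in range(3000)]
x = [zi + random.gauss(0, 0.5) for zi in z]    # x is only a noisy proxy for z
y = [2 * zi + random.gauss(0, 0.3) for zi in z]  # the output truly depends on z alone

r_raw = corr(y, x)
r_part = partial_corr(y, x, z)
print(round(r_raw, 2), round(r_part, 2))
```

The raw correlation is large, but the partial correlation is near zero, so x would be excluded; this is the indirect structural information the correlation methods above extract.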
4. Analysis of the information matrix. The information matrix P_q(N) is the main object of analysis in the modern theory of identification. The properties of P_q(N) influence the solution of the parametric estimation problem. As shown in [1], if the model order estimate is overestimated, then the system is unidentifiable; hence, the matrix P_q(N) has an incomplete rank. Model order overestimation based on the analysis of the information matrix is considered in [53,54,55,56].
Remark 1. The constant excitation property PE_{\alpha, \bar{\alpha}_S} is associated with the information matrix P_q(N). This property determines the possibility of evaluating the system parameters. As recent studies show [57], the PE_{\alpha, \bar{\alpha}_S} property does not guarantee the structural identification and identifiability of a nonlinear system; it should also guarantee the S-synchronizability of the system.
Other approaches to estimating the model order are known [58]. They do not fit the considered paradigm and give an indirect estimate of the system order. The S_{ey}-framework method (SFM) is used to estimate the order of a dynamic system; the SFM is considered below.

7. System Nonlinearity Degree

The model structure choice is based on evaluating the system linearity (nonlinearity) class. The estimation of the system nonlinearity degree is based on correlation and variance analysis [59].
In the general case, the regression between the output Y(t) at time t and the input X(s) at time s of the system is some curve. The nonlinearity of the regression M[Y_t \mid X_s] is estimated as the mean square deviation of this curve from a straight line. The degree of nonlinearity of Y_t relative to X_s is evaluated as [59]

n_{yx}(t,s) = \min_{a(t,s),\, b(t,s)} \frac{M\left[\left(M[Y_t \mid X_s] - a(t,s) - b(t,s) x_s\right)^2\right]}{D_Y(t)},   (10)

a(t,s) = M[Y_t] - b(t,s) M[X_s],   (11)

b(t,s) = R_{yx}(t,s) \sqrt{\frac{D_Y(t)}{D_X(s)}},   (12)

where D_Y(t) is the variance of the random function Y(t), M[Y_t \mid X_s] is the conditional mathematical expectation of Y(t) relative to X(s), and R_{yx}(t,s) is the normalized correlation function.
The minimum in (10) is reached for a(t,s) and b(t,s) satisfying (11), (12). From (10)-(12) we obtain the nonlinearity degree estimate

n_{yx}^2(t,s) = \eta_{yx}^2(t,s) - R_{yx}^2(t,s),

where \eta_{yx}(t,s) is the correlation ratio.
It is noted in [1] that the nonlinearity degree estimate serves for deciding whether a linear or a nonlinear model should be applied. Further development of correlation and variance analysis is given in [60,61,62,63,64], where higher-order correlation functions are used to calculate the nonlinearity degree. The review [64] presents the current state of this problem.
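This estimate can be sketched on synthetic data: the nonlinearity degree is approximated as the difference between the squared correlation ratio (estimated here by binning the input) and the squared correlation coefficient. The binning scheme and the test signals are assumptions of this illustration, not the estimator of [59].

```python
import random

def corr_sq(x, y):
    # squared sample correlation coefficient R^2
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

def corr_ratio_sq(x, y, bins=20):
    # squared correlation ratio eta^2: variance explained by conditional means M[y|x]
    lo, hi = min(x), max(x)
    groups = {}
    for a, b in zip(x, y):
        k = min(int((a - lo) / (hi - lo) * bins), bins - 1)
        groups.setdefault(k, []).append(b)
    my = sum(y) / len(y)
    between = sum(len(g) * (sum(g) / len(g) - my) ** 2 for g in groups.values())
    total = sum((b - my) ** 2 for b in y)
    return between / total

random.seed(2)
x = [random.uniform(-1, 1) for _ in range(2000)]
y_lin = [2 * a + random.gauss(0, 0.1) for a in x]      # linear test system
y_sq = [a * a + random.gauss(0, 0.1) for a in x]       # quadratic test system

n2_lin = corr_ratio_sq(x, y_lin) - corr_sq(x, y_lin)
n2_sq = corr_ratio_sq(x, y_sq) - corr_sq(x, y_sq)
print(round(n2_lin, 3), round(n2_sq, 3))
```

For the linear system the difference is near zero, while for the quadratic system it is large (the symmetric nonlinearity is invisible to the correlation coefficient but not to the correlation ratio), so the estimate correctly separates the two model classes.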
An approach to choosing the structure of a nonlinear static system is proposed in [25]. It is based on evaluating the identification power of the system's structural coefficient. The identification power is an indicator for assessing the nonlinearity degree.

8. LPS Structural Identification

Models with lag variables (LV) are widely used in econometrics and economics [30,63,64,65], engineering [25,66,67], and medicine [68,69,70]. The delay can affect independent or dependent variables. Considering LV leads to autocorrelation between variables [29,61,63] and complicates the identification of system parameters. The choice of the discrete system model structure is based on various schemes for approximating the parameters at LV [30,50,51]. This approach reduces the number of estimated parameters [71].
The approach to structural identification based on approximation schemes for the parameters at LV is widely used in econometrics. The system model with LV at the input has the form [65]

y_n = \sum_{i=0}^{k} \omega_i x_{n-i} + \xi_n,   (13)

where the \omega_i are constants that are not all zero simultaneously, x_n is the input variable, and \xi_n is a random variable independent of x_i with zero mean and constant variance.
As a rule, the parametric scheme choice is based on a priori information. The scheme is chosen so that the influence of the input variable at time n = 0 is the most significant. Thus, the sequence \omega_0, \omega_1, \ldots, \omega_k [63] increases over its first terms; after the maximum is reached, the sequence should decrease. Considering this, I. Fischer [63,70,72,73] proposed varying the coefficients of (13) according to a decreasing arithmetic progression

\omega_i = \omega_2 \left( 1 - \frac{i - 2}{k - 1} \right), \quad 2 \le i \le k,   (14)

where \omega_0, \omega_1 are arbitrary numbers. Thus, the task is reduced to the evaluation of the parameters \omega_0, \omega_1, \omega_2, and k. S. Almon [52,68,69] modified I. Fischer's model by applying a polynomial law for the coefficient variation.
The Koyck scheme is also widely used [30,73]. The model coefficients change according to a decreasing geometric progression

\omega_i = \omega_2 q^{i-2}, \quad i = 2, 3, \ldots,   (15)

where q > 0. Schemes (14), (15) are applicable to time series only when the \omega_i decrease starting from the first terms; they do not work on small samples.
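A sketch of the Koyck idea in its classical form ω_i = ω_0 q^i, i ≥ 0 (slightly simpler than the variant starting at i = 2 above): the infinite distributed lag collapses, via the Koyck transformation y_n = q y_{n-1} + ω_0 x_n, to two parameters recoverable by least squares. Noise-free data and the parameter values are assumed so that the recovery is exact.

```python
import random

random.seed(3)
q, w0 = 0.6, 1.5   # assumed geometric decay and leading weight
T = 300
x = [random.gauss(0, 1) for _ in range(T)]
# distributed-lag output with weights w_i = w0 * q**i (noise-free for clarity)
y = [sum(w0 * q ** i * x[n - i] for i in range(n + 1)) for n in range(T)]

# Koyck transformation: y_n = q*y_{n-1} + w0*x_n, so two parameters suffice;
# solve the 2x2 normal equations for (q, w0) by least squares
s11 = sum(y[n - 1] ** 2 for n in range(1, T))
s12 = sum(y[n - 1] * x[n] for n in range(1, T))
s22 = sum(x[n] ** 2 for n in range(1, T))
b1 = sum(y[n - 1] * y[n] for n in range(1, T))
b2 = sum(x[n] * y[n] for n in range(1, T))
det = s11 * s22 - s12 * s12
q_hat = (b1 * s22 - b2 * s12) / det
w0_hat = (s11 * b2 - s12 * b1) / det
print(round(q_hat, 3), round(w0_hat, 3))
```

The full weight sequence is never estimated directly; this is exactly the parameter-count reduction that makes the parametric lag schemes attractive, and also why they fail when the true weights do not decrease geometrically.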
The conditions q_i > 0, \sum_i q_i = 1 can be interpreted as a probability distribution on a set of non-negative integers. Therefore, q_i can be formally considered as the probability assigned to the integer i. This idea gave direction to the development of parametric laws for \omega_i based on probability theory on non-negative integers. A scheme with a log-normal distribution for q_i is proposed in [72]: q_i is interpreted as the probability that a normal variable with mean \mu and mean square deviation \sigma belongs to [\ln(i), \ln(i+1)). The assignment of distribution laws for q_i is considered in [74]. Other approaches to the construction of parametric schemes for LV are described in [65,72,75,76].
The considered parametric schemes minimize the number of unknown parameters. The least squares method or its modifications are used to estimate the system parameters [30,63,64,65]: the model structure is set a priori, and the parametric identification problem is solved. An interactive algorithm [77] for estimating the parameters of a static system with LV has been proposed, in which the lag length is set and parametric schemes are not applied. The choice of the maximum lag length based on the analysis of residuals is considered in [63,74,78]. The influence of a priori uncertainty on the selection of the structure and parameters of the system has not been studied.
In [26,79], a structural identification method is proposed for systems with a distributed lag under uncertainty. The method is based on virtual frameworks (VF) reflecting the system properties [26]. An object linearity criterion is introduced, and algorithms for determining the maximum length of the distributed lag are proposed; they do not require the calculation of statistics. An analogue of the Durbin-Watson criterion is obtained for this case. The laws of variation of the system parameters are not set a priori: the lag length choice is based on the VF analysis [79,80]. The proposed approach generalizes to a class of autoregressive models.

9. Structural Identifiability of Systems

The identifiability problem relates to the possibility of estimating the parameters of a dynamic system. The approach to identifiability assessment is based on the ideas of R. Kalman [81]. Further development of these ideas is given in [37,83]. R. Lee [37] gives the following definition of identifiability.
Consider the system
X_{n+1} = A X_n, \quad y_n = C^T X_n,   (16)

where X_n \in R^m is the state vector, A \in R^{m \times m}, y_n is the system output, and n \in J_n = \{0, 1, \ldots, N\} is discrete time.
Problem: determine the conditions under which the system is identifiable based on I_o = \{y_n, \; n = \overline{0, N}\}, N < \infty.
For the case y_n \in R^m, the following necessary and sufficient conditions are given in [37].
Definition 1. The system (16) is called n-identifiable if the matrix A is determined based on the measurement of the vector X.
Definition 2. The system (16) is called 1-identifiable if the matrix A is determined based on the measurement of y.
The n-identifiability condition is satisfied if the matrix \left[ X_0 \; A X_0 \; A^2 X_0 \; \cdots \; A^{m-1} X_0 \right] is non-degenerate.
The 1-identifiability conditions are: (1) the system (16) is n-identifiable; (2) the pair (A, C) is observable.
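The n-identifiability rank condition can be checked directly for a small example (m = 2); the matrix A and the initial states below are assumed purely for illustration.

```python
def matvec(A, v):
    # multiply a matrix (list of rows) by a vector
    return [sum(a * b for a, b in zip(row, v)) for row in A]

def det2(M):
    # determinant of a 2x2 matrix
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def n_identifiable(A, x0):
    # build the columns X0, A X0 (m = 2) and test non-degeneracy
    cols = [x0, matvec(A, x0)]
    K = [[cols[j][i] for j in range(2)] for i in range(2)]
    return abs(det2(K)) > 1e-12

A = [[0.5, 1.0], [0.0, 0.8]]
print(n_identifiable(A, [1.0, 1.0]))   # generic initial state
print(n_identifiable(A, [1.0, 0.0]))   # initial state is an eigenvector of A
```

When X_0 is an eigenvector of A, the columns X_0 and A X_0 are proportional, the matrix degenerates, and A cannot be recovered from the trajectory: the excitation carried by the initial state matters, not only the structure of A.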
In [37,79], the case is considered in which the order of the dynamical system is less than m.
The analysis of publications shows that the identifiability evaluation of system (16) is performed in a parametric space. We call this IP-identifiability (IPI). IPI is studied by many authors, and the identifiability results [37,82,84] are presented in the form accepted in parametric estimation problems.
The concept of structural identifiability, not based on IPI, is introduced in [84]. Consider two dynamic systems S_1(U_1, Y_1, A_1), S_2(U_2, Y_2, A_2) with inputs U_1, U_2, outputs Y_1, Y_2, and parameters A_1, A_2. The systems are described by the models M_1(U_1, \hat{Y}_1, \hat{A}_1) and M_2(U_2, \hat{Y}_2, \hat{A}_2).
Definition 3 [84]. If the condition M_1(\hat{A}_1) \equiv M_2(\hat{A}_2) holds for U_1 = U_2, Y_1 = Y_2, and \hat{A}_1 \ne \hat{A}_2, then the models M_1, M_2 are indistinguishable from the observed inputs and outputs.
Definition 4 [37,84]. The parameter \hat{a}_{1,i} \in \hat{A}_1 is called structurally globally identifiable if the condition M_1(\hat{A}_1) \equiv M_2(\hat{A}_2) \Rightarrow \hat{a}_{1,i} = \hat{a}_{2,i} is satisfied for almost any \hat{A}_2 \in \Omega_P, where \Omega_P is the parametric space.
Definition 5 [84]. The parameter \hat{a}_{1,i} \in \hat{A}_1 is structurally locally identifiable if there exists a neighbourhood O(\hat{A}_2) such that M_1(\hat{A}_1) \equiv M_2(\hat{A}_2) \Rightarrow \hat{a}_{1,i} = \hat{a}_{2,i} follows from the condition \hat{A}_1 \in O(\hat{A}_2) for almost any \hat{A}_2 \in \Omega_P.
Local identifiability is a necessary condition for global identifiability. A parameter that is structurally locally identifiable but not structurally globally identifiable is called structurally globally unidentifiable. Various approaches and methods can be used to verify structural identifiability [85,86]. The concepts of local parametric identifiability and local identifiability at a point are introduced in [87] together with their theoretical justification.
Remark 2. Most of the works devoted to identifiability do not consider the SI problem. Therefore, the structural identifiability concept does not reflect the essence of the SI problem. This terminology is actively used in identifiability assessment tasks, and we use it here to continue analysing the obtained results. Below, a concept is introduced that directly relates to the structural identifiability of nonlinear systems in a structural space.
In [87], criteria for assessing the local identifiability of a linearized system (16) are proposed; the rank of the state matrix is m. For an inhomogeneous linear system, a method for estimating local identifiability based on the evaluation of Lyapunov exponents has been developed. In [88], parametric identifiability criteria are introduced, and the results obtained in [87] are generalized. Complete identifiability conditions of a linear stationary system from discrete measurements are obtained in [89,90].
The IPI of nonlinear systems is studied by many authors (see, for example, [89,90,91,92]). The approach of [90], based on the study of the system output sensitivity, is used for identifiability analysis; its effectiveness is shown in the identifiability study of a parameter combination. In [89], local conditions of parametric identifiability are obtained for various variants of measurement of experimental data, and joint observability and identifiability conditions of a linear stationary system are considered. A critical analysis of the approaches used to estimate the identifiability of biological models is given in [91], where methods based on Taylor series expansion, identifiability tables, and differential algebra are considered. An analysis of practical identifiability (PRI) is given in [92]. PRI is based on the analysis of experimental information and the application of differential algebra; it relies on the least squares method, simulation results, and the sensitivity analysis of the model to parameter estimates. The approach is applied to biology tasks.
In [93], identifiability issues of a model described by a system of simultaneous equations are considered. The concepts of observationally equivalent systems with structure S and of an identifiable parameter in the S-structure of the system are introduced. The rank of the state matrix is the identifiability condition.
So, model identifiability is understood as the possibility of evaluating its parameters. The proposed methods are based on evaluating the non-degeneracy of the information matrix; similar results are obtained in parametric estimation theory. Checking the non-degeneracy (rank completeness) of the matrix is based on ensuring constancy of excitation for the input and output of the system. As a rule, the model structure is set a priori; therefore, it is not always clear how structural local identifiability should be understood. The structure concept is widely used in the assessment of identifiability. Nonlinear system identifiability is reduced to a parametric identifiability problem through various linearization methods. This approach does not consider the structural identifiability problem and does not answer the question: how can one decide about the nonlinear system structure under uncertainty? The task has not been set in this form. The answer to this question is based on the VF analysis and the h-identifiability concept [94].
Consider the system
\dot{X} = A X + B_{\varphi} \varphi(y) + B_u u, \quad y = C^T X,   (17)

where u, y are the input and output; A \in R^{q \times q}, B_u \in R^q, B_{\varphi} \in R^q, C \in R^q; \varphi(y) is a scalar nonlinear function; A is a Hurwitz matrix.
The information set is

I_o = \{u(t), y(t), \; t \in J = [t_0, t_k]\}.   (18)

Problem: estimate the structural identifiability (IS) of system (17) based on the analysis and processing of I_o.
Remark 3. It is shown in [57], the problems of structural identification and structural identifiability for nonlinear systems interrelate. Structural identification follows from structural identifiability.
The IS estimation bases on the framework S e y analysis that reflects properties of the nonlinear part (17). The constructing S e y method describes in [94,95]. The analysis S e y related to the IS problem solution for the system (17). To distinguish the approach described below from IPI-identifiability, we use the term h -identifiability (HI) below.
Definition 6. Call the input u ( t ) representative if:
  • The set I o provides a solution to the parametric identification problem.
  • Input u(t) provides an informative framework S_ey on the set I_{N,g}.
If u(t) is representative, then S_ey is closed. Denote the height of S_ey by h(S_ey), where the height is the distance between two points on opposite sides of the framework S_ey. Let I_{N,g} be a set for making a decision about the system structure; its construction method is described in [94,95].
Statement 1 [96]. Let (1) the linear part of system (17) be stable, and the nonlinearity φ(·) belong to the sector [k_0, k_1]; (2) the input u(t) be bounded, piecewise continuous, and constantly excited; (3) there exist δ_S > 0 such that h(S_ey) ≥ δ_S. Then the framework S_ey is identifiable on the set I_{N,g}.
Definition 7. An S_ey-framework satisfying the conditions of Statement 1 is HI.
The features of the h-identifiability concept are considered in [94].
Remark 4. Not every input satisfying the excitation constancy condition guarantees the SI of the system. In particular, the input can give a so-called "insignificant" S_ey-framework (NS_ey-framework) [94].
The decision on SI (structural identifiability) is based on checking the S-synchronizability condition of the system [57]. The verification rests on the analysis of the structural properties of S_ey.
Definition 8. An input u(t) ∈ U is called S-synchronizing for system (17) if the definition domain D_y of the framework S_ey has the maximum diameter on the set {y(t), t ∈ J}.
S-synchronizability guarantees the structural identifiability (SI), or h_δh-identifiability, of system (17).
Definition 8 shows that if system (17) is h_δh-identifiable, then the framework S_ey has the maximum diameter of the domain D_y. Criteria for verifying the h_δh-identifiability of system (17) are presented in [94,95,116,117].
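The geometric quantities involved here, the height h(S_ey) and the diameter of the domain D_y, admit a rough numerical interpretation when the framework is represented by sampled pairs (y, e). The sketch below is only our illustration of these notions (the actual construction of S_ey is given in [94,95]; the test signal is an assumption):

```python
import numpy as np

def framework_height_and_diameter(y, e):
    """Crude estimates for a framework given as sampled pairs (y, e):
    height   -- largest vertical spread of e over narrow bins of y,
                i.e. the distance between opposite sides of the framework;
    diameter -- length of the definition domain D_y = [min y, max y].
    """
    y = np.asarray(y, dtype=float)
    e = np.asarray(e, dtype=float)
    edges = np.linspace(y.min(), y.max(), 21)        # 20 bins over D_y
    idx = np.clip(np.digitize(y, edges) - 1, 0, 19)
    spreads = []
    for k in range(20):
        ek = e[idx == k]
        if ek.size:
            spreads.append(ek.max() - ek.min())
    return max(spreads), y.max() - y.min()

# A saturation nonlinearity traversed up and down gives a closed,
# loop-like framework with nonzero height.
t = np.linspace(0.0, 2.0 * np.pi, 400)
y = np.sin(t)
e = np.tanh(2.0 * y) + 0.2 * np.cos(t)
h, d = framework_height_and_diameter(y, e)
```

A height bounded away from zero, h ≥ δ_S, is exactly the kind of condition Statement 1 places on S_ey.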

10. Identification and identifiability of Lyapunov exponents

Lyapunov exponents (LE) are widely used to analyse the qualitative behaviour of dynamical systems. LEs give behaviour estimates of systems and processes in physics [97], medicine [98], economics [99], and astronomy [100]. Most often, LEs are determined from time series analysis, under the assumption that a priori information about the system structure is known. Conclusions about the structural features of the system are drawn from LE analysis. The emergence of systems with changing structural properties gave an impetus to the development of research on LEs. The main focus is on calculating the largest (highest, maximum, first) LE (LLE).
An overview of LLE calculation for various classes of systems is presented in [101]. An LE estimation algorithm for an unknown dynamical system is proposed in [102]; it calculates all LEs.
Various algorithms are applied to calculate the LLE of non-stationary systems from experimental data. Their application is based on the Takens theorem [103]. F. Takens showed that the system phase portrait (attractor) can be restored (reconstructed) from a single time series (experimental data). Therefore, the theorem is the basis for calculating various indicators of a dynamic system, the LE being one of them. LLE estimates are obtained using the Wolf [104] and Rosenstein [105] methods (RWM), which many authors generalize and develop. In [106], the LLE is calculated on the basis of logarithm and interpolation of a time series. It is shown that the best results for stationary systems are obtained by the Rosenstein method and the interpolation algorithm, and for non-stationary systems by the interpolation algorithm. A model [106] containing the product of an exponent and a phase-shifted sine wave is used to compensate for the non-stationary component in the data. This procedure is not applicable to LE identification of non-stationary systems, since it removes a valuable information layer. Note that the Rosenstein method [105] is a time-consuming procedure associated with the selection and refinement of system parameters. In [107], a neural network algorithm based on a multilayer perceptron is proposed for LLE estimation.
Two main methods are used to evaluate Lyapunov exponents from a time series [108]. Both procedures rely on an attractor previously reconstructed by the Takens method. The first method [104] selects two close trajectories in the reconstructed phase space and tracks their behaviour over a certain time interval (the Benettin algorithm [109]). The Lyapunov exponent spectrum (LES) is evaluated similarly to the LE estimation from the original system of equations together with the equations in variations. The main advantage of this method is its relative simplicity; the disadvantage is the difficulty of identifying the full LES.
When two close trajectories are considered, the determining role is played by the LLE. The second method [110,111] is based on calculating the Jacobian, since the LEs can be defined as the eigenvalues of the Jacobi matrix of the system that generated the time series. The advantage of this method is the estimation of non-negative LEs from short time series; the disadvantage is high sensitivity to noise and errors.
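The Jacobian-based idea is easy to illustrate on a one-dimensional map, where the LE is the time average of ln|f′(x_n)| along a trajectory (a sketch of the principle only, not one of the cited algorithms):

```python
import math

def lyapunov_logistic(r=4.0, x0=0.3, n=100_000, burn=1_000):
    """LE of the logistic map x -> r*x*(1-x) by the Jacobian method:
    the time average of ln|f'(x_n)| with f'(x) = r*(1 - 2*x)."""
    x = x0
    for _ in range(burn):            # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n

le = lyapunov_logistic()             # theory gives ln 2 for r = 4
```

For r = 4 the chaotic logistic map has the known exponent ln 2 ≈ 0.693, so the averaged derivative log recovers a positive LE without any model of the map being assumed by the estimator itself.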
The applicability of the Takens theorem depends on the properties of the time series [112]. These properties affect the effectiveness of the criteria used to evaluate the attractor, which explains the implementation complexity of LE identification methods.
So, modifications of the Rosenstein, Benettin, and Wolf methods and the Takens theorem are widely used for LE identification of stationary systems. The properties of the time series describing the system variables affect the accuracy of the obtained LE estimates. Various modifications take a priori information into account to simplify the use of these methods. As a rule, the methods give an estimate of the LLE. Non-stationary systems (NSS) have their own peculiarity [113]: in particular, they contain a spectrum of Lyapunov exponents. Therefore, further modification of the approaches and methods discussed above is required for NSS. Criteria for verifying the obtained solutions are not always offered.
An approach to LE identification based on VF analysis is proposed in [114,115]. Frameworks describe the LE dynamics of a stationary dynamical system under uncertainty. The essence of the proposed approach is as follows. Consider the system
Ẋ = A X + B u,   y = Cᵀ X,   (18)
where X ∈ ℝ^m is the state vector; u, y are the input and output of the system; A ∈ ℝ^{m×m}, B ∈ ℝ^m, C ∈ ℝ^m; A is a Hurwitz matrix.
Analyze the information set
I_o = {y(t), u(t), t ∈ J = [t_0, t_1]}
and obtain the vector X̂_g(t) = [ŷ_g(t)  ŷ̇_g(t)]ᵀ, where ŷ_g(t), ŷ̇_g(t) are estimates of the free motion of system (18) by the output and its derivative.
Apply the following formulas to determine the LE [110]:
χ[ŷ_g] = limsup_{t→t̄} ln|ŷ_g(t)| / t,   χ[ŷ̇_g] = limsup_{t→t̄} ln|ŷ̇_g(t)| / t,   (19)
where limsup denotes the limit superior and t̄ ∈ J_g is the maximum value (upper bound) of t on the interval J_g ⊆ J.
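The exponent formulas above can be checked on a signal with a known exponent: for ŷ_g(t) = C e^{λt} the ratio ln|ŷ_g(t)|/t tends to λ as t grows. A minimal numerical check (the test signal and names are ours):

```python
import math

def upper_exponent(y_value, t):
    """ln|y(t)| / t evaluated at the largest available time t."""
    return math.log(abs(y_value)) / t

lam = -0.5                           # exponent of the test signal
t_bar = 100.0                        # upper bound of the interval J_g
y = 2.0 * math.exp(lam * t_bar)      # y(t) = 2 e^{lam t}
chi = upper_exponent(y, t_bar)       # -0.5 + ln(2)/t_bar
```

The bias ln C / t̄ vanishes as the observation interval grows, which is why the limit superior appears in the definition.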
Introduce, for t ∈ J̄_g ⊆ J_g, the functions
ρ(ŷ_g) = ρ_g = ln|ŷ_g(t)|,   k_s(t, ρ) = ρ(ŷ_g)/t,
where J̄_g = [t_0, t̄] is determined in accordance with (19).
Consider the sets
I_ks = {k_s(t, ρ(ŷ_g(t))), t ∈ J̄_g},   I′_ks = {k_s(t, ρ(ŷ̇_g(t))), t ∈ J̄_g},
introduce the representation S_ks,ρ ⊆ I_ks × I′_ks and a function describing the change of the first difference of k_s(t, ρ(ŷ̇_g(t))) on the set I′_ks:
Δk_s(t, ρ) = k_s(t, ρ(ŷ̇_g(t))) − k_s(t − τ, ρ(ŷ̇_g(t − τ))),
where τ > 0 .
Form the set I_Δks = {Δk_s(t, ρ(ŷ̇_g(t))), t ∈ J̄_g} and introduce the mapping (framework) S_KΔks,ρ ⊆ I_ks,ρ × I_Δks,ρ. Consider the transformation L: S_KΔks,ρ → I_ks,ρ × B(I_Δks,ρ) for the framework S_KΔks,ρ, where B(I_Δks,ρ) ⊆ {−1; 1}. The elements of the binary set are
b(t) = 1 if Δk_s(t) ≥ 0, and b(t) = −1 if Δk_s(t) < 0, t ∈ J̄_g.
Theorem 1 [115]. The system (18) has order m if the function b(t) changes sign m − 1 times on the interval [t_0, t*] ⊆ J̄_g, t* ≤ t̄.
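Theorem 1 reduces order estimation to counting sign changes of b(t). The counting step itself is elementary (a direct sketch; the construction of b(t) follows the binary function above):

```python
def order_from_sign_changes(b):
    """System order per Theorem 1: m equals the number of sign changes
    of the binary function b(t) plus one.

    b is a sequence of +1/-1 values sampled on [t0, t*]."""
    changes = sum(1 for u, v in zip(b, b[1:]) if u != v)
    return changes + 1

# One sign change corresponds to a second-order system.
m = order_from_sign_changes([1, 1, 1, -1, -1])
```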
Form the set I^(i)_ks = {k_s(t, ρ(ŷ_g^(i)(t))), t ∈ J̄_g}, where i denotes the i-th derivative of ŷ_g(t), and introduce the mapping S^(i)_Kks,ρ ⊆ I_ks × I^(i)_ks. This yields the spectrum structure of the eigenvalues of matrix A of system (18) [113]. The lower Lyapunov exponents are Perron exponents (PE). Algorithms for calculating PE are given in [114].
An important problem related to LE structural identification is the ability to detect and identify LEs. This problem had not previously been raised or discussed. The problem formulation and methods for its solution were first proposed for linear and non-stationary systems in [57,116,117].

11. Identification of parametric constraints in static systems under uncertainty

Parametric constraints (PC) are the basis for creating effective identification and control systems. In identification systems, accounting for constraints reflects the actual operating conditions and guarantees the use of robust parametric estimation algorithms. Therefore, PC are a structural element of the system.
Many authors study the application of parametric constraints in identification systems. The generalized method of moments (GMM) [118] is used to design a model for estimating the main asset. Two kinds of restrictions are imposed: on the current value of a variable and on its maximum value over a certain interval. The moment constraint is set a priori. GMM [119,120,121] is used to determine the relationship between PC and parameter stability; the constraint is a moment equality. In [122], the identification problem of an electromagnetic process is studied, considering physical limitations on the parameters. The problem is solved by the Levenberg-Marquardt iterative algorithm. Constraints are set a priori as lower and upper bounds on the parameters, and the problem of correcting the constraint boundaries is considered. The parametric identification problem for a dynamic object with ellipsoidal parametric constraints is studied in [123,124], and a heuristic algorithm for its solution is proposed. As shown in [123], there is some indefinite quadratic constraint that depends on the level of uncontrolled noise in the finite data; the constraints are set a priori. The parameter estimation of a polynomial model, based on an ellipsoidal algorithm, is solved in [125]; process analysis is the basis for setting the constraints on parameters, and a heuristic algorithm is used. A static object identification is considered in [126], where the domain of parametric constraints is set a priori. The influence of an a priori defined PC area on the properties of a model describing electrical muscle stimulation is given in [127]. It is noted that ignoring PC affects the results of parametric estimation; the case of constraint correction is considered, and it is shown that accounting for constraints improves the predictive properties of the model. An identification algorithm for a dynamic object with a priori set PC in the equality form is proposed in [128].
The identification algorithm is based on the variational method of optimizing the Hamilton function. In [129], an identification procedure for a system with feedback is proposed; the procedure improves parameter estimates through the use of constraints, and it is shown how to form constraints from a priori data about the object. The problem of using a priori knowledge about an object is considered in [130], where a priori information is presented as constraints for a nonlinear system. In [131], the parameter estimation problem is studied on the basis of quadratic constraints; physical assumptions and the regularization method are used to form the constraints, identification algorithms are proposed, and their implementation is shown. In [132], the PC estimation problem for vibration damping in nonlinear control systems is considered. An analytical method is used to determine the constraints, based on the selection of characteristic polynomial roots for a closed-loop system of the 4th and 5th orders; the system model is set a priori.
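A common way to account for a priori box constraints of the kind used in [122,127] is to project the estimate back onto the admissible domain at every iteration. A minimal projected-gradient sketch for linear least squares with lower and upper parameter bounds (our illustration, not one of the cited algorithms):

```python
import numpy as np

def constrained_lsq(phi, y, lb, ub, steps=500):
    """Minimize ||phi @ theta - y||^2 subject to lb <= theta <= ub
    by projected gradient descent."""
    theta = np.clip(np.zeros(phi.shape[1]), lb, ub)
    # step size 1/L, where L = 2*||phi^T phi|| bounds the Hessian
    eta = 1.0 / (2.0 * np.linalg.norm(phi.T @ phi, 2))
    for _ in range(steps):
        grad = 2.0 * phi.T @ (phi @ theta - y)
        theta = np.clip(theta - eta * grad, lb, ub)   # project onto the box
    return theta

rng = np.random.default_rng(0)
phi = rng.normal(size=(100, 2))
y = phi @ np.array([1.5, -0.3])          # "true" parameters lie outside the box
lb, ub = np.zeros(2), np.ones(2)
theta = constrained_lsq(phi, y, lb, ub)  # the estimate lands on the boundary
```

When the unconstrained optimum violates the a priori bounds, the constrained estimate sticks to the boundary of the admissible domain, which is exactly why ignoring PC changes the parametric estimation results.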
Some approaches to obtaining PC are considered in [26,58]. A method for obtaining PC for a dynamic linear system is described. It is based on the analysis of the observed information portrait. The domain of parametric constraints (PCD) is interpreted as an upper-bound inequality on the parameter vector norm, and its assessment is based on the majorant model.
The analysis shows that PC are often used in the synthesis of identification systems. Often, the PCD is set a priori, and its correction algorithms have a heuristic form. The PCD construction task has not been considered under uncertainty. There is a class of complex objects for which the PCD cannot be set a priori; for such systems, the solution of this problem is relevant.
In [26,133], a PCD construction approach is proposed for static systems under uncertainty. Various approaches to describing the PCD are considered; they are based on the dominance concept and the analysis of special structures for linear static systems. Various methods for solving the PCD synthesis problem are proposed in [26,134]: (a) verification of the dominance condition and application of a criterion based on evaluating the average value of variables; (b) adjustment of the model parameter vector in combination with approach (a); (c) a finitely convergent algorithm for determining the PCD parameters. Majorizing estimates are obtained for the PCD. If a disturbance acts, then the concept of an acceptable dominance level is the basis for decision-making about the PCD.

12. Approaches to choosing model structure

In identification theory, approaches based on parametrization are widely used to select the model structure. The implementation of this methodology is based on various approximation schemes [135,136,137,138,139]. The choice of the autoregressive model structure is based on Volterra series and the group method of data handling [136]. Combined schemes [6,140] are used to increase decision-making efficiency.
The review [7] contains an analysis of methods for selecting the model structure based on training and examination signals; decision-making relies on various criteria. Note that the applied approaches use a priori information. An iterative approach to the design of forecasting and control models is proposed in [2]; the algorithm implements structural identification on a given class of models. The problem of estimating the structure of an autoregressive model on a defined class is considered in [6], together with various criteria for selecting the model class and rules for testing hypotheses. The curve linearization method [141] is used to select the structure of regression models, and the method of linearizing a static dependence in the structural space is proposed in [24,25]. Other approaches to the structural identification of linear dynamical systems are considered in [7]. The VF (S_ey-framework) analysis method [18,79,94,95,114,115,142] is the most adequate approach to choosing the model structure.
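The criterion-based selection mentioned above (see also [6,45]) can be sketched for an autoregressive model: fit AR(k) by least squares for several orders and keep the order minimizing an information criterion. The sketch below uses BIC and a synthetic AR(2) series (our assumptions, not a method from the cited works):

```python
import numpy as np

def ar_fit_rss(x, k):
    """Least-squares AR(k) fit; returns the residual sum of squares."""
    n = len(x)
    phi = np.column_stack([x[k - j - 1:n - j - 1] for j in range(k)])
    theta, *_ = np.linalg.lstsq(phi, x[k:], rcond=None)
    r = x[k:] - phi @ theta
    return float(r @ r)

def select_order(x, k_max=6):
    """Order minimizing BIC = n*ln(RSS/n) + k*ln(n)."""
    n = len(x)
    bic = [n * np.log(ar_fit_rss(x, k) / n) + k * np.log(n)
           for k in range(1, k_max + 1)]
    return int(np.argmin(bic)) + 1

# Synthetic series from x_t = 1.5 x_{t-1} - 0.7 x_{t-2} + e_t
rng = np.random.default_rng(1)
x = np.zeros(600)
for t in range(2, 600):
    x[t] = 1.5 * x[t - 1] - 0.7 * x[t - 2] + 0.1 * rng.normal()
best = select_order(x[100:])                 # drop the transient
```

With pronounced AR(2) dynamics the criterion recovers the true order, up to occasional overselection by one; this dependence on noise and sample size is the practical weakness of purely criterion-based structure selection.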

13. Conclusion

Various aspects of structural identification have been considered. We have shown that the problem is multifaceted and includes various directions. There are many approaches to solving particular aspects of the structural identification problem; most of them are based on a priori information, and indirect approaches form the basis for evaluating the model structure. This makes it difficult to design a common approach to solving the problem.

References

  1. Ljung, L. System identification: Theory for the User. Prentice Hall PTR, 1999.
  2. Box, G.E.P.; Jenkins, G.M. Time series analysis: Forecasting and control. Holden-Day, 1976.
  3. Eykhoff P. Fundamentals of identification of control systems. John Wiley and Sons Ltd, 1974.
  4. Raibman N.S., Chadeev V.M. Construction of models of production processes. Moscow: Energiya, 1975.
  5. Graupe D. Identification of systems. Litton Educational Co., 1975.
  6. Kashyap R. Rao A. Dynamic stochastic models from empirical data. New York: Academic Press, 1976.
  7. Perelman, I.I. Model structure selection Methodology in control objects identification. Automation and telemechanics. 1983, 11, 5–29. [Google Scholar]
  8. Raibman, N.S. Identification of control objects (Review). Automation and telemechanics. 1978, 6, 80–93. [Google Scholar]
  9. Giannakis, G.B.; Serpedin, E. A bibliography on nonlinear system identification. Signal Process. 2001, 81, 533–580. [Google Scholar] [CrossRef]
  10. Ljung L. System Identification Toolbox User’s Guide. Computation. Visualization. Programming. Version 5. The MathWorks Inc., 2000.
  11. Greene B.R. The fabric of the cosmos: space, time and the texture of reality. New York: Random House, Inc., 2004.
  12. New Philosophical Encyclopedia: In 4 volumes / Edited by V.S. Stepin. Moscow: Mysl, 2001.
  13. Mathematical Encyclopedia / Edited by I. M. Vinogradov, vol. 2, D–Koo. Moscow: Soviet Encyclopedia, 1979.
  14. Besekersky V.A., Popov E.P. Theory of automatic control systems. Third edition, revised. Moscow: Nauka, 1975.
  15. Aho A.V., Hopkroft V., Ulman D.D. Data structures and algorithms. Addison-Wesley, 2004.
  16. Van Gig J. Applied general systems theory. Harper & Row, 1978.
  17. Modern Identification Methods. Ed. by P. Eykhoff. John Wiley and Sons 1974.
  18. Karabutov, N.N. About structures of state systems identification of static object with hysteresis. International journal sensing, computing and control. 2012, 2, 59–69. [Google Scholar]
  19. Koh C. G., Perry M.J. Structural identification and damage detection using genetic algorithms. CRC Press, 2010.
  20. Kerschen, G.; Worden, K.; Vakakis, A.F.; Golinval, J.-C. Past, present and future of nonlinear system identification in structural dynamics. Mech. Syst. Signal Process. 2006, 20, 505–592. [Google Scholar] [CrossRef]
  21. Sirca, G.F., Jr.; Adeli, H. System identification in structural engineering. Sci. Iran. 2012, 19, 1355–1364. [Google Scholar] [CrossRef]
  22. Karabutov, N. Geometrical Frameworks in Identification Problem. Intell. Control. Autom. 2021, 12, 17–43. [Google Scholar] [CrossRef]
  23. Isermann R., Münchhof M. Identification of dynamic systems. Springer-Verlag Berlin Heidelberg. 2011. ISBN 978-3-540-78878-2.
  24. Kuntsevich V.M., Lychak M.M. Synthesis of optimal and adaptive control systems: A game approach. Kiev: Naukova dumka, 1985.
  25. Karabutov N.N. Structural identification of systems: Analysis of information structures. Мoscow: URSS, 2016.
  26. Karabutov N.N. Structural identification of static objects: Fields, structures, methods. Moscow: URSS, 2016.
  27. Karabutov, N. About structures of state systems identification of static object with hysteresis. Int. J. Sensing, Computing and Control. 2012, 2, 59–69. [Google Scholar]
  28. Mosteller F., Tukey J.W. Data Analysis and Regression: A Second Course in Statistics. Addison-Wesley. 197.
  29. Boguslavsky I.A. Polynomial approximation for nonlinear estimation and control problems. Moscow: Fizmatlit, 2006.
  30. Johnston J. Econometric Methods. McGraw-Hill, 197.
  31. Draper N.R., Smith H. Applied Regression Analysis. John Wiley & Sons. 2014.
  32. Proceedings of the IX International Conference “System Identification and Control Problems” SICPRO’12, Moscow, January 28-31, 2008. Moscow: V.A. Trapeznikov Institute of Control Sciences, 2012.
  33. Britenkov A.K., Dedus F.F. Prediction of time sequences using a generalized spectral-analytical method. Mathematical modelling. Optimal control. Bulletin of the Nizhny Novgorod University named after N.I. Lobachevsky. 2012; 28-32.
  34. Marple Jr S. L. Digital spectral analysis: With applications. Prentice-Hall, 1987.
  35. Prokhorov S.A., Grafkin V.V. Structural and spectral analysis of random processes. Moscow: SNC RAS, 2010.
  36. Woodside, С.М. Estimation of the order of linear systems. Automatica 1971, 7, 727–733. [Google Scholar] [CrossRef]
  37. Lee R. Optimal estimation, identification, and control. MIT Press, 1964.
  38. Jenkins G.M., Watts D.G. Spectral analysis and its applications. Holden-Day, 1969.
  39. Goldenberg L., Matiushkin B.D., Poliak M. Digital signal processing: Handbook. Moscow: Radio and Communications, 1990.
  40. Berryman, J.G. Choice of operator length for maximum entropy spectral analysis. Geophysics 1978, 43, 1384–1391. [Google Scholar] [CrossRef]
  41. Jones, R.H. Autoregression order selection. Geophysics 1976, 41, 771–773. [Google Scholar] [CrossRef]
  42. Kay S.M. Modern spectral estimation: Theory and application. N. J.: Prentice-Hall, Inc., Englewood Cliffs. 1999. 543 p.
  43. Kashyap, R.L. Inconsistency of the aic rule for estimation the order of autoregressive models. IEEE Trans. Autom. Control 1980, AC-25, 996–998. [Google Scholar] [CrossRef]
  44. Hastie T., Tibshirani R., Friedman J. The elements of statistical learning. Springer, 2001.
  45. Stoica, P.; Selen, Y. Model-order selection: a review of information criterion rules. Signal Processing Magazine. 2004, 21, 36–47. [Google Scholar] [CrossRef]
  46. Wagenmakers, E.-J. A practical solution to the pervasive problems of p values. Psychonomic Bulletin and Review. 2007, 14, 779–804. [Google Scholar] [CrossRef]
  47. Hannan, E.J.; Quinn, B.G. The Determination of the Order of an Autoregression. J. R. Stat. Soc. Ser. B (Methodological) 1979, 41, 190–195. [Google Scholar] [CrossRef]
  48. Dubrovsky A.M., Mkhitaryan V.S., Troshin L.I. Multidimensional statistical methods. M.: Finance and Statistics 200.
  49. Hamilton J.D. Time Series Analysis. Princeton University Press, 1994. 813 p.
  50. Malinvaud E. Statistical methods in econometrics. 3d ed. Amsterdam: North-Holland Publishing Co, 1980.
  51. Almon, S. The Distributed Lag Between Capital Appropriations and Expenditures. Econometrica 1965, 33, 178. [Google Scholar] [CrossRef]
  52. Karabutov, N. Structures, Fields and Methods of Identification of Nonlinear Static Systems in the Conditions of Uncertainty. Intell. Control. Autom. 2010, 1, 59–67. [Google Scholar] [CrossRef]
  53. Mehra, R.K. Optimal input signals for parameter estimation in dynamic systems. A survey and new results. IEEE Trans. Automatic Control 1974, AC-19, 753–768. [Google Scholar] [CrossRef]
  54. Soderstrom, T. Comments on “Order assumption and singularity of information matrix for pulse transfer function models”. IEEE Trans. Autom. Control 1975, 20, 445–447. [Google Scholar] [CrossRef]
  55. Stoica, P.; Söderström, T. On non-singular information matrices and local identifiability. Int. J. Control. 1982, 36, 323–329. [Google Scholar] [CrossRef]
  56. Young, P.С.; Jakeman, A.J.; McMurtrie, R. An instrumental variable method for model order identification. Automatica. 1980, 16, 281–294. [Google Scholar] [CrossRef]
  57. Karabutov N.N. Introduction to the structural identifiability of nonlinear systems. Moscow: URSS/LENAND, 2021.
  58. Karabutov N.N. Adaptive identification of systems: Information synthesis. Moscow: URSS, 2016.
  59. Raibman, N.S.; Terekhin, A.T. Dispersion methods of random functions and their application for the study of nonlinear control objects. Automation Telemechanics 1965, 26, 500–509. [Google Scholar]
  60. Billings S.A. Structure Detection and Model Validity Tests in the Identification of Nonlinear Systems. Research Report. ACSE Report 196. Department of Control Engineering, University of Sheffield, 1982. [CrossRef]
  61. Haber, R. Nonlinearity Tests for Dynamic Processes. IFAC Proc. Vol. 1985, 18, 409–414. [Google Scholar] [CrossRef]
  62. Hosseini, S.M.; Johansen, T.A.; Fatehi, A. Comparison of nonlinearity measures based on time series analysis for nonlinearity detection. Modeling Identification and Control 2011, 32, 123–140. [Google Scholar] [CrossRef]
  63. Malinvaud E. Méthodes statistiques de l’économétrie. Deuxième édition. London-Paris, 1969.
  64. Demetriou, I.C.; Vassiliou, E.E. An algorithm for distributed lag estimation subject to piecewise monotonic coefficients. International Journal of Applied Mathematics 2009, 39, 1–10. [Google Scholar]
  65. Dhrymes P.J. Distributed Lags: Problems of estimation and formulation. San Francisco: Holden-Day, 1971.
  66. Gershenfeld N. The Nature of Mathematical Modelling. Cambridge: Cambridge University Press, 1999.
  67. Linear Least-Squares Estimation, Stroudsburg / Ed. Kailath T. Pennsylvania: Dowden, Hutchinson and Ross, Inc., 1977.
  68. Armstrong, B. Models for the relationship between ambient temperature and daily mortality. Epidemiology. 2006, 17, 624–631. [Google Scholar] [CrossRef]
  69. Nelson, C.R.; Schwert, G.W. Estimating the parameters of a distributed lag model from cross-section data: The Case of hospital admissions and discharges. Journal of the American Statistical Association 1974, 69, 627–633. [Google Scholar] [CrossRef]
  70. Gasparrini, A.; Armstrong, B.; Kenward, M.G. Distributed lag non-linear models. Stat. Med. 2010, 29, 2224–2234. [Google Scholar] [CrossRef]
  71. Karabutov, N.; Moscow, R. System with Distributed Lag: Adaptive Identification and Prediction. Int. J. Intell. Syst. Appl. 2016, 8, 1–13. [Google Scholar] [CrossRef]
  72. Fisher, I. Note on a Short-cut Method for Calculating Distributed Lags. Bulletin de l’Institut International de Statistique 1937, 29. [Google Scholar]
  73. Koyck L.M. Distributed Lags and Investment Analysis. North-Holland Publishing Company, 1954.
  74. Solow, R. On a family of lag distributions. Econometrica. 1960, 28, 393–406. [Google Scholar] [CrossRef]
  75. Theil, H.; Stern, R.M. A simple unimodal lag distribution. Metroeconomica 1960, 12, 111–119. [Google Scholar] [CrossRef]
  76. Jorgenson, D.W. Minimum variance, linear, unbiased seasonal adjustment of economic time series. Journal of the American Statistical Association 1964, 59, 681–724. [Google Scholar] [CrossRef]
  77. Demetriou I.C., Vassiliou E.E. A distributed lag estimator with piecewise monotonic coefficients. Proceedings of the World Congress on Engineering 2008, V. 2, WCE 2008, July 2-4, 2008, London, U.K.
  78. Yoder J. Autoregressive distributed lag models. WSU Econometrics II. 2007;91-115.
  79. Karabutov, N. Structural identification of systems with distributed lag. International journal of intelligent systems and applications. 2013, 5, 1–10. [Google Scholar] [CrossRef]
  80. Karabutov, N. Structural Identification of Static Systems with Distributed Lags. Int. J. Control. Sci. Eng. 2012, 2, 136–142. [Google Scholar] [CrossRef]
  81. Kalman R., Falb P.L., Arbib M.A. Topics in mathematical system theory. McGraw-Hill, 1969.
  82. Aguirregabiria V., Mira P. Dynamic Discrete Choice Structural Models: A Survey. Working Paper 297. University of Toronto, 2007.
  83. Elgerd O.I. Control Systems Theory, New York: McGraw-Hill, 1967.
  84. Walter E. Identifiability of state space models. Berlin. Germany: Springer-Verlag. 1982. [CrossRef]
  85. Audoly, S.; D’Angio, L.; Saccomani, M.; Cobelli, C. Global identifiability of linear compartmental models-a computer algebra algorithm. IEEE Trans. Biomed. Eng. 1998, 45, 36–47. [Google Scholar] [CrossRef]
  86. Avdeenko, T.V. Identification of linear dynamical systems using concept of parametric space separators. Automation and Software Engineering 2013, 1, 16–23. [Google Scholar]
  87. Bodunov, N.A. Introduction to theory of local parametric identifiability. Differential Equations and Control Processes 2012, 1–137. [Google Scholar]
  88. Balonin, N.A. Identifiability theorems. St. Petersburg: Publishing house "Polytechnic", 2010.
  89. Handbook of the theory of automatic control. Edited by A. A. Krasovsky. Moscow: Nauka, 1987.
  90. Stigter J.D., Peeters R.L.M. On a geometric approach to the structural identifiability problem and its application in a water quality case study. Proceedings of the European Control Conference 2007 Kos, Greece, July 2-5, 2007. 2007; 3450-3456.
  91. Chis, O.-T.; Banga, J.R.; Balsa-Canto, E. Structural Identifiability of Systems Biology Models: A Critical Comparison of Methods. PLOS ONE 2011, 6, e27755. [Google Scholar] [CrossRef]
  92. Saccomani M.P., Thomaseth K. Structural vs practical identifiability of nonlinear differential equation models in systems biology. Bringing mathematics to life. In: Dynamics of mathematical models in biology. Ed. A. Rogato, V. Zazzu, M. Guarracino. Springer. 2010; 31-42.
  93. Ayvazyan S.A. (ed.), Enyukov I.S., Meshalkin L.D. Applied Statistics: Dependency Research. Reference edition, Moscow: Finansy i Statistika, 1985.
  94. Karabutov, N. Structural identification of dynamic systems with hysteresis. International journal of intelligent systems and applications. 2016, 8, 1–13. [Google Scholar] [CrossRef]
  95. Karabutov N. Structural methods of design identification systems. Nonlinearity problems, solutions and applications. V. 1. Ed. L.A. Uvarova, A. B. Nadykto, A.V. Latyshev. New York: Nova Science Publishers, Inc. 2017;233-274.
  96. Karabutov, N. Structural identification of nonlinear dynamic systems. International Journal of Intelligent Systems and Applications 2015, 7, 1–11. [Google Scholar] [CrossRef]
  97. Thamilmaran, K.; Senthilkumar, D.V.; Venkatesan, A.; Lakshmanan, M. Experimental realization of strange nonchaotic attractors in a quasiperiodically forced electronic circuit. Phys. Rev. E 2006, 74, 036205. [Google Scholar] [CrossRef] [PubMed]
  98. Porcher, R.; Thomas, G. Estimating Lyapunov exponents in biomedical time series. Phys. Rev. E 2001, 64, 010902. [Google Scholar] [CrossRef] [PubMed]
  99. Hołyst, J.A.; Urbanowicz, K. Chaos control in economical model by time-delayed feedback method. Physica A: Statistical Mechanics and Its Applications. 2000, 287, 587–598. [Google Scholar] [CrossRef]
  100. Macek, W.M.; Redaelli, S. Estimation of the entropy of the solar wind flow. Phys. Rev. E 2000, 62, 6496–6504. [Google Scholar] [CrossRef]
  101. Skokos, C. The Lyapunov Characteristic Exponents and Their Computation. Lect. Notes Phys. 2010, 790, 63–135. [Google Scholar]
  102. Gencay, R.; Dechert, W.D. An algorithm for the n Lyapunov exponents of an n-dimensional unknown dynamical system. Physica D. 1992, 59, 142–157. [Google Scholar] [CrossRef]
  103. Takens F. Detecting strange attractors in turbulence. Dynamical Systems and Turbulence. Lecture Notes in Mathematics /Eds D. A. Rand, L.-S. Young. Berlin: Springer-Verlag, 1980; 898;366–381.
  104. Wolf, A.; Swift, J.B.; Swinney, H.L.; Vastano, J.A. Determining Lyapunov exponents from a time series. Phys. D Nonlinear Phenom. 1985, 16, 285–317. [Google Scholar] [CrossRef]
  105. Rosenstein, M.T.; Collins, J.J.; De Luca, C.J. A practical method for calculating largest Lyapunov exponents from small data sets Source. Physica D. 1993, 65, 117–134. [Google Scholar] [CrossRef]
  106. Bespalov, A.V.; Polyakhov, N.D. Comparative analysis of methods for estimating first Lyapunov exponent. Modern Problems of Science and Education 2016, 6. [Google Scholar]
  107. Golovko V.A. Neural network methods of processing chaotic processes. In: Scientific session of MEPhI-2005. VII All-Russian Scientific and Technical Conference "Neuro-Informatics-2005": Lectures on neuroinformatics. Moscow: MIPhI, 2005; 43-88.
  108. Perederiy, Y.A. Method for calculation of lyapunov exponents spectrum from data series. Izvestiya VUZ Applied Nonlinear Dynamics 2012, 20, 99–104. [Google Scholar]
  109. Benettin, G.; Galgani, L.; Giorgilli, A.; Strelcyn, J.-M. Lyapunov characteristic exponents for smooth dynamical systems and for Hamiltonian systems; a method for computing all of them. Part. I: Theory. Pt. II: Numerical applications. Meccanica 1980, 15, 9–30. [Google Scholar] [CrossRef]
  110. Lyapunov A. M. General problem of motion stability. CRC Press, 1992.
  111. Dieci, L.; Russell, R.D.; Van Vleck, E.S. On the Compuation of Lyapunov Exponents for Continuous Dynamical Systems. SIAM J. Numer. Anal. 1997, 34, 402–423. [Google Scholar] [CrossRef]
  112. Filatov, V.V. Structural characteristics of anomalies of geophysical fields and their use in forecasting. Geophysics Geophysical Instrumentation 2013, 4, 34–41. [Google Scholar]
  113. Bylov, F., Vinograd, R.E., Grobman, D.M. and Nemytskii, V.V. Theory of Lyapunov exponents and its application to problems of stability. Moscow: Nauka, 1966.
114. Karabutov, N. Structural Methods of Estimation Lyapunov Exponents Linear Dynamic System. Int. J. Intell. Syst. Appl. 2015, 7, 1–11.
115. Karabutov, N.N. Structures in Identification Problems: Construction and Analysis. URSS: Moscow, 20.
116. Karabutov, N.N. Identifiability and Detectability of Lyapunov Exponents for Linear Dynamical Systems. Mekhatronika, Avtom. Upr. 2022, 23, 339–350.
117. Karabutov, N. Chapter 9: Identifiability and Detectability of Lyapunov Exponents in Robotics. In Design and Control Advances in Robotics; Mellal, M.A., Ed.; IGI Global, 2023; 152–174.
118. Gagliardini, P.; Gouriéroux, C.; Renault, E. Efficient Derivative Pricing by Extended Method of Moments. National Centre of Competence in Research, Financial Valuation and Risk Management, 2005.
119. Rossi, B. Optimal tests for nested model selection with underlying parameter instability. Econ. Theory 2005, 21, 962–990.
120. Giacomini, R.; Rossi, B. Model comparisons in unstable environments. ERID Working Paper 30, Duke University, 2009.
121. Magnusson, L.; Mavroeidis, S. Identification using stability restrictions. 2012. Available online: http://econ.sciences-po.fr/sites/default/files/SCident32s.pdf.
122. Bardsley, J.M. A Bound-Constrained Levenberg-Marquardt Algorithm for a Parameter Identification Problem in Electromagnetics. 2004. Available online: http://www.math.umt.edu/bardsley/papers/EMopt04.pdf.
123. Palanthandalam-Madapusi, H.J.; van Pelt, T.H.; Bernstein, D.S. Parameter consistency and quadratically constrained errors-in-variables least-squares identification. International Journal of Control 2010, 83, 862–877.
124. Van Pelt, T.H.; Bernstein, D.S. Quadratically constrained least squares identification. In Proceedings of the American Control Conference, Arlington, VA, June 25–27, 2001; 3684–3689.
125. Correa, M.; Aguirre, L.; Saldanha, R. Using steady-state prior knowledge to constrain parameter estimates in nonlinear system identification. IEEE Trans. Circuits Syst. I Regul. Pap. 2002, 49, 1376–1381.
126. Chadeev, V.M.; Gusev, S.S. Identification with restrictions. Determining a static plant parameters estimate. In Proceedings of the VII International Conference "System Identification and Control Problems" SICPRO '08, Moscow, January 28–31, 2008; V.A. Trapeznikov Institute of Control Sciences: Moscow, 2008; 261–269.
127. Chia, T.L.; Chow, P.C.; Chizeck, H.J. Recursive parameter identification of constrained systems: an application to electrically stimulated muscle. IEEE Trans. Biomed. Eng. 1991, 38, 429–442.
128. Shi, W.-M. Parameter estimation with constraints based on variational method. J. Mar. Sci. Appl. 2010, 9, 105–108.
129. Vanli, O.A.; Del Castillo, E. Closed-Loop System Identification for Small Samples with Constraints. Technometrics 2007, 49, 382–394.
130. Hametner, C.; Jakubek, S. Nonlinear Identification with Local Model Networks Using GTLS Techniques and Equality Constraints. IEEE Trans. Neural Networks 2011, 22, 1406–1418.
131. Mead, J.L.; Renaut, R.A. Least squares problems with inequality constraints as quadratic constraints. Linear Algebra and Its Applications 2010, 432, 1936–1949.
132. Mazunin, V.P.; Dvoinikov, D.A. Parametric constraints in nonlinear control systems of mechanisms with elasticity. Electrical Engineering 2010, 5, 9–13.
133. Karabutov, N. Identification of parametrical restrictions in static systems in conditions of uncertainty. International Journal of Intelligent Systems and Applications 2013, 5, 43–54.
134. Karabutov, N.N. Structural identification of a static object by processing measurement data. Meas. Tech. 2009, 52, 7–15.
135. Gabor, D.; Wilby, W.P.L.; Woodcock, R.A. A universal nonlinear filter, predictor and simulator which optimizes itself by learning processes. Proceedings of the IEE – Part B: Electronic and Communication Engineering 1961, 108, 422–438.
136. Ivakhnenko, A.G. Long-Term Forecasting and Management of Complex Systems. Tekhnika: Kiev, 1975.
137. Goodman, T.P.; Reswick, J.B. Determination of system characteristics from normal operating records. Trans. ASME 1956, 2, 259–271.
138. Parzen, E. Some recent advances in time series modelling. IEEE Trans. Automat. Control 1974, AC-19, 723–730.
139. Graupe, D.; Cline, W.K. Derivation of ARMA parameters and orders from pure AR models. Int. J. Syst. Sci. 1975, 10, 101–106.
140. Isermann, R.; Baur, U.; Bamberger, W.; Kneppo, P.; Siebert, H. Comparison of six on-line identification and parameter estimation methods. Automatica 1974, 10, 81–103.
141. Mosteller, F.; Tukey, J.W. Data Analysis and Regression: A Second Course in Statistics, 1st ed.; Pearson, 1977.
142. Karabutov, N.N. Frameworks application for estimation of Lyapunov exponents for systems with periodic coefficients. Mekhatronika, Avtomatizatsiya, Upravlenie 2020, 21, 3–13. (In Russian)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.