Version 1: Received: 24 June 2024 / Approved: 24 June 2024 / Online: 24 June 2024 (12:04:10 CEST)
How to cite:
Gao, G.; Jardin, P.; Rinderknecht, S. Linear Quadratic Tracking Control of Car-in-the-Loop Test Bench using Model learnt via Bayesian Optimization. Preprints 2024, 2024061681. https://doi.org/10.20944/preprints202406.1681.v1
APA Style
Gao, G., Jardin, P., & Rinderknecht, S. (2024). Linear Quadratic Tracking Control of Car-in-the-Loop Test Bench using Model learnt via Bayesian Optimization. Preprints. https://doi.org/10.20944/preprints202406.1681.v1
Chicago/Turabian Style
Gao, G., Philippe Jardin and Stephan Rinderknecht. 2024. "Linear Quadratic Tracking Control of Car-in-the-Loop Test Bench using Model learnt via Bayesian Optimization." Preprints. https://doi.org/10.20944/preprints202406.1681.v1
Abstract
In this paper, we introduce a control method for the linear quadratic tracking (LQT) problem that achieves zero steady-state error. This is accomplished by augmenting the original system with an additional state representing the integrated error between the reference and actual outputs. In essence, the method is a linear quadratic integral (LQI) control embedded in a general LQT control framework, with the reference trajectory generated by a linear exogenous system. In the simulation-based implementation for a specific real-world system, the Car-in-the-Loop (CiL) test bench, we assume that the 'real' system is completely known, so the model-based controller can be designed with a perfect model identical to the 'real' system. It turns out that stable solutions can scarcely be achieved with a controller designed using this perfect model. In contrast, we show that a model learnt via Bayesian Optimization (BO) facilitates a considerably larger set of stable controllers and exhibits improved control performance. To the best of the authors' knowledge, this finding is the first of its kind in the LQT-related literature.
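The integrator augmentation described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: a placeholder second-order plant is extended with an integrator state z = ∫(r − y) dt, and a standard LQR gain is computed for the augmented system, which is the core of an LQI design. All matrices and weights below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative plant x' = A x + B u, y = C x (placeholder, not the CiL model)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Augmented dynamics: [x; z]' = Aa [x; z] + Ba u, with z' = r - C x
Aa = np.block([[A, np.zeros((2, 1))],
               [-C, np.zeros((1, 1))]])
Ba = np.vstack([B, np.zeros((1, 1))])

# LQR weights; weighting the integrator state drives the tracking error to zero
Q = np.diag([1.0, 1.0, 10.0])
R = np.array([[1.0]])

# Solve the continuous-time algebraic Riccati equation and form the gain
P = solve_continuous_are(Aa, Ba, Q, R)
K = np.linalg.solve(R, Ba.T @ P)   # state feedback u = -K [x; z]

# The augmented closed loop is Hurwitz (all eigenvalues in the left half-plane)
closed_loop_eigs = np.linalg.eigvals(Aa - Ba @ K)
print(np.all(closed_loop_eigs.real < 0))
```

Because the integrator state z only settles when y = r, any stabilizing gain of this form yields zero steady-state tracking error for constant references, which is the property the paper's LQI-in-LQT construction generalizes to trajectories from a linear exogenous system.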
Keywords
Bayesian Optimization; Linear quadratic tracking; zero steady-state error; model learning; Car-in-the-Loop test bench
Subject
Engineering, Mechanical Engineering
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.