Control of Dynamical Systems on a Finite Interval of Time by Means of Stability Theory Methods
Igor M. Ananievski
The paper is devoted to stability-theory-based methods for designing feedback control of dynamical systems under uncertainty.
Many approaches to designing control for dynamical systems with uncertain parameters are based on stability theory
and consist in constructing control regimes that ensure asymptotic stability of the desired motion (in particular, of the terminal state) of the system. In contrast to these approaches, we seek control laws that steer
a system to the terminal state in a finite time.
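As a scalar illustration of this distinction (an assumed example, not taken from the paper), a linear feedback brings the state to the origin only asymptotically, whereas a feedback whose gain grows unboundedly near the origin does so in finite time:

```latex
% Assumed scalar illustration, not from the paper:
\dot{x} = -x \;\Longrightarrow\; x(t) = x_0 e^{-t} \neq 0 \ \text{for every finite } t,
\qquad
\dot{x} = -\sqrt{|x|}\,\operatorname{sgn} x \;\Longrightarrow\; x(t) = 0 \ \text{for all } t \ge 2\sqrt{|x_0|}.
```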
In the present paper, two approaches to constructing feedback control algorithms are discussed. The first approach applies to linear systems, while the second has been developed for Lagrangian mechanical systems. Both approaches are based on the Lyapunov direct method and enable one to steer the system to a given terminal state
in a finite time under the assumption that the control variables are bounded and the system is subject to unknown perturbations. A distinctive feature of the investigation is that
in both cases the Lyapunov functions are defined implicitly.
The control algorithms under consideration employ linear feedback with gains that are functions of the phase variables. The gains increase and tend to infinity as the trajectory approaches the terminal state; nevertheless, the control forces remain bounded and satisfy the imposed constraint.
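The following sketch (a hypothetical scalar example, not one of the paper's algorithms) illustrates this structure numerically: the feedback is linear in the state with a gain k(x) = c/|x| that grows without bound as the trajectory approaches the terminal state x = 0, yet the applied control never exceeds the bound u_max, even in the presence of an unknown bounded perturbation.

```python
# Hypothetical scalar example (not the paper's algorithm): linear feedback
# u = -k(x) x with a state-dependent gain k(x) = c/|x| that tends to infinity
# as x -> 0, while the applied control stays within the bound |u| <= u_max.
import numpy as np

u_max = 1.0            # control bound
c = 0.8                # effective control magnitude, c <= u_max
dt, T = 1e-4, 5.0      # Euler step and time horizon (illustrative values)
x, t = 2.0, 0.0        # initial state and time
max_gain = max_u = 0.0

while t < T and abs(x) > 1e-3:
    w = 0.3 * np.sin(7.0 * t)                   # unknown bounded perturbation, |w| < c
    k = c / max(abs(x), 1e-12)                  # gain grows as the state approaches 0
    u = float(np.clip(-k * x, -u_max, u_max))   # bounded linear feedback
    x += dt * (u + w)                           # xdot = u + w
    t += dt
    max_gain, max_u = max(max_gain, k), max(max_u, abs(u))

print(f"t = {t:.2f} s, |x| = {abs(x):.1e}, max gain = {max_gain:.1e}, max |u| = {max_u:.2f}")
```

In the paper the gains arise from implicitly defined Lyapunov functions; here the growth law is chosen ad hoc, purely to exhibit the combination of an unbounded gain with a bounded control force.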
To compare the efficiency of the proposed control algorithms, a computer simulation of the controlled motion of a double pendulum in a neighbourhood of its upper equilibrium state is presented.
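A minimal sketch of how such a simulation could be set up is given below; it does not reproduce the paper's algorithms. The double pendulum is linearized about its upper equilibrium, the control torques are assumed to act directly on the generalized coordinates, and a saturated LQR feedback is used merely as a placeholder controller; all parameter values are illustrative.

```python
# Simulation sketch (placeholder controller, not the paper's algorithms):
# double pendulum linearized about the upper equilibrium, torques assumed to
# act directly on the generalized coordinates, saturated LQR feedback.
import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.integrate import solve_ivp

m1, m2, l1, l2, g = 1.0, 1.0, 1.0, 1.0, 9.81   # illustrative parameters

# Linearized dynamics about the upright position, absolute link angles q = (th1, th2):
# M q'' = K q + u, where the gravity term K destabilizes the equilibrium.
M = np.array([[(m1 + m2) * l1**2, m2 * l1 * l2],
              [m2 * l1 * l2,      m2 * l2**2]])
K = np.diag([(m1 + m2) * g * l1, m2 * g * l2])

Minv = np.linalg.inv(M)
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [Minv @ K,         np.zeros((2, 2))]])
B = np.vstack([np.zeros((2, 2)), Minv])

# Placeholder feedback: LQR gain, then saturation to respect a torque bound.
P = solve_continuous_are(A, B, np.eye(4), np.eye(2))
F = B.T @ P            # u = -F x before saturation
u_max = 10.0           # torque bound (illustrative)

def rhs(t, x):
    w = 0.1 * np.sin(3.0 * t) * np.array([1.0, -1.0])  # unknown bounded perturbation
    u = np.clip(-F @ x, -u_max, u_max)                  # bounded feedback torques
    return A @ x + B @ (u + w)

x0 = np.array([0.1, -0.05, 0.0, 0.0])   # small deviation from the upper equilibrium
sol = solve_ivp(rhs, (0.0, 10.0), x0, t_eval=np.linspace(0.0, 10.0, 1001))
print("deviation from the upper equilibrium at t = 10:", np.linalg.norm(sol.y[:, -1]))
```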