
Active Learning in Discrete-Time Stochastic Systems

Author(s): Tadeusz Banek (Lublin University of Technology, Poland) and Edward Kozlowski (Lublin University of Technology, Poland)
Copyright: 2011
Pages: 22
Source title: Knowledge-Based Intelligent System Advancements: Systemic and Cybernetic Approaches
Source Author(s)/Editor(s): Jerzy Jozefczyk (Wroclaw University of Technology, Poland) and Donat Orski (Wroclaw University of Technology, Poland)
DOI: 10.4018/978-1-61692-811-7.ch016


Abstract

A general approach to self-learning based on the ideas of adaptive (dual) control is presented. As a leading example, we consider the control problem for a stochastic system with uncertainty: some of the system's parameters are unknown and are modeled as random variables with a known a priori distribution. To optimize an objective function, the controller has to learn the values of the system's parameters. The main difficulty is that it must learn and optimize the objective function in parallel, i.e., at the same time. Moreover, these two goals, considered separately, do not necessarily coincide, and the central problem in adaptive control is to find a trade-off between them. From the self-learning perspective, two directions are visible. The first is to extract the learning procedure from an optimal adaptive control law and to formulate it as a cybernetic principle of self-learning. The second is to consider a control problem with a special objective function that measures our knowledge about the unknown parameters; it can be the Fisher information (Banek & Kulikowski, 2003), the joint entropy (for example, Saridis, 1988; Banek & Kozlowski, 2006), or something else. Such an objective function forces the controller to steer the system along trajectories that are rich in information about the unknown quantities. In this chapter the authors follow both directions. First they obtain conditions of optimality for a general adaptive control problem and a resulting algorithm for computing extremal controls. The results are then applied to a simple example, the Linear Quadratic Gaussian (LQG) problem. Using analytical results and numerical simulations, the authors show how control actions depend on the a priori knowledge about the system.
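To make the learning-while-controlling setting concrete, the sketch below uses a hypothetical scalar linear-Gaussian system (not taken from the chapter): x_{t+1} = a*x_t + b*u_t + w_t with known a, unknown gain b given a Gaussian prior, and Gaussian noise. It shows the conjugate Bayesian update of the belief about b after one transition, plus a certainty-equivalence controller that simply acts as if b equals its current posterior mean — the strategy the abstract's first conclusion shows to be suboptimal. All names and the scalar setup are illustrative assumptions.

```python
# Hypothetical scalar system x_{t+1} = a*x_t + b*u_t + w_t with known a,
# unknown gain b ~ N(mu, var), and noise w_t ~ N(0, sw2).
# The residual r = x_next - a*x is a noisy observation of b*u, so the
# Gaussian belief about b can be updated in closed form (conjugacy).

def update_belief(mu, var, u, r, sw2):
    """Posterior mean/variance of b after observing residual r = b*u + w."""
    if u == 0.0:
        return mu, var                 # zero input carries no information about b
    prec = 1.0 / var + u * u / sw2     # precisions add for Gaussian models
    var_new = 1.0 / prec
    mu_new = var_new * (mu / var + u * r / sw2)
    return mu_new, var_new

def ce_control(x, a, mu_b):
    """Certainty-equivalence control: treat the posterior mean as the true b
    and pick u to drive the predicted next state to zero."""
    return -a * x / mu_b if mu_b != 0.0 else 0.0
```

Note that `update_belief` makes the abstract's tension explicit: the posterior variance shrinks by u^2/sw2 in precision, so a control that is good for regulation (e.g., u = 0 at x = 0) may be worthless for learning.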
The first conclusion is that the natural methodological candidate for an optimal self-learning strategy, the "certainty equivalence principle," fails to satisfy the optimality conditions: the optimal control obtained under perfect knowledge of the system is not directly usable in the partial-information case, and the need for active learning is an essential factor. The differences between these controls are visible at the level of computations and should be interpreted at a higher level of cybernetic thinking in order to obtain a satisfactory explanation, perhaps in the form of another principle. In the absence of perfect knowledge of the parameter values, the control actions are restricted by a measurability requirement, and the authors compute the Lagrange multiplier associated with this "information constraint." The multiplier is called a "dual" or "shadow" price and is interpreted in the literature as the incremental value of information. The authors compute this Lagrange multiplier and analyze its evolution to see how its value changes over time. As a second kind of conclusion, they obtain a characterization of self-learning from the information-theoretic point of view. In the last section the authors follow the second direction. To estimate the speed of self-learning, they choose the conditional entropy as the objective function and state the optimal control problem of minimizing the conditional entropy of the system under consideration. Using the general results obtained at the beginning, they derive the conditions of optimality and a resulting algorithm for computing the extremal controls. The optimal evolution of the conditional entropy reveals much about the intensity of self-learning and its distribution over time.
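The conditional-entropy objective can be illustrated in the same hypothetical scalar Gaussian setting (an assumption for illustration, not the chapter's model): a Gaussian belief with variance var has differential entropy 0.5*log(2*pi*e*var), and applying input u shrinks the posterior variance of the unknown gain to 1/(1/var + u^2/sw2). The entropy drop per step therefore grows with |u|, which is exactly the incentive toward information-rich trajectories that an entropy-minimizing controller responds to.

```python
import math

# Differential entropy of a Gaussian belief with variance var.
def gaussian_entropy(var):
    return 0.5 * math.log(2.0 * math.pi * math.e * var)

# Posterior variance of the unknown gain after one step with input u,
# in the scalar model r = b*u + w, w ~ N(0, sw2): precisions add.
def posterior_var(var, u, sw2):
    return 1.0 / (1.0 / var + u * u / sw2)

# Information gained about b in one step: the drop in conditional entropy.
def entropy_drop(var, u, sw2):
    return gaussian_entropy(var) - gaussian_entropy(posterior_var(var, u, sw2))
```

For example, with var = sw2 = 1, the input u = 1 halves the posterior variance and yields an entropy drop of 0.5*log(2) nats, while u = 0 yields no information at all; an objective built on the conditional entropy would trade some regulation performance for such drops.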
