Safe Robust Framework for Reinforcement Learning-based Control of Indoor Vehicles
Abstract
This paper presents the design of a safe, data-aided steering control for indoor vehicles based on a robust supervisory framework. The goal of the method is to combine effective motion, achieved through reinforcement learning (RL)-based control, with guaranteed safe motion, provided by robust control. The RL-based control is designed using the Proximal Policy Optimization (PPO) method, in which actor and critic agents are employed. The supervisory robust control is selected in a form that guarantees robust stability against an additive input disturbance. The effectiveness of the combined approach is illustrated through simulations and experimental test scenarios. For test purposes, an F1TENTH-type small-scale test vehicle is used, whose lap time is minimized by the proposed control system.
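As an illustration of the actor-critic structure mentioned in the abstract, the following is a minimal sketch of a PPO update step. It assumes a PyTorch implementation; the state features, network sizes, and hyperparameters (e.g. the clip ratio) are illustrative assumptions and do not reflect the paper's actual controller design.

```python
# Minimal PPO actor-critic sketch for a continuous steering action.
# All dimensions and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

state_dim, action_dim = 4, 1   # assumed: e.g. lateral error, heading error, speed, curvature
actor = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, action_dim))
critic = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, 1))
log_std = torch.zeros(action_dim, requires_grad=True)  # state-independent action std
optimizer = torch.optim.Adam(
    list(actor.parameters()) + list(critic.parameters()) + [log_std], lr=3e-4
)

def ppo_update(states, actions, old_log_probs, returns, advantages, clip_eps=0.2):
    """One clipped-surrogate PPO step on a batch of collected transitions."""
    dist = torch.distributions.Normal(actor(states), log_std.exp())
    log_probs = dist.log_prob(actions).sum(dim=-1)
    ratio = (log_probs - old_log_probs).exp()

    # Clipped surrogate objective keeps the updated policy close to the old one.
    surr1 = ratio * advantages
    surr2 = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    actor_loss = -torch.min(surr1, surr2).mean()

    # Critic is regressed toward the empirical returns.
    critic_loss = (critic(states).squeeze(-1) - returns).pow(2).mean()

    loss = actor_loss + 0.5 * critic_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```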