Fast Prototype Framework for Deep Reinforcement Learning-based Trajectory Planner

Authors

  • Árpád Fehér
    Affiliation

    Department of Control for Transportation and Vehicle Systems, Faculty of Transportation Engineering and Vehicle Engineering, Budapest University of Technology and Economics, H-1111 Budapest, Műegyetem rkp. 3., Hungary

  • Szilárd Aradi
    Affiliation

    Department of Control for Transportation and Vehicle Systems, Faculty of Transportation Engineering and Vehicle Engineering, Budapest University of Technology and Economics, H-1111 Budapest, Műegyetem rkp. 3., Hungary

  • Tamás Bécsi
    Affiliation

    Department of Control for Transportation and Vehicle Systems, Faculty of Transportation Engineering and Vehicle Engineering, Budapest University of Technology and Economics, H-1111 Budapest, Műegyetem rkp. 3., Hungary

https://doi.org/10.3311/PPtr.15837

Abstract

Reinforcement Learning, one of the main branches of machine learning, has gained considerable popularity in recent years, which also affects the vehicle industry and research on automated driving. However, due to their self-training approach, these techniques have high computational resource requirements. Their development can be separated into three levels: training in simulation, validation through vehicle dynamics software, and real-world tests. Ensuring the portability of the designed algorithms between these levels is difficult. Therefore, this paper presents a fast prototyping framework that supports development across all three levels. A case study is also given to provide better insight into the development process, in which an online trajectory planner is trained and evaluated in both vehicle simulation and real-world environments.
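The portability concern raised in the abstract can be illustrated with a minimal, hypothetical sketch (not the authors' framework or code): if the planner interacts with every test level through the same environment interface, the trained policy can be moved from a fast training simulator to vehicle dynamics software and, eventually, the real vehicle without modification. All class, function, and variable names below are assumptions made for illustration only.

```python
# Minimal illustrative sketch, assuming a gym-style observation/action loop.
# Every name here is hypothetical; it is not taken from the paper.
import numpy as np


class PlannerEnv:
    """Common interface shared by all test levels: reset() and step()."""

    def reset(self) -> np.ndarray:
        raise NotImplementedError

    def step(self, action: np.ndarray):
        raise NotImplementedError


class KinematicSimEnv(PlannerEnv):
    """Cheap kinematic model standing in for the fast training simulator."""

    def __init__(self, horizon: int = 200):
        self.horizon = horizon
        self.t = 0
        self.state = np.zeros(4)  # e.g. lateral error, heading error, speed, curvature

    def reset(self) -> np.ndarray:
        self.t = 0
        self.state = np.random.uniform(-0.1, 0.1, size=4)
        return self.state.copy()

    def step(self, action: np.ndarray):
        # Toy dynamics: the action nudges lateral and heading error toward zero.
        self.state[:2] += 0.1 * action - 0.05 * self.state[:2]
        self.t += 1
        reward = -float(np.sum(self.state[:2] ** 2))  # penalize tracking error
        done = self.t >= self.horizon
        return self.state.copy(), reward, done


def rollout(env: PlannerEnv, policy, episodes: int = 10) -> float:
    """Run the same policy against any PlannerEnv implementation."""
    total = 0.0
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            obs, reward, done = env.step(policy(obs))
            total += reward
    return total / episodes


if __name__ == "__main__":
    # A trivial proportional "policy" stands in for the trained DRL agent.
    policy = lambda obs: -np.clip(obs[:2], -1.0, 1.0)
    print("avg return in kinematic sim:", rollout(KinematicSimEnv(), policy))
```

Under this kind of separation, a vehicle dynamics package or the real test vehicle would be wrapped as another PlannerEnv subclass, so the evaluation code and the trained agent stay unchanged across levels.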

Keywords:

motion planning, reinforcement learning, testing, development framework

Published Online

2020-06-29

How to Cite

Fehér, Á., Aradi, S., Bécsi, T. (2020) “Fast Prototype Framework for Deep Reinforcement Learning-based Trajectory Planner”, Periodica Polytechnica Transportation Engineering, 48(4), pp. 307–312. https://doi.org/10.3311/PPtr.15837

Issue

Vol. 48 No. 4 (2020)

Section

Articles