Improving Reinforcement Learning Exploration by Autoencoders

Authors

  • Gabor Paczolay
    Department of Control Engineering, Budapest University of Technology and Economics, Magyar Tudósok körútja 2., H-1117 Budapest, Hungary

  • Istvan Harmati
    Department of Control Engineering, Budapest University of Technology and Economics, Magyar Tudósok körútja 2., H-1117 Budapest, Hungary

https://doi.org/10.3311/PPee.36789

Abstract

Reinforcement learning is a field with great potential for solving engineering problems without domain knowledge. However, the problem of exploration and exploitation emerges when one tries to balance a system between its learning phase and proper execution. In this paper, a new method is proposed that utilizes autoencoders to manage the exploration rate in an epsilon-greedy exploration algorithm. The error between the real state and the state reconstructed by the autoencoder becomes the basis of the exploration-exploitation rate. The proposed method is then examined in two experiments: one benchmark is the cartpole problem, while the other is a gridworld example created for this paper to examine long-term exploration. Both experiments show that the proposed method performs better in these scenarios.
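As a rough illustration of the idea described in the abstract, the sketch below maps an autoencoder's reconstruction error on the current state to the epsilon of an epsilon-greedy policy, so that poorly reconstructed (novel) states trigger more exploration. This is a minimal sketch assuming a PyTorch setup; the network sizes, the `scale`, `eps_min`, and `eps_max` parameters, and the clipped linear error-to-epsilon mapping are illustrative assumptions, not the paper's exact AutE-DQN formulation.

```python
import numpy as np
import torch
import torch.nn as nn


class StateAutoencoder(nn.Module):
    # Hypothetical small autoencoder over raw environment states; its
    # reconstruction error serves as a novelty signal for exploration.
    def __init__(self, state_dim: int, latent_dim: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, 16), nn.ReLU(), nn.Linear(16, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, state_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


def epsilon_from_reconstruction(ae: StateAutoencoder, state,
                                scale: float = 1.0,
                                eps_min: float = 0.05,
                                eps_max: float = 1.0) -> float:
    # Novel states reconstruct poorly and yield a high epsilon; familiar
    # states a low one. The clipped linear mapping is an assumption.
    with torch.no_grad():
        s = torch.as_tensor(state, dtype=torch.float32)
        err = torch.mean((ae(s) - s) ** 2).item()
    return float(np.clip(scale * err, eps_min, eps_max))


def select_action(q_values: torch.Tensor, epsilon: float) -> int:
    # Standard epsilon-greedy choice using the state-dependent epsilon.
    if np.random.rand() < epsilon:
        return int(np.random.randint(q_values.shape[-1]))
    return int(torch.argmax(q_values).item())
```

In a full DQN loop, the autoencoder would presumably be trained on visited states alongside the Q-network, so that frequently seen states reconstruct well and drive epsilon down over time.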

Keywords:

reinforcement learning, DQN, autoencoders, exploration, AutE-DQN

Published Online

2024-07-01

How to Cite

Paczolay, G., Harmati, I. “Improving Reinforcement Learning Exploration by Autoencoders”, Periodica Polytechnica Electrical Engineering and Computer Science, 2024. https://doi.org/10.3311/PPee.36789

Section

Articles