When does Self-Prediction help? Understanding Auxiliary Tasks in Reinforcement Learning

Abstract

We investigate the impact of auxiliary learning tasks such as observation reconstruction and latent self-prediction on the representation learning problem in reinforcement learning. We also study how they interact with distractions and observation functions in the MDP. We provide a theoretical analysis of the learning dynamics of observation reconstruction, latent self-prediction, and TD learning in the presence of distractions and observation functions under linear model assumptions. With this formalization, we are able to explain why latent self-prediction is a helpful auxiliary task, while observation reconstruction can provide more useful features when used in isolation. Our empirical analysis shows that the insights obtained from our learning dynamics framework predict the behavior of these loss functions beyond the linear model assumption in non-linear neural networks. This reinforces the usefulness of the linear model framework not only for theoretical analysis but also for practical benefit in applied problems.
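To make the three losses concrete, below is a minimal numpy sketch of how observation reconstruction, latent self-prediction, and semi-gradient TD learning might be combined under a shared linear encoder. It mirrors the linear-model setting in spirit only: the parameter names (W, D, P, w), the stop-gradient targets, and the simultaneous update rule are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch (illustrative assumptions throughout, not the paper's
# exact formulation): the three losses from the abstract under a shared
# linear encoder z = W x, trained with hand-derived gradient steps.
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 3          # observation and latent dimensions (arbitrary)
lr, gamma = 1e-2, 0.99

W = rng.normal(scale=0.1, size=(k, d))  # shared linear encoder
D = rng.normal(scale=0.1, size=(d, k))  # decoder (observation reconstruction)
P = rng.normal(scale=0.1, size=(k, k))  # latent transition (self-prediction)
w = np.zeros(k)                         # linear value head (TD learning)

def step(x, r, x_next):
    """One gradient step on all three losses for a transition (x, r, x')."""
    global W, D, P, w
    z = W @ x
    z_next = W @ x_next  # fixed target (stop-gradient), a common choice

    # Observation reconstruction: L = ||D W x - x||^2
    e_rec = D @ z - x
    g_D = 2 * np.outer(e_rec, z)
    g_W = 2 * np.outer(D.T @ e_rec, x)

    # Latent self-prediction: L = ||P W x - sg(W x')||^2
    e_sp = P @ z - z_next
    g_P = 2 * np.outer(e_sp, z)
    g_W += 2 * np.outer(P.T @ e_sp, x)

    # Semi-gradient TD(0): delta = r + gamma * w^T sg(W x') - w^T W x
    delta = r + gamma * w @ z_next - w @ z
    g_w = -2 * delta * z
    g_W += -2 * delta * np.outer(w, x)

    D -= lr * g_D
    P -= lr * g_P
    w -= lr * g_w
    W -= lr * g_W

# Dummy transitions just to show the updates run end to end.
for _ in range(1000):
    x, x_next = rng.normal(size=d), rng.normal(size=d)
    step(x, rng.normal(), x_next)
```

How the gradients flowing into the shared encoder W interact, when each loss is used alone versus as an auxiliary term alongside TD learning, is roughly the kind of question the linear analysis addresses.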

Publication
Reinforcement Learning Conference (RLC)

Toronto Intelligent Systems Lab Co-authors

Claas Voelcker
PhD Student

My research focuses on task-aligned and value-aware model learning for reinforcement learning and control: agents that learn world models which are correct where it matters, adapting their losses to the task at hand.

Igor Gilitschenski
Assistant Professor