Task-oriented autonomous representation of visual inputs to facilitate robot goal achievement

3 pages • Published: February 16, 2023

Abstract

State Representation Learning (SRL) is a field in Robotics and Artificial Intelligence that studies how to encode the observations of an environment in a way that facilitates performing specific tasks. A common approach is to use autoencoders, learning to reproduce the same state from a low-dimensional representation [1, 2, 3]. Although highly task-independent, this method may learn to encode features that are irrelevant to the task in which the encoding will be used. An alternative is to use elements related to the goal to be achieved and/or some knowledge about the environment and the problem [1] to produce an appropriate low-dimensional encoding that captures only the relevant knowledge. In this paper, we propose an approach to autonomously obtain latent spaces of the appropriate (low) dimension that permit an efficient representation of the sensory inputs, using information about the environment and the goal. To measure the performance of this methodology, we present the results of a series of simulations of robots performing a task that consists of catching a ball in different environments. In these cases, we have found that the models required to predict the final position of the ball, taking the learned encoding as input, are much simpler than those that would be required using the sensing information directly.

Keyphrases: encoders, neural networks, reinforcement learning, robotics, state representation learning

In: Alvaro Leitao and Lucía Ramos (editors). Proceedings of V XoveTIC Conference. XoveTIC 2022, vol 14, pages 57-59.
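To make the autoencoder baseline mentioned in the abstract concrete, the sketch below trains a minimal linear autoencoder that compresses a high-dimensional observation into a low-dimensional code and reconstructs it. All names, dimensions, and the synthetic data are illustrative assumptions for exposition; this is not the authors' implementation.

```python
import numpy as np

# Minimal sketch of the autoencoder baseline for state representation
# learning: learn to reproduce an observation from a low-dimensional code.
# Dimensions and data are assumptions, not taken from the paper.
rng = np.random.default_rng(0)

obs_dim, latent_dim = 32, 4           # assumed observation/latent sizes
X = rng.normal(size=(256, obs_dim))   # stand-in for sensor observations

# Linear encoder and decoder weights
W_enc = rng.normal(scale=0.1, size=(obs_dim, latent_dim))
W_dec = rng.normal(scale=0.1, size=(latent_dim, obs_dim))

def loss(X, W_enc, W_dec):
    Z = X @ W_enc                     # low-dimensional code
    X_hat = Z @ W_dec                 # reconstruction of the observation
    return np.mean((X - X_hat) ** 2)  # reconstruction error

lr = 0.01
initial = loss(X, W_enc, W_dec)
for _ in range(200):                  # plain gradient descent on the MSE
    Z = X @ W_enc
    err = Z @ W_dec - X               # reconstruction residual
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
final = loss(X, W_enc, W_dec)
```

After training, `X @ W_enc` plays the role of the learned low-dimensional state; the paper's point is that a downstream predictor (e.g. of the ball's final position) built on such a code can be much simpler than one fed the raw sensing data.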