State representation learning with recurrent capsule networks
Published 28 Dec 2018 in cs.LG and cs.NE (arXiv:1812.11202v4)
Abstract: Unsupervised learning of compact, relevant state representations has proved very useful for solving complex reinforcement learning tasks. In this paper, we propose a recurrent capsule network that learns such representations by predicting the future observations in an agent's trajectory.
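The core idea — a recurrent encoder that compresses observations into a state and is driven by a next-observation prediction objective — can be sketched as follows. This is a hypothetical, heavily simplified illustration, not the paper's architecture: it uses a single recurrent layer with the capsule-style "squash" nonlinearity and only runs a forward pass (no routing, no training loop). All class and weight names are invented for this sketch.

```python
import math
import random

def squash(v):
    # Capsule-style nonlinearity: shrinks short vectors toward zero and
    # long vectors toward unit length, preserving orientation.
    norm_sq = sum(x * x for x in v)
    scale = norm_sq / (1.0 + norm_sq) / math.sqrt(norm_sq + 1e-9)
    return [scale * x for x in v]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def vadd(a, b):
    return [x + y for x, y in zip(a, b)]

class RecurrentCapsulePredictor:
    """Toy recurrent state encoder: the hidden state is squashed like a
    capsule vector, and a linear head predicts the next observation.
    (Illustrative only; the actual paper uses a full capsule network.)"""

    def __init__(self, obs_dim, state_dim, seed=0):
        rng = random.Random(seed)
        init = lambda r, c: [[rng.uniform(-0.1, 0.1) for _ in range(c)]
                             for _ in range(r)]
        self.W_in = init(state_dim, obs_dim)     # observation -> state
        self.W_rec = init(state_dim, state_dim)  # state -> state (recurrence)
        self.W_out = init(obs_dim, state_dim)    # state -> predicted next obs
        self.state_dim = state_dim

    def rollout(self, observations):
        state = [0.0] * self.state_dim
        predictions = []
        for obs in observations:
            # s_t = squash(W_in o_t + W_rec s_{t-1})
            state = squash(vadd(matvec(self.W_in, obs),
                                matvec(self.W_rec, state)))
            # Prediction of o_{t+1}; training would minimize its error.
            predictions.append(matvec(self.W_out, state))
        return state, predictions

model = RecurrentCapsulePredictor(obs_dim=4, state_dim=8)
trajectory = [[math.sin(0.3 * t + k) for k in range(4)] for t in range(5)]
final_state, preds = model.rollout(trajectory)
```

A prediction loss on `preds` versus the shifted trajectory would then shape `final_state` into a compact summary of the agent's history, which is the representation-learning signal the abstract describes.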