Tackling the Zero-Shot Reinforcement Learning Loss Directly
Abstract: Zero-shot reinforcement learning (RL) methods aim to instantly produce a behavior for an RL task in a given environment, from a description of the reward function. These methods are usually tested by evaluating their average performance on a series of downstream tasks. Yet they cannot be trained directly for that objective unless the distribution of downstream tasks is known. Existing approaches either use other learning criteria [BBQ+18, TRO23, TO21, HDB+19], or explicitly set a prior on downstream tasks, such as reward functions given by a random neural network [FPAL24]. Here we prove that the zero-shot RL loss can be optimized directly, for a range of non-informative priors such as white noise rewards, temporally smooth rewards, "scattered" sparse rewards, or a combination of those. Thus, it is possible to learn the optimal zero-shot features algorithmically, for a wide mixture of priors. Surprisingly, the white noise prior leads to an objective almost identical to the one in VISR [HDB+19], via a different approach. This shows that some seemingly arbitrary choices in VISR, such as von Mises-Fisher distributions, do maximize downstream performance. This also suggests more efficient ways to tackle the VISR objective. Finally, we discuss some consequences and limitations of the zero-shot RL objective, such as its tendency to produce narrow optimal features if only Gaussian dense reward priors are used.
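For concreteness, one natural formalization of the zero-shot RL loss discussed above is a sketch in our own notation, not taken from the paper: the prior $\mu$, the returned policy $\pi_r$, and the value functions $V_r$ below are illustrative assumptions. The loss is the expected performance gap of the returned behavior under a prior over reward functions:

$$
\mathcal{L} \;=\; \mathbb{E}_{r \sim \mu}\!\left[\, V^{\star}_{r} \,-\, V^{\pi_r}_{r} \,\right],
$$

where $r$ is a reward function drawn from a prior $\mu$ (white noise, temporally smooth, sparse "scattered" rewards, or a mixture of these), $\pi_r$ is the policy the zero-shot agent outputs from the description of $r$, $V^{\pi_r}_{r}$ is that policy's value under $r$, and $V^{\star}_{r}$ is the optimal value. Minimizing such a loss over the agent's features is the kind of objective the paper argues can be tackled directly for these non-informative priors.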