Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
The paper proposes an improvement to popular 'successor representation' approaches in reinforcement learning via a mechanism for maintaining and quickly updating a distribution over multiple successor maps. This innovation enables the model to adapt better to environmental changes such as new goals or altered reward structures. All three reviewers agree that this is a strong paper that should be accepted, and I see no reason to contradict their opinion. While the reviewers were very positive, they did point out issues of clarity in the exposition; we remind the authors that their paper will reach a wider audience if they make the presentation and explanation as clear and simple as possible in the camera-ready version.
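For readers unfamiliar with the underlying concept: the classical successor representation stores, for each state, the expected discounted future occupancy of every other state, which lets values be recomputed quickly when rewards change. The sketch below shows only the standard tabular TD update for a single successor map, not the paper's multi-map mechanism; the function name and parameters are illustrative.

```python
import numpy as np

def sr_td_update(M, s, s_next, alpha=0.1, gamma=0.95):
    """One tabular TD update of a successor map M.

    M[s, s'] estimates the expected discounted number of future
    visits to s' when starting from s.
    """
    onehot = np.eye(M.shape[0])[s]
    # TD error: observed occupancy plus bootstrapped successor row,
    # minus the current estimate for the starting state.
    td_error = onehot + gamma * M[s_next] - M[s]
    M[s] = M[s] + alpha * td_error
    return M

n_states = 3
M = np.eye(n_states)            # initialise with immediate occupancy only
M = sr_td_update(M, 0, 1)       # observe a transition 0 -> 1

# Values factorise as V(s) = sum_{s'} M[s, s'] * r(s'), so a changed
# reward vector r yields new values without relearning M.
r = np.array([0.0, 0.0, 1.0])
V = M @ r
```

This decomposition of value into dynamics (M) and reward (r) is what makes successor-style methods attractive when goals or reward structures change, which is the setting the reviewed paper targets.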