Generative replay effectiveness on complex input distributions
Determine whether generative replay, in which a separate generative model is trained to produce pseudo-inputs for past tasks and these samples are replayed while training on new tasks, still maintains strong continual-learning performance when the tasks' input distributions are substantially more complex than MNIST (e.g., natural images), rather than simple handwritten-digit data.
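The replay loop itself can be sketched in a few lines. The sketch below uses deliberately simple stand-ins (a diagonal-Gaussian "generator" and a nearest-class-mean "solver") purely to make the mechanism concrete; these are illustrative assumptions, not the VAE/GAN generator and neural classifier an actual generative-replay system would use. The structure is the part that matters: after a task, fit a generative model to its inputs; during the next task, sample pseudo-inputs, label them with the previous solver, and mix them into the new task's training data.

```python
import numpy as np

rng = np.random.default_rng(0)

class GaussianGenerator:
    """Stand-in generative model (assumption: a real system would use a
    VAE or GAN). Fits a diagonal Gaussian to a finished task's inputs
    and samples pseudo-inputs from it."""
    def fit(self, X):
        self.mu = X.mean(axis=0)
        self.sigma = X.std(axis=0) + 1e-6
        return self

    def sample(self, n):
        return rng.normal(self.mu, self.sigma, size=(n, self.mu.size))

def fit_solver(X, y):
    """Stand-in solver: nearest-class-mean classifier returned as a
    predict function (assumption: a real system would train a network)."""
    means = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
    def predict(Q):
        classes = np.array(sorted(means))
        dists = np.stack([np.linalg.norm(Q - means[c], axis=1)
                          for c in classes])
        return classes[dists.argmin(axis=0)]
    return predict

# Task 1: classes 0 and 1 (well-separated 2-D Gaussian clusters).
X1 = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y1 = np.repeat([0, 1], 50)
solver = fit_solver(X1, y1)
gen = GaussianGenerator().fit(X1)   # generator trained after task 1

# Task 2: classes 2 and 3. Replay pseudo-data for task 1 alongside it.
X2 = np.vstack([rng.normal(-3, 0.3, (50, 2)), rng.normal(6, 0.3, (50, 2))])
y2 = np.repeat([2, 3], 50)
Xr = gen.sample(100)        # pseudo-inputs for the past task
yr = solver(Xr)             # pseudo-labels from the previous solver
X_mix = np.vstack([X2, Xr]) # replayed samples mixed into task-2 data
y_mix = np.concatenate([y2, yr])
solver = fit_solver(X_mix, y_mix)
```

On this toy problem the retrained solver keeps high accuracy on task 1 without access to its real data; the open question above is precisely whether this still holds once the generator must model inputs far harder than separable Gaussians or MNIST digits.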
References
It therefore remains an open question whether generative replay will still be so successful for task protocols with more complicated input distributions.
— Three scenarios for continual learning
(1904.07734 - van de Ven et al., 2019) in Section 6 (Discussion)