Generative replay effectiveness on complex input distributions

Determine whether generative replay (training a separate generative model to produce pseudo-inputs for past tasks and interleaving these samples with current-task data during training) maintains strong continual-learning performance when the tasks' input distributions are substantially more complex than MNIST's handwritten digits, e.g., natural images.
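As a concrete illustration of the mechanism under study (not the paper's implementation), the following NumPy sketch runs a toy class-incremental protocol: a per-class Gaussian density stands in for the generative model, and a linear softmax classifier is trained on new-task data mixed with replayed pseudo-samples. The blob locations, model choices, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_blobs(centers, labels, n=200, noise=0.3):
    """Gaussian blobs: one cluster per (center, class-label) pair."""
    X = np.vstack([rng.normal(c, noise, size=(n, 2)) for c in centers])
    y = np.concatenate([np.full(n, lab) for lab in labels])
    return X, y

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def train(W, X, y, steps=500, lr=0.2):
    """Batch gradient descent on softmax cross-entropy (bias folded in)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    Y = np.eye(4)[y]
    for _ in range(steps):
        W -= lr * Xb.T @ (softmax(Xb @ W) - Y) / len(y)
    return W

def accuracy(W, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.mean((Xb @ W).argmax(axis=1) == y)

# Class-incremental toy protocol: task 1 introduces classes 0/1,
# task 2 later introduces classes 2/3 in a nearby input region.
X1, y1 = make_blobs([(-2, 0), (2, 0)], [0, 1])
X2, y2 = make_blobs([(-2, 1), (2, 1)], [2, 3])

W = train(np.zeros((3, 4)), X1, y1)  # train on task 1

# "Generator": a per-class Gaussian (mean, covariance) fitted to task-1
# data. It stands in for the VAE/GAN a real generative-replay system
# would train; replayed labels come from the stored class identities.
gen = {c: (X1[y1 == c].mean(axis=0), np.cov(X1[y1 == c].T)) for c in (0, 1)}
Xr = np.vstack([rng.multivariate_normal(*gen[c], size=200) for c in (0, 1)])
yr = np.repeat([0, 1], 200)

# Task 2 without replay (prone to forgetting task 1) vs. with replay.
W_no = train(W.copy(), X2, y2)
W_re = train(W.copy(), np.vstack([X2, Xr]), np.concatenate([y2, yr]))

print(f"task-1 accuracy, no replay:   {accuracy(W_no, X1, y1):.2f}")
print(f"task-1 accuracy, with replay: {accuracy(W_re, X1, y1):.2f}")
print(f"task-2 accuracy, with replay: {accuracy(W_re, X2, y2):.2f}")
```

Here a simple Gaussian fits the old tasks' inputs almost perfectly, so replay works; the open question above is precisely whether this still holds when the generator must model something far harder than blobs or MNIST digits, such as natural images.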

Background

The paper evaluates several continual learning methods across three scenarios—task-incremental, domain-incremental, and class-incremental—using split MNIST and permuted MNIST. Replay-based approaches, including generative replay, are found to perform well, particularly when task identity must be inferred.

However, the authors note a limitation: MNIST images are relatively easy to generate. This raises uncertainty about whether generative replay will retain its effectiveness for task protocols involving more complex input distributions, such as those from natural-image datasets.

References

"It therefore remains an open question whether generative replay will still be so successful for task protocols with more complicated input distributions."

Three scenarios for continual learning (van de Ven et al., 2019, arXiv:1904.07734), Section 6 (Discussion)