TSER: Time Series Entity Resampler
- TSER is a two-stage bootstrap resampling method using GANs and TCNs to learn and replicate the full distribution and serial dependence in stationary time series.
- It matches or outperforms traditional circular block bootstrap methods in AR(1) settings, delivering empirical coverage closer to nominal targets across various levels.
- Leveraging adversarial training and a modular TCN design, TSER allows efficient generation of synthetic series without block stitching or post-processing adjustments.
The Time Series Entity Resampler (TSER) is a two-stage bootstrap methodology for dependent data based on generative adversarial networks (GANs), designed to learn and replicate the full distributional and serial dependence structure of stationary time series processes. TSER leverages temporal convolutional networks (TCNs) for both generator and discriminator networks, enabling direct sampling of realistic synthetic time series without reliance on traditional block-based resampling schemes. The method has demonstrated competitive or superior empirical coverage properties relative to the circular block bootstrap (CBB) in autoregressive (AR(1)) settings, particularly at moderate levels of temporal dependence (Dahl et al., 2021).
1. Formal Model and Adversarial Objective
TSER is instantiated in the context of stationary AR(1) processes, though the underlying design is modular in its model applicability. For observed data $X_1, \dots, X_n$ generated by $X_t = \rho X_{t-1} + \varepsilon_t$ with $|\rho| < 1$ and i.i.d. innovations $\varepsilon_t$, TSER addresses the resampling problem by adopting an adversarial generative framework.
Let $G_\theta$ denote a neural network generator that maps Gaussian noise vectors $z \sim N(0, I)$ to synthetic data vectors $\tilde{x} = G_\theta(z)$, and let $D_\phi$ denote a discriminator mapping sequences to scalar outputs. The standard GAN min-max problem is given by

$$\min_{\theta} \max_{\phi} \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D_\phi(x)\big] + \mathbb{E}_{z \sim N(0, I)}\big[\log\big(1 - D_\phi(G_\theta(z))\big)\big].$$
For stability and improved empirical performance, TSER employs the Wasserstein-GAN with gradient penalty (WGAN-GP), solving the following alternating maximization and minimization:
- Discriminator (D-step):

$$\max_{\phi} \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[D_\phi(x)\big] - \mathbb{E}_{z}\big[D_\phi(G_\theta(z))\big] - \lambda\, \mathbb{E}_{\hat{x}}\Big[\big(\lVert \nabla_{\hat{x}} D_\phi(\hat{x}) \rVert_2 - 1\big)^2\Big],$$

with $\hat{x} = \epsilon\, x + (1 - \epsilon)\, G_\theta(z)$, $\epsilon \sim U(0, 1)$, interpolated between real and generated samples.
- Generator (G-step):

$$\min_{\theta} \; -\mathbb{E}_{z}\big[D_\phi(G_\theta(z))\big].$$
The gradient-penalty weight $\lambda$ enforces the necessary 1-Lipschitz constraint on the critic; batch normalization is not used, since the penalty is applied per sample.
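The D-step and G-step objectives above translate directly into code. The following is a minimal sketch assuming PyTorch as the framework; `gradient_penalty`, `d_loss`, and `g_loss` are illustrative names, and the critic and generator are arbitrary callables:

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    """WGAN-GP term: lam * (||grad_x D(x_hat)||_2 - 1)^2 at interpolates x_hat."""
    eps = torch.rand(real.size(0), 1)                  # one epsilon per sample
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)[0]
    return lam * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

def d_loss(critic, gen, real, z):
    fake = gen(z).detach()                             # freeze G during the D-step
    return critic(fake).mean() - critic(real).mean() + gradient_penalty(critic, real, fake)

def g_loss(critic, gen, z):
    return -critic(gen(z)).mean()                      # negative critic score
```

Minimizing `d_loss` in the critic's parameters is equivalent to the maximization written above with the sign flipped, which is the usual implementation convention.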
2. Temporal Convolutional Network Architectures
Both generator and discriminator in TSER are constructed from one-dimensional causal dilated convolutional layers, forming a TCN architecture with receptive field and depth tailored to capture temporal dependencies.
- Generator $G_\theta$: Accepts noise $z \in \mathbb{R}^{L}$ with i.i.d. standard normal entries as input. The architecture includes 6 dilated convolutional layers with filters (128, 64, 32, 32, 16, 1), kernel size 2, and exponentially increasing dilations. All but the final layer use tanh activations; the output layer is linear, producing a synthetic block $\tilde{x} = G_\theta(z) \in \mathbb{R}^{L}$.
- Discriminator $D_\phi$: Receives real or synthetic blocks $x \in \mathbb{R}^{L}$; utilizes 6 dilated convolutional layers (filters: 8, 16, 32, 32, 64, 64), kernel size 2, matching the generator's dilations. Leaky-ReLU activations are used in each layer; layer outputs are aggregated using adaptive max pooling to form a feature vector, input to two fully connected layers yielding the scalar critic score $D_\phi(x)$.
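To make the TCN construction concrete, here is a minimal generator sketch assuming PyTorch; `CausalConv1d` and `TCNGenerator` are illustrative names, and the dilation schedule is assumed to double per layer to match the exponential scheme described above:

```python
import torch
from torch import nn

class CausalConv1d(nn.Module):
    """1-D convolution left-padded so output at time t sees only inputs <= t."""
    def __init__(self, c_in, c_out, k=2, dilation=1):
        super().__init__()
        self.pad = (k - 1) * dilation
        self.conv = nn.Conv1d(c_in, c_out, k, dilation=dilation)
    def forward(self, x):
        return self.conv(nn.functional.pad(x, (self.pad, 0)))

class TCNGenerator(nn.Module):
    """Filter widths follow the description above; doubling dilations are assumed."""
    def __init__(self, filters=(128, 64, 32, 32, 16, 1), k=2):
        super().__init__()
        layers, c_in = [], 1
        for i, c_out in enumerate(filters):
            layers.append(CausalConv1d(c_in, c_out, k, dilation=2 ** i))
            if i < len(filters) - 1:          # final layer stays linear
                layers.append(nn.Tanh())
            c_in = c_out
        self.net = nn.Sequential(*layers)
    def forward(self, z):                     # z: (batch, 1, length) Gaussian noise
        return self.net(z)

gen = TCNGenerator()
block = gen(torch.randn(4, 1, 64))            # synthetic block, same length as input
```

Because every convolution is left-padded, output length equals input length and no future information leaks backward in time.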
The receptive field of a stack of $J$ dilated causal layers with kernel size $k$ and dilations $d_1, \dots, d_J$ is $R = 1 + (k - 1)\sum_{j=1}^{J} d_j$, which controls the maximum temporal range the network can capture; dilations that double at each layer make $R$ grow exponentially in depth.
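The receptive-field formula can be checked in a few lines of Python; kernel size 2 with doubling dilations over six layers is an assumed configuration consistent with the architecture described above:

```python
def receptive_field(kernel_size, dilations):
    # Each layer widens the window by (k - 1) * dilation past timesteps
    return 1 + (kernel_size - 1) * sum(dilations)

# Six layers, kernel size 2, doubling dilations (assumed)
print(receptive_field(2, [1, 2, 4, 8, 16, 32]))  # -> 64
```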
3. Two-Stage TSER Resampling Algorithm
The TSER methodology proceeds through two principal stages: adversarial training and bootstrap sampling.
Stage 1: GAN Training
- Sample mini-batches of real blocks of length $L$ from the observed series $X_1, \dots, X_n$.
- Independently generate fake blocks by sampling noise $z \sim N(0, I_L)$ and evaluating $G_\theta(z)$.
- Update the discriminator multiple times per iteration using the WGAN-GP loss with gradient penalty.
- Alternate with generator updates based on the negative expected discriminator output on generated blocks.
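The Stage 1 loop can be sketched end to end. This is a simplified illustration assuming PyTorch, with small MLP stand-ins for the TCNs, an illustrative block length and batch size, and placeholder data in place of the observed series:

```python
import torch
from torch import nn

L, BATCH, N_CRITIC = 16, 32, 5           # illustrative block length / batch / D-steps
# MLP stand-ins for the TCN generator and critic (architectures simplified)
G = nn.Sequential(nn.Linear(L, 32), nn.Tanh(), nn.Linear(32, L))
D = nn.Sequential(nn.Linear(L, 32), nn.LeakyReLU(0.2), nn.Linear(32, 1))
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3, betas=(0.5, 0.9))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3, betas=(0.5, 0.9))

def grad_penalty(real, fake, lam=10.0):
    eps = torch.rand(real.size(0), 1)
    mix = (eps * real + (1 - eps) * fake).requires_grad_(True)
    g = torch.autograd.grad(D(mix).sum(), mix, create_graph=True)[0]
    return lam * ((g.norm(2, dim=1) - 1.0) ** 2).mean()

series = torch.randn(2000)               # placeholder for the observed series
for _ in range(10):                      # a few illustrative iterations
    for _ in range(N_CRITIC):            # several D updates per G update
        starts = torch.randint(0, len(series) - L, (BATCH,))
        real = torch.stack([series[s:s + L] for s in starts])
        fake = G(torch.randn(BATCH, L)).detach()
        d_loss = D(fake).mean() - D(real).mean() + grad_penalty(real, fake)
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    g_loss = -D(G(torch.randn(BATCH, L))).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The alternating structure, with multiple critic updates per generator update, mirrors standard WGAN-GP practice.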
Stage 2: Bootstrap Sampling
- After training, fix the learned generator parameters $\hat{\theta}$.
- For each bootstrap replicate, draw fresh noise $z^{*}$ and generate a synthetic series $G_{\hat{\theta}}(z^{*})$ of the target length $n$.
- Compute bootstrap statistics (e.g., least-squares AR(1) coefficient).
- Aggregate the bootstrap estimates to obtain percentile confidence intervals.
No detrending, variance scaling, or other post-processing is required; stationarity and dependence structure are preserved by design (Dahl et al., 2021).
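Stage 2 can be illustrated in NumPy. Here `stand_in_generator` is a hypothetical placeholder that simulates an AR(1) path at the point where the trained TSER generator would be called on fresh noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_lstsq(x):
    # Least-squares AR(1) coefficient: regress x_t on x_{t-1}
    return float(np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1]))

def stand_in_generator(n):
    # Hypothetical stand-in for the trained generator G_theta: simulates an
    # AR(1) path with rho = 0.5; TSER would instead evaluate the fitted TCN
    # generator on fresh Gaussian noise.
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    return x

def percentile_ci(generator, n, n_boot=500, level=0.95):
    # Draw bootstrap replicates, compute the statistic, take percentile bounds
    stats = np.array([ar1_lstsq(generator(n)) for _ in range(n_boot)])
    alpha = 1.0 - level
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

lo, hi = percentile_ci(stand_in_generator, n=400)
```

Swapping `stand_in_generator` for the fitted generator is the only change needed to obtain the TSER percentile interval.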
4. Empirical Evaluation Against Circular Block Bootstrap
TSER’s performance is benchmarked for empirical coverage accuracy against CBB in AR(1) settings:
- Data are simulated from AR(1) processes $X_t = \rho X_{t-1} + \varepsilon_t$ across a range of autoregressive coefficients $\rho$ and sample sizes (exact design values as reported in Dahl et al., 2021).
- A single GAN training block length and mini-batch size are fixed across experiments; each fitted generator produces a large set of synthetic series, evaluated over many independent draws of the data-generating process.
- Empirical coverage of percentile confidence intervals for $\rho$ at nominal levels (80%, 90%, 95%, 99%) is evaluated.
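For reference, the CBB comparator can be sketched in a few lines of NumPy (block length and seed are illustrative); blocks of consecutive observations are drawn with uniform starting points and wrap circularly past the end of the series:

```python
import numpy as np

def circular_block_bootstrap(x, block_len, rng):
    # Draw enough blocks to cover the series, wrapping indices past the end
    n = len(x)
    n_blocks = -(-n // block_len)                        # ceiling division
    starts = rng.integers(0, n, size=n_blocks)
    idx = (starts[:, None] + np.arange(block_len)) % n   # circular indices
    return x[idx].ravel()[:n]                            # trim to original length

x = np.arange(100.0)
resampled = circular_block_bootstrap(x, block_len=7, rng=np.random.default_rng(1))
```

Unlike TSER, every resampled value is an observation from the original series, which is the source of CBB's blockwise stitching artifacts.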
Key results:
- At moderate levels of dependence $\rho$, TSER delivers empirical coverage closer to the nominal targets than CBB at every level considered, and this advantage is robust to the choice of CBB block size.
- TSER reliably replicates the sample ACF and PACF structure of the AR(1) process, unlike CBB, whose resamples exhibit under-dispersion at higher $\rho$.
- At high $\rho$ (strong dependence), both methods experience degraded performance, but TSER with increased TCN depth remains competitive or superior.
5. Strengths, Limitations, and Extensions
Strengths
- TSER learns and replicates both marginal and serial dependence structures, not limited to blockwise overlap.
- Once trained, the generator can produce an arbitrary number of long synthetic trajectories without block stitching.
- Requires selection of a single TCN architecture and training block length, avoiding the need for CBB block size tuning.
Limitations
- GAN training is computationally intensive, requiring significantly more time per replication than CBB.
- Multiple GAN and TCN hyperparameters demand tuning.
- Lack of general asymptotic theory for the validity of the TSER bootstrap in settings with complex dependence.
- Limited ability to capture very long-range dependence (e.g., near-unit-root processes) unless the receptive field is enlarged; mode collapse remains a risk.
Possible Extensions
- Multivariate TSER for vector autoregressions (VAR) by broadening convolutional filters.
- Integration of conditional GANs for exogenous covariates or regime-switching dynamics.
- Deeper or wider TCNs to accommodate ARMA($p$, $q$) models or fractional processes.
- Embedding normalizing flows or GARCH-style layers for heavy-tailed marginal distributions.
- Theoretical investigation of convergence properties under mixing conditions.
6. Context and Related Methodologies
TSER represents a departure from traditional bootstrap and surrogate data techniques for time series by leveraging adversarial training to directly match full joint distributions. The use of WGAN-GP (Arjovsky et al., 2017; Gulrajani et al., 2017) and TCN architectures enables explicit modeling of stationary dependence structures. Compared to block bootstrap methods, which are constrained by block length tuning and blockwise structural assumptions, TSER provides a model-agnostic framework that internalizes temporal relationships, subject to practical resource and hyperparameter considerations (Dahl et al., 2021).
A plausible implication is that further advances in GAN theory and architecture may directly expand the range of stationary and non-stationary processes amenable to GAN-based bootstrap resampling, particularly with extensions for multivariate, regime-switching, and long-memory contexts.