FourCastNet3: Geometric Weather Forecasting

Updated 1 February 2026
  • FourCastNet3 is a purely convolutional geometric ML model that leverages spherical operator blocks to forecast global weather probabilistically.
  • The model employs advanced ensemble techniques with hidden Markov processes and spectral fidelity metrics to achieve near-unity spread–skill ratios and robust calibration.
  • FCN3 delivers exceptional computational performance by generating 90-day, 0.25° global forecasts in under 20 seconds on a single GPU, enabling real-time meteorological applications.

FourCastNet3 (FCN3) is a purely convolutional, geometric machine learning model for probabilistic, global, medium- to long-range weather forecasting. It approximates the evolution operator on the sphere $S^2$, integrating spherical geometry into both the model architecture and physically consistent ensemble prediction. FCN3 delivers forecasting accuracy that surpasses leading conventional ensemble models and rivals diffusion-based ML methods, while achieving exceptional computational efficiency—enabling 90-day, 6-hourly global forecasts at 0.25° resolution in under 20 seconds on a single GPU. The system is robust to initialization sources and maintains spectral and ensemble skill at extended forecasts up to 60 days. These advances are realized through a cascade of spherical neural-operator blocks, scalable domain-decomposition-based training, and novel statistical ensemble generation, establishing FCN3's role as a candidate for practical meteorological and climate applications, including risk assessment and early warning systems (Bonev et al., 16 Jul 2025, Gupta et al., 25 Jan 2026).

1. Geometric Model Architecture and Spherical Operator Design

FCN3's architecture consists of three stages—an encoder, a stack of spherical neural-operator ("processor") blocks, and a decoder. Primary inputs include atmospheric levels, surface variables, auxiliary fields, and stochastic latent noise, presented on a 0.25° equiangular latitude–longitude grid ($721\times1440$ points); internal computations are conducted on a coarser $360\times720$ Gaussian grid.

The encoder and decoder employ grouped discrete-continuous (DISCO) convolutions combined with bilinear interpolation for dimension changes, delaying channel mixing to deep layers to avoid premature blending of physically distinct variables.

The processor comprises 10 ConvNeXt-style residual blocks: 2 global (spectral) blocks and 8 local blocks. Each block contains a spherical convolution—either global (spectral) or local (DISCO)—followed by a two-layer MLP with GeLU activation and learnable LayerScale residual weights. Global convolutions invoke the convolution theorem on S2S^2:

$$(u \star k)(x) = \int_{S^2} u(y)\,\overline{k(R_x^{-1} y)}\,d\mu(y)$$

with spherical-harmonic coefficient update:

$$\widehat{(u \star k)}_{\ell m} = \hat{u}_{\ell m}\cdot\hat{k}_{\ell 0}.$$

Local DISCO convolutions utilize compactly supported Morlet-like wavelet basis functions to yield sparse neighborhood connectivity. Spherical harmonic transforms (SHTs) and inverse SHTs efficiently map between grid samples and truncated spherical coefficients with $O(n_{\text{lat}}^2\, n_{\text{lon}} \log n_{\text{lon}})$ scaling.
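The coefficient update above amounts to a per-degree diagonal scaling: each degree $\ell$ of the input is multiplied by the kernel's zonal ($m=0$) coefficient. A minimal NumPy sketch, assuming an illustrative triangular coefficient layout `u_hat[l, m]` for $m \ge 0$ (not the actual torch-harmonics storage format):

```python
import numpy as np

def spectral_conv(u_hat: np.ndarray, k_hat0: np.ndarray) -> np.ndarray:
    """Global spherical convolution in coefficient space.

    By the convolution theorem on S^2, the output coefficients are
    (u * k)_{lm} = u_{lm} * k_{l0}: every order m within a degree l is
    scaled by the kernel's zonal coefficient k_{l0}.

    u_hat  : complex array of shape (lmax+1, lmax+1); u_hat[l, m] for
             m >= 0, entries with m > l unused (zero).
    k_hat0 : array of shape (lmax+1,), the kernel's zonal coefficients.
    """
    return u_hat * k_hat0[:, None]

# Example: a low-pass kernel whose zonal coefficients decay with degree l.
lmax = 5
rng = np.random.default_rng(0)
u_hat = np.triu(rng.standard_normal((lmax + 1, lmax + 1))).T.astype(complex)
k_hat0 = np.exp(-0.5 * np.arange(lmax + 1))
out = spectral_conv(u_hat, k_hat0)
```

Because the operation is diagonal per degree, it costs only $O(\ell_\text{max}^2)$ once the SHT has been applied, which is what makes the global blocks cheap relative to grid-space convolutions.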

Spherical power spectra are computed as:

$$P(\ell) = \frac{1}{2\ell+1}\sum_{m=-\ell}^{\ell} |\hat{u}_{\ell m}|^2.$$
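This definition can be evaluated directly on a coefficient array. A minimal sketch assuming the real-field convention that only $m \ge 0$ coefficients are stored, with negative-$m$ terms implied by conjugate symmetry (an illustrative layout, not FCN3's internal one):

```python
import numpy as np

def power_spectrum(u_hat: np.ndarray) -> np.ndarray:
    """Spherical-harmonic power spectrum P(l).

    u_hat[l, m] holds coefficients for m >= 0; since the m < 0
    coefficients of a real field are complex conjugates, each m > 0
    term contributes twice to the sum over m = -l..l.
    """
    lmax = u_hat.shape[0] - 1
    P = np.zeros(lmax + 1)
    for l in range(lmax + 1):
        mags = np.abs(u_hat[l, : l + 1]) ** 2
        P[l] = (mags[0] + 2.0 * mags[1:].sum()) / (2 * l + 1)
    return P

# Example: energy in the mean (l=0) and one l=1 sectoral mode.
u_hat = np.zeros((3, 3), dtype=complex)
u_hat[0, 0] = 2.0
u_hat[1, 1] = 1.0
P = power_spectrum(u_hat)  # → [4.0, 2/3, 0.0]
```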

2. Probabilistic Ensemble Forecasting and Calibration

FCN3 employs a probabilistic hidden-Markov ensemble model, predicting next states as:

$$u_{n+1} = F_\theta(u_n, z_n, t_n),$$

where $z_n$ represents latent noise drawn from a mixture of spherical diffusion processes:

$$z_{n+1}(x) = \phi\, z_n(x) + \sum_{\ell,m} \sigma_\ell\, \eta_\ell^m\, Y_\ell^m(x),$$

with $\eta_\ell^m \sim \mathcal{N}(0,1)$, $\phi = e^{-\lambda}$, and $\sigma_\ell \propto e^{-k_T\,\ell(\ell+1)/2}$.
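In coefficient space this spherical diffusion reduces to an independent AR(1) process per harmonic coefficient, with innovation amplitude $\sigma_\ell$ decaying in degree so that the sampled noise fields are spatially smooth. A minimal sketch; `lam` and `k_T` are illustrative values, not the paper's tuned constants:

```python
import numpy as np

def step_noise(z_hat, lam=0.1, k_T=0.05, rng=None):
    """One AR(1) update of the latent spherical noise in coefficient space.

    z_hat[l, m] are the spherical-harmonic coefficients of z_n. The update
    z_{n+1} = phi * z_n + sigma_l * eta uses phi = exp(-lam) for temporal
    correlation and sigma_l = exp(-k_T * l(l+1)/2) so that innovation
    power decays with degree (smooth fields).
    """
    rng = np.random.default_rng() if rng is None else rng
    lmax = z_hat.shape[0] - 1
    l = np.arange(lmax + 1)
    phi = np.exp(-lam)
    sigma = np.exp(-k_T * l * (l + 1) / 2.0)
    eta = rng.standard_normal(z_hat.shape) + 1j * rng.standard_normal(z_hat.shape)
    return phi * z_hat + sigma[:, None] * eta

# Example: advance the noise state one step from rest.
z0 = np.zeros((4, 4), dtype=complex)
z1 = step_noise(z0, rng=np.random.default_rng(1))
```

Each ensemble member carries its own independent noise trajectory, so the same update applied per member yields the stochastic forcing behind the hidden-Markov ensemble.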

Ensembles are generated by sampling $z_n$ independently for each member $e = 1,\ldots,N_\text{ens}$, yielding $\{u_{n+1,e}\}_{e=1}^{N_\text{ens}}$ that model $p(u_{n+1} \mid u_n)$. Calibration is assessed via ensemble mean, variance, spread–skill ratio, rank histograms, and the Continuous Ranked Probability Score (CRPS):

$$\text{CRPS}(F,y) = \int_{-\infty}^{\infty} \left[F(z)-\mathbb{1}_{y\leq z}\right]^2 dz.$$
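For a finite ensemble, this integral is commonly evaluated through the equivalent kernel identity $\text{CRPS}(F,y) = \mathbb{E}|X-y| - \tfrac{1}{2}\mathbb{E}|X-X'|$, where $X, X'$ are independent draws from the ensemble's empirical distribution. A minimal sketch:

```python
import numpy as np

def crps_ensemble(members: np.ndarray, y: float) -> float:
    """Empirical CRPS of an ensemble against a scalar observation y.

    Uses CRPS(F, y) = E|X - y| - 0.5 * E|X - X'|, where F is the
    empirical CDF of the ensemble; this equals the integral form above.
    """
    x = np.asarray(members, dtype=float)
    term1 = np.abs(x - y).mean()                      # E|X - y|
    term2 = np.abs(x[:, None] - x[None, :]).mean()    # E|X - X'|
    return term1 - 0.5 * term2

# A perfectly sharp, accurate ensemble scores 0; spread adds cost.
print(crps_ensemble(np.array([0.0, 1.0]), 0.0))  # 0.25
```

CRPS reduces to the mean absolute error for a single deterministic member, which is why it serves as a fair common yardstick between ensemble and deterministic forecasts.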

FCN3 achieves near-unity spread–skill ratios and flat rank histograms over extended periods, with CRPS matching or outperforming state-of-the-art ML and NWP ensemble systems.

3. Scalable Distributed Training Paradigm

Training involves deterministic pretraining (MSE loss on $u_{n+1}$), then ensemble training minimizing a composite loss combining pointwise CRPS and multiscale spectral CRPS (to enforce correct spatial power spectra). FCN3 introduces a hybrid parallelism strategy:

  • Model parallelism: domain decomposition along latitude; each GPU processes a latitude slab; distributed SHTs and DISCO ops.
  • Data parallelism: across ensemble members and batch samples; gradient sync via MPI all-reduce (data-parallel) and slabwise communication (model-parallel).
  • Implementation: all operations are defined in the Makani framework (torch-harmonics), enabling synchronous, linearly scaling training on up to 2048 H100 GPUs, with training times of 78 h (first stage, 1024 H100), 15 h (second stage, 512 A100), and 8 h (finetuning, 256 H100).
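The latitude-wise domain decomposition can be illustrated with a small helper that assigns contiguous latitude slabs to ranks (an illustrative sketch of the partitioning idea only, not the Makani implementation, which also handles the distributed SHT and DISCO communication):

```python
def latitude_slabs(nlat: int, world_size: int):
    """Partition nlat latitude rows into contiguous slabs, one per GPU rank.

    Rows are split as evenly as possible; the first (nlat % world_size)
    ranks take one extra row. Returns half-open (start, end) index pairs.
    """
    base, extra = divmod(nlat, world_size)
    slabs, start = [], 0
    for rank in range(world_size):
        size = base + (1 if rank < extra else 0)
        slabs.append((start, start + size))
        start += size
    return slabs

# Example: the 721-row 0.25° grid split across 4 model-parallel ranks.
slabs = latitude_slabs(721, 4)
```

Each rank then holds only its slab of every field, while halo or transpose communication stitches the slabs together inside the distributed spherical transforms.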

4. Forecast Skill, Spectral Fidelity, and Robustness

FCN3 achieves high quantitative skill in year-long 2020 tests at 12-h lead-time resolution. Its 50-member ensemble:

  • CRPS & RMSE: Outperforms ECMWF IFS-ENS on $>90\%$ of channels and lead times; matches GenCast’s CRPS at double the temporal resolution.
  • Calibration: Consistently flat rank histograms; spread–skill ratios near unity.
  • Spectral performance: Preserves spherical-harmonic power-law slopes ($-3$ at synoptic scales, $-5/3$ at mesoscales) with $<\pm20\%$ relative error up to $\ell_\text{max}$.
  • Rollout stability: Extended forecasts up to 60 days exhibit no large-scale blowups; spectral slopes remain invariant.
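The quoted power-law slopes can be checked with a simple log–log regression of $P(\ell)$ over a degree band. A sketch of such a diagnostic (illustrative, not the paper's exact evaluation code):

```python
import numpy as np

def spectral_slope(P: np.ndarray, l_lo: int, l_hi: int) -> float:
    """Least-squares slope of log P(l) versus log l over [l_lo, l_hi].

    A slope near -3 over synoptic degrees, or -5/3 over mesoscale
    degrees, indicates the expected atmospheric power-law behavior.
    """
    l = np.arange(l_lo, l_hi + 1)
    slope, _ = np.polyfit(np.log(l), np.log(P[l_lo : l_hi + 1]), 1)
    return slope

# Sanity check on a synthetic P(l) = l^{-3} spectrum (P[0] is unused):
l = np.arange(1, 200)
P = np.concatenate(([1.0], l.astype(float) ** -3))
print(spectral_slope(P, 10, 100))  # ≈ -3.0
```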

Case studies (e.g., Storm Dennis, February 2020) confirm accurate storm track, amplitude, and spectra for up to 30 days.

FCN3's robustness to initialization from ML-based data assimilation schemes, such as HealDA, is quantitatively validated (Gupta et al., 25 Jan 2026). When initialized with HealDA rather than ERA5 analyses:

  • FCN3 loses only $\sim$12–18 h of effective lead time in forecast skill.
  • The forecast error-growth exponent $\lambda$ remains unchanged ($\sim 0.67$–$0.69\,\text{day}^{-1}$).
  • The penalty stems mainly from larger large-scale ($\ell \lesssim 30$) initial errors in HealDA; small-scale variance is rapidly regenerated.
  • FCN3 is insensitive to modest initial-condition grid smoothing, observation timing, and sensor dropout (shifts of only a few hours in apparent skill).
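The error-growth exponent $\lambda$ quoted above comes from fitting $\text{RMSE}(t) \approx \text{RMSE}(0)\, e^{\lambda t}$ in the pre-saturation regime. A minimal sketch of that fit (illustrative; the quoted $\sim$0.67–0.69 day$^{-1}$ values are from the paper, not this code):

```python
import numpy as np

def growth_exponent(lead_days: np.ndarray, rmse: np.ndarray) -> float:
    """Fit the exponential error-growth rate lambda (per day) via
    log-linear least squares on RMSE(t) ~ RMSE(0) * exp(lambda * t).

    Valid only before error saturation, when growth is still exponential.
    """
    lam, _ = np.polyfit(lead_days, np.log(rmse), 1)
    return lam

# Synthetic pre-saturation error curve with lambda = 0.68 / day:
t = np.array([0.5, 1.0, 1.5, 2.0, 3.0])
rmse = 20.0 * np.exp(0.68 * t)
print(growth_exponent(t, rmse))  # ≈ 0.68
```

An unchanged $\lambda$ under HealDA initialization means the initial-condition source shifts the error curve horizontally without altering the underlying dynamical instability.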

Table 1. FCN3 RMSE by initialization source; each cell lists Z500 (m²/s²) / T850 (K) / U500 (m/s).

| Init   | IC RMSE          | RMSE @ 1 day     | RMSE @ 5 days     |
|--------|------------------|------------------|-------------------|
| ERA5   | 25 / 0.34 / 0.74 | 47 / 0.77 / 1.46 | 280 / 1.62 / 4.22 |
| HealDA | 47 / 0.70 / 1.68 | 67 / 0.78 / 1.98 | 325 / 1.76 / 4.71 |

The $<24$ h lead-time penalty with HealDA initialization suggests rapid error saturation is dominated by large-scale analysis errors, with no impact on the dynamical instability exponent.

5. Computational Efficiency and Large-Scale Applicability

FCN3 is computationally efficient, producing a 90-day, 6-hourly global forecast at 0.25° resolution for 50 ensemble members in $<20$ s on a single NVIDIA H100 GPU—about $8\times$ faster than GenCast and $60\times$ faster than IFS-ENS. Plug-and-play initialization with ML DA systems such as HealDA can be achieved in $\lesssim 1$ s per analysis (Gupta et al., 25 Jan 2026).

The low inference cost, scalability to large ensembles (1000+ members), and generation of well-calibrated probabilistic outputs enable applications in rare-event risk assessment, subseasonal and in-situ forecasting, and real-time ensemble updates. Open-source frameworks (Makani, torch-harmonics) support reproducibility and community involvement.

A plausible implication is that further improvements in ML DA for weather forecasting will require reduction of large-scale analysis error—potentially achievable through enriched priors, expanded datasets, or novel network regularization targeting synoptic structures.

6. Connections to Data Assimilation and ML Forecast Ecosystem

State-of-the-art ML DA systems such as HealDA produce initial atmospheric states using only satellite and conventional observations, without reliance on NWP forecast backgrounds. FCN3 can be initialized by these analyses with only modest lead-time skill loss—a horizontal shift of $\sim$12–18 h relative to NWP-based initial states. The error-growth rate and robustness to smoothing indicate FCN3’s insensitivity to IC details, provided that large-scale structural errors are minimized.

Routine forecast verification demonstrates that scoring protocols (cycle timings, grid resolutions, sensor outages) can shift apparent skill by 3–24 h, underscoring the importance of consistent evaluation methodologies for ML weather models. FCN3’s compatibility with ML DA systems untethers ensemble prediction from expensive legacy infrastructure, reducing operational barriers for global ML-driven weather prediction (Gupta et al., 25 Jan 2026).

7. Future Directions and Implications

Advances in FCN3 and its ecosystem point toward future research on further geometric model refinement, improved spectral regularization, and advanced methods for mitigating large-scale initialization errors. The capacity for large ensembles, rapid inference, and accurate spectral representation make FCN3 a strong candidate for operational meteorology and climate systems. Continued progress in ML DA and further open-source development are expected to strengthen the reliability and utility of purely data-driven global weather models (Bonev et al., 16 Jul 2025, Gupta et al., 25 Jan 2026).
