
Accelerating Large-Scale-Structure data analyses by emulating Boltzmann solvers and Lagrangian Perturbation Theory

Published 29 Apr 2021 in astro-ph.CO (arXiv:2104.14568v3)

Abstract: The linear matter power spectrum is an essential ingredient in all theoretical models for interpreting large-scale-structure observables. Although Boltzmann codes such as CLASS or CAMB are very efficient at computing the linear spectrum, the analysis of data usually requires $10^4$-$10^6$ evaluations, which can make this task the most computationally expensive aspect of data analysis. Here, we address this problem by building a neural network emulator that provides the linear-theory (total and cold) matter power spectrum in about one millisecond with 0.2% (0.5%) accuracy over redshifts $z \le 3$ ($z \le 9$) and scales $10^{-4} \le k \,[h\,{\rm Mpc}^{-1}] < 50$. We train this emulator with more than 200,000 measurements spanning a broad cosmological parameter space that includes massive neutrinos and dynamical dark energy. We show that the parameter range and accuracy of our emulator are sufficient to obtain unbiased cosmological constraints in the analysis of a Euclid-like weak lensing survey. Complementing this emulator, we train 15 other emulators for the cross-spectra of various linear fields in Eulerian space, as predicted by second-order Lagrangian Perturbation Theory, which can be used to accelerate perturbative bias descriptions of galaxy clustering. Our emulators are specially designed to be used in combination with emulators for the nonlinear matter power spectrum and for baryonic effects, all of which are publicly available at http://www.dipc.org/bacco.
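To make the emulation strategy concrete, here is a minimal sketch of the general approach the abstract describes: compress a grid of log-power spectra with PCA, then train a small feed-forward network to map cosmological parameters to the PCA coefficients, so that an evaluation reduces to one forward pass plus an inverse transform. The random training data, network size, component count, and parameter dimensionality below are illustrative placeholders, not the paper's actual configuration; real training spectra would come from CLASS or CAMB, and the authors' production emulators are available at http://www.dipc.org/bacco.

```python
# Sketch of a PCA + neural-network power-spectrum emulator.
# All sizes and the synthetic "spectra" are stand-ins for real
# CLASS/CAMB outputs and for the paper's actual architecture.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

n_train, n_params, n_k = 2000, 8, 400
# Normalized cosmological parameters (e.g. omega_m, omega_b, h, n_s, ...).
params = rng.uniform(0.0, 1.0, size=(n_train, n_params))
# Placeholder for log P(k) evaluated on a fixed k-grid per cosmology.
log_pk = rng.normal(0.0, 1.0, size=(n_train, n_k))

# Compress each spectrum to a handful of principal components.
pca = PCA(n_components=20)
coeffs = pca.fit_transform(log_pk)

# Fit a small feed-forward network from parameters to PCA coefficients.
net = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=500)
net.fit(params, coeffs)

# Emulation: one network forward pass plus a PCA inverse transform,
# which is why millisecond-scale evaluation is achievable.
log_pk_pred = pca.inverse_transform(net.predict(params[:1]))
```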
