Abstract:
Context. Standard cosmological analyses, which are based on two-point statistics, fail to extract all of the information embedded in cosmological data. This limits our ability to constrain cosmological parameters precisely. Driven by the need for modern analysis techniques that match the power of upcoming telescopes, recent years have seen a paradigm shift from analytical likelihood-based inference to simulation-based inference. However, such methods require a large number of costly simulations. Aims. We focus on full-field inference, which is considered the optimal form of inference because it recovers cosmological constraints from simulations without any loss of cosmological information. Our objective is to review and benchmark several ways of conducting full-field inference in order to gain insight into the number of simulations each method requires. Specifically, we distinguish between explicit inference methods, which require an explicit form of the likelihood that can be evaluated and hence sampled with sampling schemes, and implicit inference methods, which can be used when the likelihood is only available implicitly through simulations. Moreover, explicit full-field inference crucially relies on a differentiable forward model. Similarly, we discuss the advantages of differentiable forward models for implicit full-field inference. Methods. We used the sbi_lens package (https://github.com/DifferentiableUniverseInitiative/sbi_lens), which provides a fast and differentiable log-normal forward model, to generate convergence maps mimicking a simplified version of LSST Y10 quality. While the analyses use a simplified forward model, the goal is to illustrate key methodologies and their implications. This fast forward model enables us to compare explicit and implicit full-field inference with and without gradients. The former is achieved by sampling the forward model with the No-U-Turn Sampler (NUTS).
The latter starts by compressing the data into sufficient statistics and then uses the neural likelihood estimation (NLE) algorithm, as well as its gradient-augmented variant (∂NLE), to learn the likelihood distribution and subsequently sample the posterior distribution. Results. We performed a full-field analysis on LSST Y10-like weak-lensing simulated log-normal convergence maps, constraining (Ωc, Ωb, σ8, h0, ns, w0). We demonstrate that explicit and implicit full-field inference yield consistent constraints. Explicit full-field inference requires 630 000 simulations with our particular sampler, corresponding to 400 independent samples. Implicit full-field inference requires at most 101 000 simulations, split into 100 000 simulations to build neural-based sufficient statistics (this number of simulations is not fine-tuned) and 1000 simulations to perform the inference itself. Additionally, while differentiability is very useful for explicit full-field inference, we show that, in this specific case, our way of exploiting the gradients does not significantly help implicit full-field inference.
Funding text:
This paper has undergone internal review in the LSST Dark Energy Science Collaboration. The authors would like to express their sincere gratitude to the internal reviewers, Alan Heavens and Adrian Bayer, for their valuable feedback, insightful comments, and suggestions, which helped to significantly improve the quality of this work. They also extend their thanks to Benjamin Remy for his contributions through countless discussions and helpful comments on the paper. Additionally, they appreciate the constructive feedback provided by Martin Kilbinger and Sacha Guerrini. JZ led the project, contributed to brainstorming, developed the code, and wrote the paper. DL contributed to brainstorming and code development, particularly in developing the forward model, and reviewed the paper. FL initiated the project and contributed through mentoring, brainstorming, code development, and paper reviews. AB contributed mentoring, brainstorming, code development, and paper reviews. GL and EA provided mentoring, participated in brainstorming, and contributed to reviewing the paper. AB contributed to the review of the paper and participated in brainstorming the metric used for explicit inference. The DESC acknowledges ongoing support from the Institut National de Physique Nucléaire et de Physique des Particules in France; the Science & Technology Facilities Council in the United Kingdom; and the Department of Energy, the National Science Foundation, and the LSST Corporation in the United States. DESC uses resources of the IN2P3 Computing Center (CC-IN2P3–Lyon/Villeurbanne - France) funded by the Centre National de la Recherche Scientifique; the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No.
DE-AC02-05CH11231; STFC DiRAC HPC Facilities, funded by UK BEIS National E-infrastructure capital grants; and the UK particle physics grid, supported by the GridPP Collaboration. This work was performed in part under DOE Contract DE-AC02-76SF00515. This work was supported by the Data Intelligence Institute of Paris (diiP), and IdEx Université de Paris (ANR-18-IDEX-0001). This work was granted access to the HPC/AI resources of IDRIS under the allocations 2023-AD010414029 and AD011014029R1 made by GENCI. This work used the following packages: NumPy (Harris et al. 2020), NumPyro (Phan et al. 2019), JAX (Bradbury et al. 2018), Haiku (Hennigan et al. 2020), Optax (DeepMind et al. 2020), JAX-COSMO (Campagne et al. 2023), GetDist (Lewis 2019), Matplotlib (Hunter 2007), CosMomentum (Friedrich et al. 2020), scikit-learn (Pedregosa et al. 2011), TensorFlow (Abadi et al. 2016), TensorFlow Probability (Dillon et al. 2017), sbi (Tejero-Cantero et al. 2020) and sbibm (Lueckmann et al. 2021).
Scopus citations® (without self-citations): 3