Generative neural networks for characteristic functions

Published 9 Jan 2024 in stat.ML, cs.LG, and stat.ME | arXiv:2401.04778v2

Abstract: We provide an algorithm to simulate from a (multivariate) characteristic function that is accessible only as a black box. The method is based on a generative neural network whose loss function exploits a specific representation of the Maximum Mean Discrepancy (MMD) metric to incorporate the targeted characteristic function directly. The algorithm is universal in the sense that it is independent of the dimension and requires no assumptions on the given characteristic function. Furthermore, finite-sample guarantees on the approximation quality in terms of the MMD metric are derived. The method is illustrated in a simulation study.
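
The loss rests on a spectral representation of the MMD: for a translation-invariant kernel with spectral measure Λ, MMD²(P, Q) = ∫ |φ_P(t) − φ_Q(t)|² dΛ(t), so the black-box target characteristic function φ_P can enter the training objective directly, compared against the empirical characteristic function of the generator's samples at randomly drawn frequencies. The PyTorch sketch below only illustrates this idea; it is not the paper's implementation, and the Gaussian kernel, the toy target characteristic function target_cf, and the small MLP generator are assumptions made for the example.

import torch
import torch.nn as nn

# Toy stand-in for the black-box target characteristic function:
# a standard bivariate Gaussian, phi(t) = exp(-||t||^2 / 2).
# In the intended use case this would be any black-box routine
# returning Re(phi(t)) and Im(phi(t)) for a batch of frequencies t.
def target_cf(t):                                   # t: (m, d)
    return torch.exp(-0.5 * (t ** 2).sum(dim=1)), torch.zeros(t.shape[0])

class Generator(nn.Module):
    """Small MLP mapping latent Gaussian noise to d-dimensional samples."""
    def __init__(self, latent_dim=8, d=2, hidden=64):
        super().__init__()
        self.latent_dim = latent_dim
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, d),
        )
    def forward(self, n):
        return self.net(torch.randn(n, self.latent_dim))

def mmd_cf_loss(samples, n_freq=256, bandwidth=1.0):
    """Monte-Carlo estimate of the squared MMD via characteristic functions:
    draw frequencies t from the spectral measure of a Gaussian kernel and
    average |phi_target(t) - phi_empirical(t)|^2 over them."""
    n, d = samples.shape
    t = torch.randn(n_freq, d) / bandwidth          # spectral samples of the kernel
    proj = samples @ t.T                            # inner products <x_i, t_j>, shape (n, n_freq)
    ecf_re = torch.cos(proj).mean(dim=0)            # empirical characteristic function
    ecf_im = torch.sin(proj).mean(dim=0)
    tgt_re, tgt_im = target_cf(t)
    return ((ecf_re - tgt_re) ** 2 + (ecf_im - tgt_im) ** 2).mean()

gen = Generator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = mmd_cf_loss(gen(512))
    loss.backward()
    opt.step()

Resampling the frequencies at every step keeps the objective a stochastic estimate of the squared MMD between the target and the generator's empirical distribution; the bandwidth acts as the kernel's length scale and would typically be tuned, or several values mixed, to match the scale of the target.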
