Diffusion Models for Generating Ballistic Spacecraft Trajectories
Abstract: Generative modeling has drawn significant attention in creative and scientific data generation tasks. Score-based diffusion models, a class of generative model that learns to iteratively denoise data, have achieved state-of-the-art results on tasks such as image generation, multivariate time series forecasting, and robotic trajectory planning. Using score-based diffusion models, this work implements a novel generative framework for producing ballistic transfers from Earth to Mars. We further analyze the model's ability to learn the characteristics of the original dataset and to produce transfers that follow the underlying dynamics. Ablation studies determine how model performance varies with model size and trajectory temporal resolution. In addition, a performance benchmark is designed to assess the generative model's usefulness for trajectory design, to enable comparisons between models, and to lay the groundwork for evaluating generative models for trajectory design beyond diffusion. The results of this analysis showcase several useful properties of diffusion models that, taken together, can enable a future system for generative trajectory design powered by diffusion models.
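To make the iterative denoising procedure the abstract refers to concrete, the following is a minimal sketch of annealed Langevin dynamics sampling in the style of score-based generative models (Song and Ermon). It is not the paper's implementation: the trajectory dataset is replaced by a toy 1-D Gaussian whose perturbed score is known in closed form, standing in for the learned score network, and all parameter values (the data mean and spread, the noise schedule, step sizes) are illustrative assumptions.

```python
import numpy as np

MU, DATA_STD = 2.0, 0.5  # parameters of the toy "data" distribution (assumed)

def score(x, sigma):
    # Exact score of the toy data distribution perturbed with noise level sigma:
    # grad_x log N(x; MU, DATA_STD^2 + sigma^2). A diffusion model would
    # approximate this with a trained neural network s_theta(x, sigma).
    return (MU - x) / (DATA_STD**2 + sigma**2)

def annealed_langevin_sample(n_samples=1000, steps=100, eps=2e-5, seed=0):
    rng = np.random.default_rng(seed)
    sigmas = np.geomspace(1.0, 0.01, 10)            # decreasing noise levels
    x = rng.normal(0.0, sigmas[0], size=n_samples)  # start from pure noise
    for sigma in sigmas:
        alpha = eps * (sigma / sigmas[-1]) ** 2     # per-level step size
        for _ in range(steps):
            z = rng.normal(size=n_samples)
            # Langevin update: drift along the score, plus injected noise
            x = x + 0.5 * alpha * score(x, sigma) + np.sqrt(alpha) * z
    return x

samples = annealed_langevin_sample()
print(f"mean={samples.mean():.2f}, std={samples.std():.2f}")
```

Starting from pure noise, the samples drift toward the data distribution as the noise level anneals downward; the final empirical mean and standard deviation land close to the toy distribution's (2.0 and 0.5). In the paper's setting, each sample would instead be a full discretized Earth-to-Mars transfer denoised jointly across its time steps.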