tLaSDI: Thermodynamics-informed latent space dynamics identification
Abstract: We propose tLaSDI, a latent space dynamics identification method that embeds the first and second laws of thermodynamics. The latent variables are learned through an autoencoder serving as a nonlinear dimension-reduction model. The latent dynamics are modeled by a neural network that exactly preserves the structure required by the thermodynamic laws through the GENERIC formalism. An abstract error estimate is established, which yields a new loss formulation involving the Jacobian of the autoencoder. The autoencoder and the latent dynamics are trained simultaneously to minimize this loss. Computational examples demonstrate the effectiveness of tLaSDI, which exhibits robust generalization, even in extrapolation. In addition, an intriguing empirical correlation is observed between a quantity computed by tLaSDI in the latent space and the behavior of the full-state solution.
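The structure the abstract refers to is the GENERIC formalism, which evolves a state z by dz/dt = L(z)∇E(z) + M(z)∇S(z), with L skew-symmetric, M symmetric positive semidefinite, and the degeneracy conditions L∇S = 0 and M∇E = 0; together these enforce energy conservation (first law) and entropy growth (second law). As a minimal illustration, here is a NumPy sketch of a hand-built GENERIC system in R³. The matrices and potentials below are illustrative assumptions chosen for simplicity; in tLaSDI these objects are learned by neural networks rather than fixed by hand.

```python
import numpy as np

# Hand-built GENERIC system: dz/dt = L grad_E(z) + M grad_S(z).
L = np.array([[0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])   # skew-symmetric (reversible part)
M = np.diag([0.0, 0.0, 0.5])      # symmetric PSD (irreversible part)

def grad_E(z):
    # Energy E(z) = (z1^2 + z2^2)/2, an oscillator energy.
    return np.array([z[0], z[1], 0.0])

def grad_S(z):
    # Entropy S(z) = z3, an entropy-like latent variable.
    return np.array([0.0, 0.0, 1.0])

def generic_rhs(z):
    return L @ grad_E(z) + M @ grad_S(z)

# Degeneracy holds by construction: L @ grad_S = 0 and M @ grad_E = 0,
# so dE/dt = grad_E . rhs = 0 and dS/dt = grad_S . rhs >= 0 at every state.
rng = np.random.default_rng(0)
for _ in range(5):
    z = rng.standard_normal(3)
    rhs = generic_rhs(z)
    assert abs(grad_E(z) @ rhs) < 1e-12   # first law: energy conserved
    assert grad_S(z) @ rhs >= 0.0         # second law: entropy nondecreasing
```

The two assertions are exactly the thermodynamic consistency that tLaSDI's latent dynamics preserve by construction, rather than approximately through a penalty term.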
Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Xiao, D., Fang, F., Buchan, A.G., Pain, C.C., Navon, I.M., Du, J., Hu, G.: Non-linear model reduction for the Navier–Stokes equations using residual DEIM method. Journal of Computational Physics 263, 1–18 (2014) Burkardt et al. [2006] Burkardt, J., Gunzburger, M., Lee, H.-C.: POD and CVT-based reduced-order modeling of Navier–Stokes flows. Computer Methods in Applied Mechanics and Engineering 196(1-3), 337–355 (2006) Choi and Carlberg [2019] Choi, Y., Carlberg, K.: Space–time least-squares Petrov–Galerkin projection for nonlinear model reduction. SIAM Journal on Scientific Computing 41(1), 26–58 (2019) Choi et al. [2020] Choi, Y., Coombs, D., Anderson, R.: SNS: a solution-based nonlinear subspace method for time-dependent model order reduction. SIAM Journal on Scientific Computing 42(2), 1116–1146 (2020) Carlberg et al. [2018] Carlberg, K., Choi, Y., Sargsyan, S.: Conservative model reduction for finite-volume models. Journal of Computational Physics 371, 280–314 (2018) Copeland et al. [2022] Copeland, D.M., Cheung, S.W., Huynh, K., Choi, Y.: Reduced order models for lagrangian hydrodynamics. Computer Methods in Applied Mechanics and Engineering 388, 114259 (2022) Cheung et al. [2022] Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. 
Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. 
[2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. 
Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. 
Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Burkardt, J., Gunzburger, M., Lee, H.-C.: POD and CVT-based reduced-order modeling of Navier–Stokes flows. Computer Methods in Applied Mechanics and Engineering 196(1-3), 337–355 (2006) Choi and Carlberg [2019] Choi, Y., Carlberg, K.: Space–time least-squares Petrov–Galerkin projection for nonlinear model reduction. SIAM Journal on Scientific Computing 41(1), 26–58 (2019) Choi et al. [2020] Choi, Y., Coombs, D., Anderson, R.: SNS: a solution-based nonlinear subspace method for time-dependent model order reduction. SIAM Journal on Scientific Computing 42(2), 1116–1146 (2020) Carlberg et al. [2018] Carlberg, K., Choi, Y., Sargsyan, S.: Conservative model reduction for finite-volume models. Journal of Computational Physics 371, 280–314 (2018) Copeland et al. [2022] Copeland, D.M., Cheung, S.W., Huynh, K., Choi, Y.: Reduced order models for lagrangian hydrodynamics. Computer Methods in Applied Mechanics and Engineering 388, 114259 (2022) Cheung et al. [2022] Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. 
[2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. 
Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. 
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Choi, Y., Carlberg, K.: Space–time least-squares Petrov–Galerkin projection for nonlinear model reduction. 
SIAM Journal on Scientific Computing 41(1), 26–58 (2019) Choi et al. [2020] Choi, Y., Coombs, D., Anderson, R.: SNS: a solution-based nonlinear subspace method for time-dependent model order reduction. SIAM Journal on Scientific Computing 42(2), 1116–1146 (2020) Carlberg et al. [2018] Carlberg, K., Choi, Y., Sargsyan, S.: Conservative model reduction for finite-volume models. Journal of Computational Physics 371, 280–314 (2018) Copeland et al. [2022] Copeland, D.M., Cheung, S.W., Huynh, K., Choi, Y.: Reduced order models for lagrangian hydrodynamics. Computer Methods in Applied Mechanics and Engineering 388, 114259 (2022) Cheung et al. [2022] Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. 
Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. 
[2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: HyperNetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019) Bradbury et al.
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011) Le Bris and Lelièvre [2009] Le Bris, C., Lelièvre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
[2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. 
Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. 
illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. 
John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. 
Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. 
Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. 
Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. 
[2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. 
[2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. 
[2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. 
Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
[2022] Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local Lagrangian reduced-order modeling for Rayleigh-Taylor instability by solution manifold decomposition.
arXiv preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an ADI implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al.
[2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. arXiv preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al.
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: HyperNetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al.
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009) Virtanen et al.
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Carlberg, K., Choi, Y., Sargsyan, S.: Conservative model reduction for finite-volume models. Journal of Computational Physics 371, 280–314 (2018) Copeland et al. [2022] Copeland, D.M., Cheung, S.W., Huynh, K., Choi, Y.: Reduced order models for lagrangian hydrodynamics. Computer Methods in Applied Mechanics and Engineering 388, 114259 (2022) Cheung et al. [2022] Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. 
Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. 
[2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Copeland, D.M., Cheung, S.W., Huynh, K., Choi, Y.: Reduced order models for lagrangian hydrodynamics. Computer Methods in Applied Mechanics and Engineering 388, 114259 (2022) Cheung et al. [2022] Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. 
Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. 
development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. 
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. 
Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. 
Physical Review Fluids 6(11), 114402 (2021)
Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
Schroeder [1999] Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009)
Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014)
Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an ADI implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013)
Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021)
Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics 656, 5–28 (2010)
Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022)
He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023)
Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024)
Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020)
Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021)
Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. arXiv preprint arXiv:2011.07727 (2020)
Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022)
Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997)
Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997)
Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005)
DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992)
Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction.
Physical Review E 91(3), 032147 (2015)
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. 
Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. 
Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. 
Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. 
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. 
[2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. 
Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
Schroeder [1999] Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
Bris and Lelievre [2009] Le Bris, C., Lelièvre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009)
Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. arXiv preprint arXiv:2011.07727 (2020)
Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022)
Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997)
Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997)
Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005)
DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992)
Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
Champion et al.
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
[2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. 
Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. 
Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. 
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. 
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture Notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009)
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Raissi, M., Karniadakis, G.E.: Hidden physics models: Machine learning of nonlinear partial differential equations. Journal of Computational Physics 357, 125–141 (2018) Wu and Xiu [2020] Wu, K., Xiu, D.: Data-driven deep learning of partial differential equations in modal space. Journal of Computational Physics 408, 109307 (2020) Xiao et al. [2014] Xiao, D., Fang, F., Buchan, A.G., Pain, C.C., Navon, I.M., Du, J., Hu, G.: Non-linear model reduction for the Navier–Stokes equations using residual DEIM method. Journal of Computational Physics 263, 1–18 (2014) Burkardt et al. [2006] Burkardt, J., Gunzburger, M., Lee, H.-C.: POD and CVT-based reduced-order modeling of Navier–Stokes flows. Computer Methods in Applied Mechanics and Engineering 196(1-3), 337–355 (2006) Choi and Carlberg [2019] Choi, Y., Carlberg, K.: Space–time least-squares Petrov–Galerkin projection for nonlinear model reduction. SIAM Journal on Scientific Computing 41(1), 26–58 (2019) Choi et al. [2020] Choi, Y., Coombs, D., Anderson, R.: SNS: a solution-based nonlinear subspace method for time-dependent model order reduction. SIAM Journal on Scientific Computing 42(2), 1116–1146 (2020) Carlberg et al. [2018] Carlberg, K., Choi, Y., Sargsyan, S.: Conservative model reduction for finite-volume models. Journal of Computational Physics 371, 280–314 (2018) Copeland et al. [2022] Copeland, D.M., Cheung, S.W., Huynh, K., Choi, Y.: Reduced order models for lagrangian hydrodynamics. Computer Methods in Applied Mechanics and Engineering 388, 114259 (2022) Cheung et al. [2022] Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. 
Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. 
[2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. 
Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. 
Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Wu, K., Xiu, D.: Data-driven deep learning of partial differential equations in modal space. Journal of Computational Physics 408, 109307 (2020) Xiao et al. [2014] Xiao, D., Fang, F., Buchan, A.G., Pain, C.C., Navon, I.M., Du, J., Hu, G.: Non-linear model reduction for the Navier–Stokes equations using residual DEIM method. Journal of Computational Physics 263, 1–18 (2014) Burkardt et al. [2006] Burkardt, J., Gunzburger, M., Lee, H.-C.: POD and CVT-based reduced-order modeling of Navier–Stokes flows. Computer Methods in Applied Mechanics and Engineering 196(1-3), 337–355 (2006) Choi and Carlberg [2019] Choi, Y., Carlberg, K.: Space–time least-squares Petrov–Galerkin projection for nonlinear model reduction. SIAM Journal on Scientific Computing 41(1), 26–58 (2019) Choi et al. [2020] Choi, Y., Coombs, D., Anderson, R.: SNS: a solution-based nonlinear subspace method for time-dependent model order reduction. SIAM Journal on Scientific Computing 42(2), 1116–1146 (2020) Carlberg et al. [2018] Carlberg, K., Choi, Y., Sargsyan, S.: Conservative model reduction for finite-volume models. Journal of Computational Physics 371, 280–314 (2018) Copeland et al. [2022] Copeland, D.M., Cheung, S.W., Huynh, K., Choi, Y.: Reduced order models for lagrangian hydrodynamics. Computer Methods in Applied Mechanics and Engineering 388, 114259 (2022) Cheung et al. [2022] Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. 
Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. 
[2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. 
Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. 
Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Xiao, D., Fang, F., Buchan, A.G., Pain, C.C., Navon, I.M., Du, J., Hu, G.: Non-linear model reduction for the Navier–Stokes equations using residual DEIM method. Journal of Computational Physics 263, 1–18 (2014) Burkardt et al. [2006] Burkardt, J., Gunzburger, M., Lee, H.-C.: POD and CVT-based reduced-order modeling of Navier–Stokes flows. Computer Methods in Applied Mechanics and Engineering 196(1-3), 337–355 (2006) Choi and Carlberg [2019] Choi, Y., Carlberg, K.: Space–time least-squares Petrov–Galerkin projection for nonlinear model reduction. SIAM Journal on Scientific Computing 41(1), 26–58 (2019) Choi et al. [2020] Choi, Y., Coombs, D., Anderson, R.: SNS: a solution-based nonlinear subspace method for time-dependent model order reduction. SIAM Journal on Scientific Computing 42(2), 1116–1146 (2020) Carlberg et al. [2018] Carlberg, K., Choi, Y., Sargsyan, S.: Conservative model reduction for finite-volume models. Journal of Computational Physics 371, 280–314 (2018) Copeland et al. [2022] Copeland, D.M., Cheung, S.W., Huynh, K., Choi, Y.: Reduced order models for lagrangian hydrodynamics. Computer Methods in Applied Mechanics and Engineering 388, 114259 (2022) Cheung et al. [2022] Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. 
Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. 
[2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. 
Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. 
Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. 
development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. 
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. 
Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. 
Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. 
In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. 
[2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. 
Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. 
Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. 
Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. 
Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural Computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: Learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics.
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011) Le Bris and Lelièvre [2009] Le Bris, C., Lelièvre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space–time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024)
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. 
[2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. 
Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. 
Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. 
illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. 
John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. 
Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. 
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. 
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. 
[2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2022] Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local Lagrangian reduced-order modeling for Rayleigh–Taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an ADI implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al.
[2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism.
Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency.
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork.
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python.
Nature Methods 17(3), 261–272 (2020)
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Carlberg, K., Choi, Y., Sargsyan, S.: Conservative model reduction for finite-volume models. Journal of Computational Physics 371, 280–314 (2018) Copeland et al. [2022] Copeland, D.M., Cheung, S.W., Huynh, K., Choi, Y.: Reduced order models for lagrangian hydrodynamics. Computer Methods in Applied Mechanics and Engineering 388, 114259 (2022) Cheung et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. 
In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. 
[2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. 
Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. 
[2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. 
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. 
[2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. 
[2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. 
Advances in Neural Information Processing Systems 5 (1992)
Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). 
Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. 
[2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. 
[2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). 
https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in Neural Information Processing Systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011) Le Bris and Lelièvre [2009] Le Bris, C., Lelièvre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: Fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. 
[2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. 
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. 
[2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. 
Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. 
Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. 
Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. 
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. 
Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. 
Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in Neural Information Processing Systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72, 1–19 (2011) Le Bris and Lelièvre [2009] Le Bris, C., Lelièvre, T.: Multiscale modelling of complex fluids: A mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: Fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders.
Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. arXiv preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005)
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). 
Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. 
[2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. 
[2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). 
https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
Schroeder [1999] Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
Le Bris and Lelièvre [2009] Le Bris, C., Lelièvre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009)
Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
- Burkardt, J., Gunzburger, M., Lee, H.-C.: POD and CVT-based reduced-order modeling of Navier–Stokes flows. Computer Methods in Applied Mechanics and Engineering 196(1-3), 337–355 (2006) Choi and Carlberg [2019] Choi, Y., Carlberg, K.: Space–time least-squares Petrov–Galerkin projection for nonlinear model reduction. SIAM Journal on Scientific Computing 41(1), 26–58 (2019) Choi et al. [2020] Choi, Y., Coombs, D., Anderson, R.: SNS: a solution-based nonlinear subspace method for time-dependent model order reduction. SIAM Journal on Scientific Computing 42(2), 1116–1146 (2020) Carlberg et al. [2018] Carlberg, K., Choi, Y., Sargsyan, S.: Conservative model reduction for finite-volume models. Journal of Computational Physics 371, 280–314 (2018) Copeland et al. [2022] Copeland, D.M., Cheung, S.W., Huynh, K., Choi, Y.: Reduced order models for lagrangian hydrodynamics. Computer Methods in Applied Mechanics and Engineering 388, 114259 (2022) Cheung et al. [2022] Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. 
[2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. 
Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Choi, Y., Carlberg, K.: Space–time least-squares Petrov–Galerkin projection for nonlinear model reduction. SIAM Journal on Scientific Computing 41(1), 26–58 (2019) Choi et al. [2020] Choi, Y., Coombs, D., Anderson, R.: SNS: a solution-based nonlinear subspace method for time-dependent model order reduction. SIAM Journal on Scientific Computing 42(2), 1116–1146 (2020) Carlberg et al. [2018] Carlberg, K., Choi, Y., Sargsyan, S.: Conservative model reduction for finite-volume models. Journal of Computational Physics 371, 280–314 (2018) Copeland et al. [2022] Copeland, D.M., Cheung, S.W., Huynh, K., Choi, Y.: Reduced order models for lagrangian hydrodynamics. 
Computer Methods in Applied Mechanics and Engineering 388, 114259 (2022) Cheung et al. [2022] Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. 
Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. 
Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. 
In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. 
[2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Choi, Y., Coombs, D., Anderson, R.: SNS: a solution-based nonlinear subspace method for time-dependent model order reduction. SIAM Journal on Scientific Computing 42(2), 1116–1146 (2020) Carlberg et al. [2018] Carlberg, K., Choi, Y., Sargsyan, S.: Conservative model reduction for finite-volume models. Journal of Computational Physics 371, 280–314 (2018) Copeland et al. [2022] Copeland, D.M., Cheung, S.W., Huynh, K., Choi, Y.: Reduced order models for lagrangian hydrodynamics. Computer Methods in Applied Mechanics and Engineering 388, 114259 (2022) Cheung et al. [2022] Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. 
Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. 
Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Carlberg, K., Choi, Y., Sargsyan, S.: Conservative model reduction for finite-volume models. Journal of Computational Physics 371, 280–314 (2018) Copeland et al. [2022] Copeland, D.M., Cheung, S.W., Huynh, K., Choi, Y.: Reduced order models for lagrangian hydrodynamics. Computer Methods in Applied Mechanics and Engineering 388, 114259 (2022) Cheung et al. 
[2022] Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. 
In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. 
Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Copeland, D.M., Cheung, S.W., Huynh, K., Choi, Y.: Reduced order models for lagrangian hydrodynamics. Computer Methods in Applied Mechanics and Engineering 388, 114259 (2022) Cheung et al. [2022] Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. 
[2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. 
Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. 
[2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. 
Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. 
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. 
[2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. 
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. 
- Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021)
- Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics 656, 5–28 (2010)
- Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022)
- He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023)
- Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024)
- Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020)
- Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021)
- Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. arXiv preprint arXiv:2011.07727 (2020)
- Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022)
- Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997)
- Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997)
- Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005)
- DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992)
- Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
- Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
- Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
- Yu, H., Tian, X., E, W., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
- Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
- Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
- Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
- Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
- Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
- Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
- Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
- Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
- Ha, D., Dai, A.M., Le, Q.V.: HyperNetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
- Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
- Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: Learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
- Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
- Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
- Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
- Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
- Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
- Le Bris, C., Lelièvre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009)
- Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. 
[2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. 
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. 
[2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. 
[2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. 
Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. 
Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. 
https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. 
In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. 
- Choi, Y., Carlberg, K.: Space–time least-squares Petrov–Galerkin projection for nonlinear model reduction. SIAM Journal on Scientific Computing 41(1), 26–58 (2019)
- Choi, Y., Coombs, D., Anderson, R.: SNS: a solution-based nonlinear subspace method for time-dependent model order reduction. SIAM Journal on Scientific Computing 42(2), 1116–1146 (2020)
- Carlberg, K., Choi, Y., Sargsyan, S.: Conservative model reduction for finite-volume models. Journal of Computational Physics 371, 280–314 (2018)
- Copeland, D.M., Cheung, S.W., Huynh, K., Choi, Y.: Reduced order models for Lagrangian hydrodynamics. Computer Methods in Applied Mechanics and Engineering 388, 114259 (2022)
- Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local Lagrangian reduced-order modeling for Rayleigh–Taylor instability by solution manifold decomposition. Preprint arXiv:2201.07335 (2022)
- Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel–Zwas finite difference discretization. Journal of Applied Mathematics (2014)
- Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an ADI implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013)
- Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space–time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021)
- Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics 656, 5–28 (2010)
- Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022)
- He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023)
- Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024)
- Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020)
- Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021)
- Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. Preprint arXiv:2011.07727 (2020)
- Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022)
- Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997)
- Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997)
- Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005)
- DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992)
- Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
- Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
- Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
- Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
- Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
- Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
- Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
- Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
- Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
- Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
- Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
- Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
- Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
- Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
- Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2023)
- Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
- Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. Preprint arXiv:1412.6980 (2014)
- Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
- Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
- Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
- Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009)
- Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Carlberg, K., Choi, Y., Sargsyan, S.: Conservative model reduction for finite-volume models. Journal of Computational Physics 371, 280–314 (2018) Copeland et al. [2022] Copeland, D.M., Cheung, S.W., Huynh, K., Choi, Y.: Reduced order models for lagrangian hydrodynamics. Computer Methods in Applied Mechanics and Engineering 388, 114259 (2022) Cheung et al. [2022] Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. 
Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. 
[2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Copeland, D.M., Cheung, S.W., Huynh, K., Choi, Y.: Reduced order models for lagrangian hydrodynamics. Computer Methods in Applied Mechanics and Engineering 388, 114259 (2022) Cheung et al. [2022] Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. 
Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. 
development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. 
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. 
Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. 
Physical Review Fluids 6(11), 114402 (2021)
Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: HyperNetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. Preprint arXiv:1412.6980 (2014)
Schroeder [1999] Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009)
Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014)
Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an ADI implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013)
Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021)
Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics 656, 5–28 (2010)
Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022)
He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023)
Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024)
Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020)
Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021)
Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. Preprint arXiv:2011.07727 (2020)
Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022)
Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997)
Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997)
Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005)
DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992)
Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction.
Physical Review E 91(3), 032147 (2015)
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. 
illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. 
John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. 
Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. 
Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. 
Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. 
[2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. 
[2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. 
[2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. 
Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
- Choi, Y., Coombs, D., Anderson, R.: SNS: a solution-based nonlinear subspace method for time-dependent model order reduction. SIAM Journal on Scientific Computing 42(2), 1116–1146 (2020) Carlberg et al. [2018] Carlberg, K., Choi, Y., Sargsyan, S.: Conservative model reduction for finite-volume models. Journal of Computational Physics 371, 280–314 (2018) Copeland et al. [2022] Copeland, D.M., Cheung, S.W., Huynh, K., Choi, Y.: Reduced order models for lagrangian hydrodynamics. Computer Methods in Applied Mechanics and Engineering 388, 114259 (2022) Cheung et al. [2022] Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. 
[2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. 
Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Carlberg, K., Choi, Y., Sargsyan, S.: Conservative model reduction for finite-volume models. Journal of Computational Physics 371, 280–314 (2018) Copeland et al. [2022] Copeland, D.M., Cheung, S.W., Huynh, K., Choi, Y.: Reduced order models for lagrangian hydrodynamics. Computer Methods in Applied Mechanics and Engineering 388, 114259 (2022) Cheung et al. [2022] Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. 
Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. 
[2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. 
Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. 
Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Copeland, D.M., Cheung, S.W., Huynh, K., Choi, Y.: Reduced order models for lagrangian hydrodynamics. Computer Methods in Applied Mechanics and Engineering 388, 114259 (2022) Cheung et al. [2022] Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. 
[2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. 
Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. 
[2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. 
Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., E, W., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: HyperNetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: Learning operator with complex target function space using the limited resources via hypernetwork. 
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011) Le Bris and Lelièvre [2009] Le Bris, C., Lelièvre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. 
Nature Methods 17(3), 261–272 (2020)
Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. 
Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. 
Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. 
Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. 
In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. 
[2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. 
Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. 
[2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. 
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
- Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
- Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
- Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
- Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: an imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
- Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
- Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
- Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
- Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009)
- Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
- Lee, K., Carlberg, K.T.: Deep conservation: a latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021)
- Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. arXiv preprint arXiv:2011.07727 (2020)
- Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022)
- Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997)
- Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997)
- Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005)
- DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992)
- Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
- Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
- Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
- Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
- Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
- Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
- Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
- Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
- Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
- Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
- Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
- Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
- Ha, D., Dai, A.M., Le, Q.V.: HyperNetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. 
[2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). 
Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. 
[2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. 
[2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). 
https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. 
Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Carlberg, K., Choi, Y., Sargsyan, S.: Conservative model reduction for finite-volume models. Journal of Computational Physics 371, 280–314 (2018) Copeland et al. [2022] Copeland, D.M., Cheung, S.W., Huynh, K., Choi, Y.: Reduced order models for lagrangian hydrodynamics. Computer Methods in Applied Mechanics and Engineering 388, 114259 (2022) Cheung et al. [2022] Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. 
Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. 
Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Copeland, D.M., Cheung, S.W., Huynh, K., Choi, Y.: Reduced order models for lagrangian hydrodynamics. Computer Methods in Applied Mechanics and Engineering 388, 114259 (2022) Cheung et al. [2022] Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. 
Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. 
John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. 
Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local lagrangian reduced-order modeling for rayleigh-taylor instability by solution manifold decomposition. preprint arXiv:2201.07335 (2022) Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. 
Journal of Applied Mathematics (2014) Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an adi implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013) Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. 
[2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022)
Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997)
Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997)
Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005)
DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992)
Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014)
Schroeder [1999] Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011)
Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009)
Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
Zhao et al. [2014] Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014)
Stefanescu and Navon [2013] Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an ADI implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013)
Choi et al. [2021] Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021)
Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics 656, 5–28 (2010)
Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022)
He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023)
Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024)
Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders.
Journal of Computational Physics 404, 108973 (2020)
Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021)
Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020)
Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022)
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. 
Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. 
Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. 
In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. 
[2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. 
Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. 
[2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. 
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. 
[2021] Yu, H., Tian, X., E, W., Li, Q.: OnsagerNet: learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: HyperNetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: an imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
Schroeder [1999] Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
Bris and Lelievre [2009] Le Bris, C., Lelièvre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009)
Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. arXiv preprint arXiv:2011.07727 (2020)
Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022)
Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997)
Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997)
Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005)
DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992)
Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. 
[2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). 
Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. 
[2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. 
[2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). 
https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Copeland, D.M., Cheung, S.W., Huynh, K., Choi, Y.: Reduced order models for Lagrangian hydrodynamics. Computer Methods in Applied Mechanics and Engineering 388, 114259 (2022)
- Cheung, S.W., Choi, Y., Copeland, D.M., Huynh, K.: Local Lagrangian reduced-order modeling for Rayleigh–Taylor instability by solution manifold decomposition. arXiv preprint arXiv:2201.07335 (2022)
- Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel–Zwas finite difference discretization. Journal of Applied Mathematics (2014)
- Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an ADI implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013)
- Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space–time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021)
- Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics 656, 5–28 (2010)
- Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022)
- He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023)
- Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024)
- Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020)
- Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021)
- Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. arXiv preprint arXiv:2011.07727 (2020)
- Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022)
- Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997)
- Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997)
- Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005)
- DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992)
- Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
- Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
- Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
- Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
- Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
- Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
- Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
- Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
- Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
- Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
- Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
- Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
- Ha, D., Dai, A.M., Le, Q.V.: HyperNetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
- Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
- Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: Learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
- Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
- Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
- Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
- Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
- Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
- Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
- Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009)
- Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: Fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. 
Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. 
Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. 
Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. 
Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. 
preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. 
Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. 
Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. 
John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. 
Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. Preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011) Le Bris and Lelièvre [2009] Le Bris, C., Lelièvre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders.
Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. Preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005)
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. 
[2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. 
[2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. 
Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. 
[2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). 
https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. 
Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. 
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. 
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. 
[2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Physical Review Fluids 6(11), 114402 (2021)
Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. 
Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. 
Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. 
In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. 
[2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. 
Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. 
[2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. 
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. 
[2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural Computation 8(1), 164–177 (1996)
Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. Preprint arXiv:1412.6980 (2014)
Schroeder [1999] Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009)
Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. Preprint arXiv:2011.07727 (2020)
Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022)
Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997)
Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997)
Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005)
DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992)
Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training.
Energy and AI 15, 100325 (2024)
Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. 
[2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). 
Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. 
[2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. 
[2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). 
https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
- Zhao, P., Liu, C., Feng, X.: POD-DEIM based model order reduction for the spherical shallow water equations with Turkel-Zwas finite difference discretization. Journal of Applied Mathematics (2014)
- Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an ADI implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013)
- Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021)
- Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics 656, 5–28 (2010)
- Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022)
- He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023)
- Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024)
- Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020)
- Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021)
- Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. Preprint arXiv:2011.07727 (2020)
- Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022)
- Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997)
- Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997)
- Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005)
- DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992)
- Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
- Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
- Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
- Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
- Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
- Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
- Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
- Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
- Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
- Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
- Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
- Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
- Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
- Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
- Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: Learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
- Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
- Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. Preprint arXiv:1412.6980 (2014)
- Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
- Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
- Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
- Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009)
- Virtanen et al.
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. 
Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. 
Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. 
In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. 
[2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. 
Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. 
[2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. 
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. 
[2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999)
Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
Le Bris and Lelièvre [2009] Le Bris, C., Lelièvre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009)
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. 
[2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). 
Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. 
[2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. 
[2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). 
https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Stefanescu, R., Navon, I.M.: POD/DEIM nonlinear model order reduction of an ADI implicit shallow water equations model. Journal of Computational Physics 237, 95–114 (2013)
- Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021)
- Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics 656, 5–28 (2010)
- Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022)
- He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023)
- Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024)
- Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020)
- Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021)
- Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. Preprint arXiv:2011.07727 (2020)
- Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022)
- Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997)
- Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997)
- Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005)
- DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992)
- Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
- Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
- Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
- Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
- Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
- Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
- Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
- Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
- Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
- Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
- Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
- Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
- Ha, D., Dai, A.M., Le, Q.V.: HyperNetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
- Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
- Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: Learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
- Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
- Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. Preprint arXiv:1412.6980 (2014)
- Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
- Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
- Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
- Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009)
- Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. 
Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. 
Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. 
[2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. 
Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. 
Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
Le Bris, C., Lelièvre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009)
Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. 
In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. 
[2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. 
Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. 
In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. 
[2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. 
[2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. 
preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. 
Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Choi, Y., Brown, P., Arrighi, B., Anderson, R., Huynh, K.: Space–time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. Journal of Computational Physics 424, 109845 (2021) Schmid [2010] Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. arXiv preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: HyperNetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: Learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020) Champion et al.
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. 
[2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. 
Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. 
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. 
[2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). 
Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
[2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. 
[2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. 
preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. 
Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Schmid, P.J.: Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics 656, 5–28 (2010) Fries et al. [2022] Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. 
John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. 
Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022) He et al. [2023] He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. 
[2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. arXiv preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems.
Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. 
Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. 
Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. 
[2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. 
Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999)
Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009)
Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992)
Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. 
In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. 
[2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. 
Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. 
In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. 
[2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. 
[2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. 
preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. 
Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Fries, W.D., He, X., Choi, Y.: LaSDI: Parametric latent space dynamics identification. Computer Methods in Applied Mechanics and Engineering 399, 115436 (2022)
- He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023)
- Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024)
- Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020)
- Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021)
- Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. arXiv preprint arXiv:2011.07727 (2020)
- Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022)
- Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997)
- Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997)
- Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005)
- DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992)
- Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
- Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
- Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
- Yu, H., Tian, X., E, W., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
- Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
- Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
- Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
- Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
- Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
- Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
- Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
- Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
- Ha, D., Dai, A.M., Le, Q.V.: HyperNetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
- Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
- Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: Learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
- Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
- Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
- Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
- Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
- Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
- Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
- Le Bris, C., Lelièvre, T.: Multiscale modelling of complex fluids: a mathematical initiation. In: Multiscale Modeling and Simulation in Science, pp. 49–137 (2009)
- Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: Fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. 
[2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. 
Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. 
Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. 
[2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. 
Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. 
Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- He, X., Choi, Y., Fries, W.D., Belof, J.L., Chen, J.-S.: gLaSDI: Parametric physics-informed greedy latent space dynamics identification. Journal of Computational Physics, 112267 (2023) Bonneville et al. [2024] Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. 
science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024) Lee and Carlberg [2020] Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. 
[2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. 
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020) Lee and Carlberg [2021] Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. 
Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. Preprint arXiv:1412.6980 (2014)
Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009)
Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021)
Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. Preprint arXiv:2011.07727 (2020)
Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022)
Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997)
Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997)
Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005)
DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992)
Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. 
[2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. 
[2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. 
Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Bonneville, C., Choi, Y., Ghosh, D., Belof, J.L.: GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. Computer Methods in Applied Mechanics and Engineering 418, 116535 (2024)
- Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020)
- Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021)
- Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. Preprint arXiv:2011.07727 (2020)
- Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022)
- Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997)
- Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997)
- Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005)
- DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992)
- Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
- Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
- Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
- Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
- Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
- Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
- Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
- Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
- Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
- Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
- Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
- Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
- Ha, D., Dai, A.M., Le, Q.V.: HyperNetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
- Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
- Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
- Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
- Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. Preprint arXiv:1412.6980 (2014)
- Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
- Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
- Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
- Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009)
- Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
[2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. 
[2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. 
[2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). 
https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. 
[2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. 
[2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. 
Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. 
Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. 
[2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. 
Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
- Lee, K., Carlberg, K.T.: Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics 404, 108973 (2020)
- Lee, K., Carlberg, K.T.: Deep conservation: a latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021)
- Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. Preprint arXiv:2011.07727 (2020)
- Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022)
- Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997)
- Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997)
- Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005)
- DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992)
- Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
- Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
- Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
- Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
- Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
- Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
- Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
- Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
- Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
- Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
- Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
- Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
- Ha, D., Dai, A.M., Le, Q.V.: HyperNetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
- Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
- Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
- Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
- Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: an imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. Preprint arXiv:1412.6980 (2014)
- Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
- Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
- Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
- Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009)
- Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. 
[2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. 
[2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. 
[2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. 
Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. 
Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. 
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. 
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. 
[2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. 
[2022] Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
- Lee, K., Carlberg, K.T.: Deep conservation: A latent-dynamics model for exact satisfaction of physical conservation laws. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 277–285 (2021) Kim et al. [2020] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. arXiv preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al.
[2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). 
https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. preprint arXiv:2011.07727 (2020) Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. 
[2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999)
Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009)
Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
Kim et al. [2022] Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022)
Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997)
Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997)
Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005)
DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992)
Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: HyperNetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
Bradbury et al.
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
Schroeder [1999] Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. 
[2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. 
[2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). 
https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. 
In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. 
[2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. 
Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. 
In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. 
[2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. 
[2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. 
preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. 
Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: Efficient nonlinear manifold reduced order model. Preprint arXiv:2011.07727 (2020)
- Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022)
- Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997)
- Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997)
- Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005)
- DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992)
- Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
- Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
- Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
- Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
- Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
- Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
- Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
- Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning.
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. 
Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. 
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. 
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. Preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
- Kim, Y., Choi, Y., Widemann, D., Zohdi, T.: A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. Journal of Computational Physics, 110841 (2022) Grmela and Öttinger [1997] Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. 
Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. 
Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. i. development of a general formalism. Physical Review E 56(6), 6620 (1997) Öttinger and Grmela [1997] Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. 
Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. 
Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. ii. illustrations of a general formalism. Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. 
[2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. 
[2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. 
Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. 
- Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
- Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
- Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
- Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
- Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
- Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
- Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
- Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
- Ha, D., Dai, A.M., Le, Q.V.: HyperNetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
- Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
- Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
- Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
- Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
- Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
- Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
- Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
- Le Bris, C., Lelièvre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009)
- Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
- Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
- Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
- Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. 
[2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. 
Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Grmela, M., Öttinger, H.C.: Dynamics and thermodynamics of complex fluids. I. Development of a general formalism. Physical Review E 56(6), 6620 (1997)
- Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997)
- Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005)
- DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992)
- Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
- Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
- Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
- Yu, H., Tian, X., E, W., Li, Q.: OnsagerNet: learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
- Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
- Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
- Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
Physical Review E 56(6), 6633 (1997) Öttinger [2005] Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. 
[2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. 
[2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. 
[2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. 
[2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. Preprint arXiv:1412.6980 (2014)
- Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
- Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
- Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
- Le Bris, C., Lelièvre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009)
- Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
- Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
- Yu, H., Tian, X., E, W., Li, Q.: OnsagerNet: learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
- Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
- Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
- Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
- Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
- Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
- Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
- Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
- Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
- Ha, D., Dai, A.M., Le, Q.V.: HyperNetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
- Lee, K., Trask, N., Stinis, P.: Machine learning structure-preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
- Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
- Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
- Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: an imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. 
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. 
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. 
[2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
- Öttinger, H.C., Grmela, M.: Dynamics and thermodynamics of complex fluids. II. Illustrations of a general formalism. Physical Review E 56(6), 6633 (1997)
- Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005)
- DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992)
- Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
- Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
- Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
- Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
- Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
- Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
- Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
- Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
- Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
[2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. 
Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in Neural Information Processing Systems 32 (2019)
Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009)
Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. 
[2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. 
[2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. 
Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Öttinger, H.C.: Beyond Equilibrium Thermodynamics. John Wiley & Sons, Hoboken, NJ (2005) DeMers and Cottrell [1992] DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in neural information processing systems 5 (1992) Hinton and Salakhutdinov [2006] Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. 
Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. science 313(5786), 504–507 (2006) Zhang et al. [2022] Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. 
[2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). 
https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. 
Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. 
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. 
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. 
[2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
- DeMers, D., Cottrell, G.: Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5 (1992)
- Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
- Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
- Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
- Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
- Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
- Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
- Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
- Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
- Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. 
Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. 
[2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. 
[2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. 
[2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. 
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. 
- Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022)
Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. 
Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. 
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. 
Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. 
Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Zhang, Z., Shin, Y., Karniadakis, G.E.: GFINNs: Generic formalism informed neural networks for deterministic and stochastic dynamical systems. Philosophical Transactions of the Royal Society A 380(2229), 20210207 (2022) Öttinger [2015] Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015) Yu et al. [2021] Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. 
Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. 
Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Yu, H., Tian, X., Weinan, E., Li, Q.: Onsagernet: Learning stable and interpretable dynamics using a generalized onsager principle. Physical Review Fluids 6(11), 114402 (2021) Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. 
Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. 
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. 
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. 
[2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. 
[2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. 
[2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. 
Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Öttinger, H.C.: Preservation of thermodynamic structure in model reduction. Physical Review E 91(3), 032147 (2015)
- Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
- Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
- Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
- Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
- Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
- Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
- Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
- Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
- Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
- Ha, D., Dai, A.M., Le, Q.V.: HyperNetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
- Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
- Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
- Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
- Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
- Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
- Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
- Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
- Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009)
[2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. 
Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Yu, H., Tian, X., Weinan, E., Li, Q.: OnsagerNet: learning stable and interpretable dynamics using a generalized Onsager principle. Physical Review Fluids 6(11), 114402 (2021)
Chen et al. [2023] Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023)
Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021)
Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022)
Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022)
Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019)
Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. 
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. 
[2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. 
Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. 
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. 
Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Chen, X., Soh, B.W., Ooi, Z.-E., Vissol-Gaudin, E., Yu, H., Novoselov, K.S., Hippalgaonkar, K., Li, Q.: Constructing custom thermodynamics using deep learning. Nature Computational Science, 1–20 (2023) Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library.
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. 
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. 
Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. 
Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep learning of thermodynamics-aware reduced-order models from data. Computer Methods in Applied Mechanics and Engineering 379, 113763 (2021) Moya et al. [2022] Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation.
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. 
Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. 
[2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. 
Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. 
Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. 
Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. 
[2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. 
Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Moya, B., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Physics perception in sloshing scenes with guaranteed thermodynamic consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(2), 2136–2150 (2022) Masi and Stefanou [2022] Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: an imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale Modeling and Simulation in Science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python.
Nature methods 17(3), 261–272 (2020) Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. 
[2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. 
[2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. 
Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. 
In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. 
Neural computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. 
Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx Hernandez et al. [2021] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. 
[2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
- Masi, F., Stefanou, I.: Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (TANN). Computer Methods in Applied Mechanics and Engineering 398, 115190 (2022) Champion et al. [2019] Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. 
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Champion, K., Lusch, B., Kutz, J.N., Brunton, S.L.: Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116(45), 22445–22451 (2019) Vijayarangan et al. [2024] Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024) Cybenko [1989] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989) Schroeder [1999] Schroeder, D.V.: An Introduction to Thermal Physics.
American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021) Lee et al. [2021] Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021) Lee et al. [2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. 
Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. 
[2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Vijayarangan, V., Uranakara, H.A., Barwey, S., Galassi, R.M., Malik, M.R., Valorani, M., Raman, V., Im, H.G.: A data-driven reduced-order model for stiff chemical kinetics using dynamics-informed training. Energy and AI 15, 100325 (2024)
- Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989)
- Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
- Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
- Ha, D., Dai, A.M., Le, Q.V.: HyperNetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
- Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
- Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
- Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
- Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: an imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
- Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
- Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
- Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
- Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009)
- Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17(3), 261–272 (2020)
[2022] Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Lee, J.Y., Cho, S., Hwang, H.J.: Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022) Conti et al. [2023] Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and sindy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023) Paszke et al. [2019] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. 
Nature methods 17(3), 261–272 (2020) Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2(4), 303–314 (1989) Mhaskar [1996] Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996) Siegel and Xu [2020] Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020) Ha et al. [2017] Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of python+ numpy programs (2018) Kingma and Ba [2014] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. preprint arXiv:1412.6980 (2014) Schroeder [1999] Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. 
CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Mhaskar, H.N.: Neural networks for optimal approximation of smooth and analytic functions. Neural Computation 8(1), 164–177 (1996)
- Siegel, J.W., Xu, J.: Approximation rates for neural networks with general activation functions. Neural Networks 128, 313–321 (2020)
- Ha, D., Dai, A.M., Le, Q.V.: Hypernetworks. In: International Conference on Learning Representations (2017). https://openreview.net/forum?id=rkpACe1lx
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
- Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
- Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
- Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
- Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: an imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. Preprint arXiv:1412.6980 (2014)
- Schroeder, D.V.: An Introduction to Thermal Physics. American Association of Physics Teachers (1999)
- Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020)
- Ng, A., et al.: Sparse autoencoder. CS294A Lecture Notes 72(2011), 1–19 (2011)
- Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. In: Multiscale Modeling and Simulation in Science, pp. 49–137 (2009)
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. Journal of Computational Physics 426, 109950 (2021)
- Lee, K., Trask, N., Stinis, P.: Machine learning structure preserving brackets for forecasting irreversible processes. Advances in Neural Information Processing Systems 34, 5696–5707 (2021)
- Lee, J.Y., Cho, S., Hwang, H.J.: HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. In: The Eleventh International Conference on Learning Representations (2022)
- Conti, P., Gobat, G., Fresca, S., Manzoni, A., Frangi, A.: Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions. Computer Methods in Applied Mechanics and Engineering 411, 116072 (2023)
- Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. Preprint arXiv:1412.6980 (2014)
- Schroeder, D.V.: An introduction to thermal physics. American Association of Physics Teachers (1999) Shang and Öttinger [2020] Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. 
[2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Shang, X., Öttinger, H.C.: Structure-preserving integrators for dissipative systems based on reversible–irreversible splitting. Proceedings of the Royal Society A 476(2234), 20190446 (2020) Ng et al. [2011] Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Ng, A., et al.: Sparse autoencoder. CS294A Lecture notes 72(2011), 1–19 (2011) Bris and Lelievre [2009] Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Bris, C.L., Lelievre, T.: Multiscale modelling of complex fluids: a mathematical initiation. Multiscale modeling and simulation in science, 49–137 (2009) Virtanen et al. [2020] Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020) Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)
- Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nature methods 17(3), 261–272 (2020)