HARDCORE: H-field and power loss estimation for arbitrary waveforms with residual, dilated convolutional neural networks in ferrite cores
Abstract: The MagNet Challenge 2023 calls upon competitors to develop data-driven models for the material-specific, waveform-agnostic estimation of steady-state power losses in toroidal ferrite cores. The following HARDCORE (H-field and power loss estimation for Arbitrary waveforms with Residual, Dilated convolutional neural networks in ferrite COREs) approach shows that a residual convolutional neural network with physics-informed extensions can serve this task efficiently when trained on observational data beforehand. One key solution element is an intermediate model layer which first reconstructs the B-H curve and then estimates the power loss from the curve's enclosed area, rendering the proposed topology physically interpretable. In addition, emphasis was placed on expert-based feature engineering and information-rich inputs in order to enable a lean model architecture. A model is trained from scratch for each material, while the topology remains the same. A Pareto-style trade-off between model size and estimation accuracy is demonstrated, yielding an optimum at as few as 1755 parameters and below 8 % for the 95th percentile of the relative error on the worst-case material with sufficient samples.
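The loss-from-area idea described above can be illustrated with a minimal sketch: for a sampled, closed B-H loop, the volumetric loss density equals the excitation frequency times the enclosed loop area (p = f ∮ H dB), and the area of the sampled polygon can be computed with the shoelace (surveyor's) formula. The function below is an assumed illustration of that principle, not the paper's actual model layer.

```python
import numpy as np

def bh_loop_loss_density(h, b, frequency):
    """Volumetric power-loss density from one sampled, closed B-H loop.

    Steady-state core loss per unit volume is the loop area times the
    excitation frequency: p = f * oint H dB. The enclosed area is
    computed with the shoelace (surveyor's) formula over the (H, B)
    polygon; abs() makes it independent of traversal direction.
    Units: H in A/m, B in T, frequency in Hz -> result in W/m^3.
    """
    h = np.asarray(h, dtype=float)
    b = np.asarray(b, dtype=float)
    area = 0.5 * abs(np.dot(h, np.roll(b, -1)) - np.dot(b, np.roll(h, -1)))
    return frequency * area

# Example: idealized rectangular loop of width 2*Hc and height 2*Bs,
# so the enclosed area is 4 * Hc * Bs = 4 * 50 * 0.3 = 60 J/m^3 per cycle.
hc, bs, f = 50.0, 0.3, 100e3
h = [-hc, hc, hc, -hc]
b = [-bs, -bs, bs, bs]
p = bh_loop_loss_density(h, b, f)  # 60 J/m^3 * 100 kHz = 6e6 W/m^3
```

In the paper's topology this computation sits downstream of the network that reconstructs the H waveform, which is what makes the model's output physically interpretable.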