Deep Learning Meets Adaptive Filtering: A Stein's Unbiased Risk Estimator Approach

Published 31 Jul 2023 in eess.SP and cs.LG (arXiv:2307.16708v4)

Abstract: This paper revisits two prominent adaptive filtering algorithms, recursive least squares (RLS) and equivariant adaptive source separation (EASI), through the lens of algorithm unrolling. Building on the unrolling methodology, we introduce novel task-based deep learning frameworks, denoted Deep RLS and Deep EASI. These architectures transform the iterations of the original algorithms into the layers of a deep neural network, enabling efficient source signal estimation through a training process. To further enhance performance, we propose training these deep unrolled networks with a surrogate loss function grounded in Stein's unbiased risk estimator (SURE). Our empirical evaluations demonstrate that the Deep RLS and Deep EASI networks outperform their underlying model-based algorithms. Moreover, numerical experiments highlight the efficacy of SURE-based training relative to the conventional mean squared error loss. These results establish a benchmark for the future use of SURE, either as a training objective or as an evaluation metric for the generalization performance of neural networks.
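To make the unrolling idea concrete, the sketch below implements one iteration of classical RLS as a function: in an unrolled network such as Deep RLS, each such iteration would become one layer, with quantities like the forgetting factor made learnable and trained end-to-end. This is a minimal NumPy illustration of the underlying recursion, not the paper's architecture; the dimensions, forgetting factor, and initialization are illustrative assumptions.

```python
import numpy as np

def rls_layer(w, P, x, d, lam):
    """One classical RLS update; Deep RLS would unroll such
    updates into network layers with learnable parameters."""
    Px = P @ x
    k = Px / (lam + x @ Px)           # gain vector
    e = d - w @ x                     # a-priori estimation error
    w = w + k * e                     # weight update
    P = (P - np.outer(k, Px)) / lam   # inverse-correlation matrix update
    return w, P

# Toy system identification: recover w_true from noisy linear measurements
rng = np.random.default_rng(1)
m = 4
w_true = rng.standard_normal(m)
w, P = np.zeros(m), 1e3 * np.eye(m)   # standard large-P initialization
for _ in range(300):
    x = rng.standard_normal(m)
    d = w_true @ x + 0.01 * rng.standard_normal()
    w, P = rls_layer(w, P, x, d, lam=0.99)
print(np.linalg.norm(w - w_true))     # small residual error
```

Unrolling truncates this recursion to a fixed number of layers, which is what allows the per-layer parameters to be trained by backpropagation instead of being fixed a priori.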
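The appeal of a SURE-based surrogate loss is that, for Gaussian noise, it estimates the mean squared error without access to the clean signal. The sketch below verifies this unbiasedness numerically for a simple linear shrinkage "denoiser" whose divergence is known in closed form; this is an assumed toy setup for illustration, not the paper's network, where the divergence would instead be estimated (e.g. by Monte Carlo).

```python
import numpy as np

def sure_gaussian(y, x_hat, sigma2, divergence):
    """Stein's unbiased risk estimate of ||x_hat - x||^2 for
    y = x + noise, noise ~ N(0, sigma2 * I)."""
    n = y.size
    return np.sum((y - x_hat) ** 2) - n * sigma2 + 2 * sigma2 * divergence

rng = np.random.default_rng(0)
n, sigma2, a = 1000, 0.25, 0.8
x = rng.standard_normal(n)            # clean signal (unknown in practice)
sure_vals, mse_vals = [], []
for _ in range(200):
    y = x + np.sqrt(sigma2) * rng.standard_normal(n)
    x_hat = a * y                     # linear shrinkage denoiser f(y) = a*y
    div = n * a                       # analytic divergence of f
    sure_vals.append(sure_gaussian(y, x_hat, sigma2, div))
    mse_vals.append(np.sum((x_hat - x) ** 2))
print(np.mean(sure_vals), np.mean(mse_vals))  # the two averages agree closely
```

Because SURE depends only on the noisy observation, the noise level, and the denoiser's divergence, it can serve as a training loss when ground-truth signals are unavailable, which is the role it plays in training the unrolled networks here.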
