CT-SDM: A Sampling Diffusion Model for Sparse-View CT Reconstruction across All Sampling Rates

Published 3 Sep 2024 in cs.CV (arXiv:2409.01571v1)

Abstract: Sparse-view X-ray computed tomography has emerged as a contemporary technique to mitigate radiation dose. Because of the reduced number of projection views, traditional reconstruction methods can produce severe artifacts. Recently, research utilizing deep learning methods has made promising progress in removing artifacts for Sparse-View Computed Tomography (SVCT). However, given the limited generalization capability of deep learning models, current methods usually train models at fixed sampling rates, which reduces the usability and flexibility of model deployment in real clinical settings. To address this issue, our study proposes an adaptive reconstruction method that achieves high-performance SVCT reconstruction at any sampling rate. Specifically, we design a novel imaging degradation operator in the proposed sampling diffusion model for SVCT (CT-SDM) to simulate the projection process in the sinogram domain. The CT-SDM can thus gradually add projection views to highly undersampled measurements to generate full-view sinograms. By choosing an appropriate starting point in diffusion inference, the proposed model can recover full-view sinograms from any sampling rate with only one trained model. Experiments on several datasets have verified the effectiveness and robustness of our approach, demonstrating its superiority in reconstructing high-quality images from sparse-view CT scans across various sampling rates.
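The core idea, as the abstract describes it, is a degradation operator that removes projection views from a sinogram according to the diffusion step, so that inference can start at whichever step matches the sampling rate of the actual measurement. A minimal sketch of this idea is given below; the function names, the linear keep-schedule, and the evenly spaced view selection are illustrative assumptions, not the paper's actual operator or schedule.

```python
import numpy as np

def degrade_sinogram(sinogram: np.ndarray, t: int, T: int) -> np.ndarray:
    """Illustrative degradation operator: at step t of T, keep only a
    subset of projection views (rows) and zero out the rest.
    t = 0 keeps the full-view sinogram; t = T keeps almost none."""
    n_views = sinogram.shape[0]
    # Linear schedule (an assumption): fraction of views retained at step t.
    keep = max(1, round(n_views * (1 - t / T)))
    # Evenly spaced view indices, mimicking a sparse-view acquisition.
    idx = np.linspace(0, n_views - 1, keep).round().astype(int)
    mask = np.zeros(n_views, dtype=bool)
    mask[idx] = True
    out = np.zeros_like(sinogram)
    out[mask] = sinogram[mask]
    return out

def starting_step(sampling_rate: float, T: int) -> int:
    """Map a measurement's sampling rate (fraction of full views) to the
    diffusion step whose retained-view fraction matches it, so a single
    trained model can start inference there for any rate."""
    return round(T * (1 - sampling_rate))
```

Under this sketch, a scan acquired at 25% of the full views would enter the reverse process at `starting_step(0.25, T)`, and the model would then "add back" views step by step until the full-view sinogram is restored.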

