Prior-guided Diffusion Model for Cell Segmentation in Quantitative Phase Imaging
Abstract: Purpose: Quantitative phase imaging (QPI) is a label-free technique that provides high-contrast images of tissues and cells without chemicals or dyes. Accurate semantic segmentation of cells in QPI is essential for many biomedical applications. While diffusion model (DM)-based segmentation has demonstrated promising results, its requirement for multiple sampling steps reduces efficiency. This study aims to enhance DM-based segmentation by injecting prior content information into the starting noise, thereby reducing the inefficiency of repeated sampling. Approach: A prior-guided mechanism is introduced into DM-based segmentation, replacing the randomly sampled starting noise with noise informed by image content. This mechanism uses a separately trained DM together with DDIM inversion to encode content information from the to-be-segmented images into the starting noise. An evaluation method is also proposed to assess the quality of the starting noise with respect to both its content and its distributional properties. Results: Extensive experiments on multiple QPI cell-segmentation datasets showed that the proposed method achieved superior DM-based segmentation performance with only a single sampling step. Ablation studies and visual analyses further highlighted the importance of content priors in DM-based segmentation. Conclusion: The proposed method effectively leverages prior content information to improve DM-based segmentation, yielding accurate results while reducing the need for multiple sampling steps. The findings underscore the value of integrating content priors into DM-based segmentation methods.
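The Approach above relies on DDIM inversion: running the deterministic DDIM update in reverse so that a to-be-segmented image is mapped to a latent that can serve as a content-informed starting noise. The sketch below illustrates that inversion step only, under stated assumptions; the noise predictor `eps_model` and the schedule `alpha_bars` are hypothetical stand-ins for the paper's trained DM and its noise schedule, not the authors' implementation.

```python
import numpy as np

def ddim_invert(x0, eps_model, alpha_bars):
    """Map a clean image x0 toward the noise space by applying the
    deterministic DDIM update in reverse order (low noise -> high noise).

    alpha_bars: cumulative products of the noise schedule, ordered from
    the least-noisy timestep to the most-noisy one.
    """
    x = x0
    for t in range(len(alpha_bars) - 1):
        a_t, a_next = alpha_bars[t], alpha_bars[t + 1]
        eps = eps_model(x, t)  # predicted noise at the current step
        # Estimate the clean image implied by the current latent,
        # then re-noise it to the next (higher) noise level.
        x0_hat = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
        x = np.sqrt(a_next) * x0_hat + np.sqrt(1.0 - a_next) * eps
    return x

# Toy run with a zero-noise predictor standing in for a trained DM.
alpha_bars = np.linspace(0.999, 0.01, 50)
x0 = np.random.default_rng(0).standard_normal((8, 8))
z = ddim_invert(x0, lambda x, t: np.zeros_like(x), alpha_bars)
print(z.shape)
```

Because the same update is deterministic in both directions, the latent `z` can later be denoised back toward a segmentation-relevant output, which is what lets the method replace random starting noise with content-informed noise and sample only once.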