Multi-Modal Federated Learning for Cancer Staging over Non-IID Datasets with Unbalanced Modalities

Published 7 Jan 2024 in cs.LG and cs.AI (arXiv:2401.03609v3)

Abstract: The use of machine learning (ML) for cancer staging through medical image analysis has gained substantial interest across medical disciplines. When combined with the federated learning (FL) framework, ML techniques can further overcome privacy concerns related to patient data exposure. Given the frequent presence of diverse data modalities within patient records, leveraging FL in a multi-modal learning framework holds considerable promise for cancer staging. However, existing works on multi-modal FL often presume that all data-collecting institutions have access to all data modalities. This oversimplified assumption neglects institutions that have access to only a subset of the data modalities in the system. In this work, we introduce a novel FL architecture designed to accommodate not only the heterogeneity of data samples but also the inherent heterogeneity/non-uniformity of data modalities across institutions. We shed light on the challenges associated with the varying convergence speeds observed across different data modalities within our FL system. We then address these challenges by devising a distributed gradient blending and proximity-aware client weighting strategy tailored for multi-modal FL. To demonstrate the superiority of our method, we conduct experiments on The Cancer Genome Atlas (TCGA) data lake, considering different cancer types and three data modalities: mRNA sequences, histopathological images, and clinical information. Our results further unveil the impact and severity of class-based vs. type-based heterogeneity across institutions on model performance, which widens the perspective on the notion of data heterogeneity in the multi-modal FL literature.
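The abstract names two ingredients: per-modality gradient blending and proximity-aware client weighting. The paper's exact formulation is not reproduced on this page, so the following is only a minimal sketch under stated assumptions: proximity weights are taken as a softmax over negative L2 distances between each client's model and the global model (so clients that drifted less count more), and modality weights follow the generalization-over-overfitting ratio in the spirit of gradient blending (Wang et al., CVPR 2020). All function names and the specific weighting forms here are illustrative, not the authors' implementation.

```python
import numpy as np

def proximity_weights(client_params, global_params, temperature=1.0):
    """Hypothetical proximity-aware weighting: softmax over negative L2
    distances to the global model, so less-drifted clients weigh more."""
    dists = np.array([np.linalg.norm(p - global_params) for p in client_params])
    scores = np.exp(-dists / temperature)
    return scores / scores.sum()

def gradient_blending_weights(overfitting, generalization, eps=1e-8):
    """Per-modality weights w_m proportional to G_m / O_m^2 (in the spirit of
    Wang et al., 2020): modalities that generalize well relative to how fast
    they overfit receive larger blending weights."""
    o = np.asarray(overfitting, dtype=float)
    g = np.asarray(generalization, dtype=float)
    w = g / (o ** 2 + eps)
    return w / w.sum()

def aggregate(client_params, global_params):
    """One server round: proximity-weighted average of client models."""
    w = proximity_weights(client_params, global_params)
    return sum(wi * p for wi, p in zip(w, client_params))

# Toy round: three clients perturbing a 4-parameter global model.
rng = np.random.default_rng(0)
g = np.zeros(4)
clients = [g + 0.1 * rng.standard_normal(4) for _ in range(3)]
new_global = aggregate(clients, g)
```

In a multi-modal round, a client holding only a subset of modalities would contribute gradients for just those encoders, with `gradient_blending_weights` rescaling each modality's contribution before the proximity-weighted aggregation.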
