
Enhancing Clinical Efficiency through LLM: Discharge Note Generation for Cardiac Patients

Published 8 Apr 2024 in cs.CL, cs.CV, and cs.LG | arXiv:2404.05144v1

Abstract: Medical documentation, including discharge notes, is crucial for ensuring patient care quality, continuity of care, and effective medical communication. However, manually creating these documents is not only time-consuming but also prone to inconsistencies and errors. Automating this documentation process with AI represents a promising area of innovation in healthcare. This study directly addresses the inefficiencies and inaccuracies of manually created discharge notes, particularly for cardiac patients, by employing AI techniques, specifically large language models (LLMs). Utilizing a substantial dataset from a cardiology center, encompassing wide-ranging medical records and physician assessments, our research evaluates the capability of LLMs to enhance the documentation process. Among the various models assessed, Mistral-7B distinguished itself by accurately generating discharge notes that significantly improve both documentation efficiency and continuity of care for patients. These notes underwent rigorous qualitative evaluation by medical experts, receiving high marks for clinical relevance, completeness, readability, and contribution to informed decision-making and care planning. Coupled with quantitative analyses, these results confirm Mistral-7B's efficacy in distilling complex medical information into concise, coherent summaries. Overall, our findings illuminate the considerable promise of specialized LLMs, such as Mistral-7B, in refining healthcare documentation workflows and advancing patient care. This study lays the groundwork for further integrating advanced AI technologies into healthcare, demonstrating their potential to revolutionize patient documentation and support better care outcomes.
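The core task described in the abstract is turning structured medical-record content into a prompt for an LLM to summarize into a discharge note. The paper does not publish its prompt format, so the template and field names below are hypothetical, a minimal sketch of how such a record-to-prompt step might look:

```python
# Illustrative sketch: assembling a discharge-note generation prompt from
# structured record fields. The field names and template wording are
# hypothetical assumptions, not taken from the paper.

def build_discharge_prompt(record: dict) -> str:
    """Render a patient record into an instruction prompt for the LLM."""
    sections = [
        ("Admission diagnosis", record.get("diagnosis", "N/A")),
        ("Hospital course", record.get("course", "N/A")),
        ("Medications at discharge",
         ", ".join(record.get("medications", [])) or "N/A"),
        ("Physician assessment", record.get("assessment", "N/A")),
    ]
    body = "\n".join(f"{title}: {text}" for title, text in sections)
    return (
        "You are a clinical documentation assistant.\n"
        "Summarize the following cardiac patient record into a concise "
        "discharge note:\n\n" + body
    )

# Example record (synthetic, for illustration only)
record = {
    "diagnosis": "NSTEMI",
    "course": "PCI with drug-eluting stent to LAD; uneventful recovery.",
    "medications": ["aspirin", "ticagrelor", "atorvastatin"],
    "assessment": "Stable for discharge; cardiology follow-up in 2 weeks.",
}
prompt = build_discharge_prompt(record)
print(prompt)
```

The resulting string would be passed to the fine-tuned model (e.g. Mistral-7B) for generation; keeping the record-to-prompt mapping deterministic makes the pipeline auditable, which matters in a clinical setting.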

References (13)
  1. Meditron-70B: Scaling medical pretraining for large language models. arXiv preprint arXiv:2311.16079, 2023.
  2. QLoRA: Efficient finetuning of quantized LLMs. Advances in Neural Information Processing Systems, 36, 2024.
  3. Daniel Han. Unsloth. https://github.com/unslothai/unsloth, 2023. Accessed: 2024-03-05.
  4. Mistral 7B. arXiv preprint arXiv:2310.06825, 2023.
  5. SOLAR 10.7B: Scaling large language models with simple yet effective depth up-scaling. arXiv preprint arXiv:2312.15166, 2023.
  6. BioMistral: A collection of open-source pretrained large language models for medical domains. arXiv preprint arXiv:2402.10373, 2024.
  7. PEFT: State-of-the-art parameter-efficient fine-tuning methods. https://github.com/huggingface/peft, 2022.
  8. Lessons learned from development of de-identification system for biomedical research in a Korean tertiary hospital. Healthcare Informatics Research, 19(2):102–109, 2013.
  9. Towards clinical encounter summarization: Learning to compose discharge summaries from prior notes. arXiv preprint arXiv:2104.13498, 2021.
  10. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
  11. Large language models in health care: Development, applications, and challenges. Health Care Science, 2(4):255–263, 2023.
  12. Radiology report generation with a learned knowledge base and multi-modal alignment. Medical Image Analysis, 86:102798, 2023.
  13. TinyLlama: An open-source small language model. arXiv preprint arXiv:2401.02385, 2024.
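The abstract reports expert evaluation of generated notes along four qualitative dimensions (clinical relevance, completeness, readability, and support for decision-making and care planning). A minimal sketch of aggregating such ratings across raters is shown below; the 1–5 scale, the dimension keys, and the mean aggregation are illustrative assumptions, not the paper's actual protocol:

```python
# Illustrative sketch: averaging expert ratings per qualitative dimension.
# The rating scale and aggregation method are assumptions for illustration.
from statistics import mean

DIMENSIONS = ("clinical_relevance", "completeness",
              "readability", "decision_support")

def aggregate_ratings(ratings: list) -> dict:
    """Average each evaluation dimension over all expert raters."""
    return {dim: mean(r[dim] for r in ratings) for dim in DIMENSIONS}

# Two synthetic expert raters scoring one generated note on a 1-5 scale
ratings = [
    {"clinical_relevance": 5, "completeness": 4,
     "readability": 5, "decision_support": 4},
    {"clinical_relevance": 4, "completeness": 5,
     "readability": 4, "decision_support": 5},
]
scores = aggregate_ratings(ratings)
print(scores)  # each dimension averaged over the two raters
```

Averaging per dimension rather than collapsing to a single score preserves where a model falls short (e.g. complete but hard to read), which is the information a qualitative review like the one described here is meant to surface.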