
RJUA-QA: A Comprehensive QA Dataset for Urology

Published 15 Dec 2023 in cs.CL (arXiv:2312.09785v3)

Abstract: We introduce RJUA-QA, a novel medical dataset for question answering (QA) and reasoning with clinical evidence, contributing to bridging the gap between general LLMs and medical-specific LLM applications. RJUA-QA is derived from realistic clinical scenarios and aims to facilitate LLMs in generating reliable diagnoses and advice. The dataset contains 2,132 curated Question-Context-Answer pairs, corresponding to about 25,000 diagnostic records and clinical cases. The dataset covers 67 common urological disease categories, and this disease coverage exceeds 97.6\% of the population seeking medical services in urology. Each data instance in RJUA-QA comprises: (1) a question mirroring a real patient's inquiry about clinical symptoms and medical conditions, (2) a context including comprehensive expert knowledge, serving as a reference for medical examination and diagnosis, (3) a doctor response offering the diagnostic conclusion and suggested examination guidance, (4) a diagnosed clinical disease as the recommended diagnostic outcome, and (5) clinical advice providing recommendations for medical examination. RJUA-QA is the first medical QA dataset for clinical reasoning over patient inquiries, where expert-level knowledge and experience are required to yield diagnostic conclusions and medical examination advice. A comprehensive evaluation is conducted to assess the performance of both medical-specific and general LLMs on the RJUA-QA dataset. Our data are publicly available at \url{https://github.com/alipay/RJU_Ant_QA}.
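The five-part instance structure described in the abstract can be sketched as a single JSONL record. This is a minimal illustration only: the field names and sample contents below are assumptions for demonstration, not the actual schema of the released dataset.

```python
import json

# Hypothetical RJUA-QA instance with the five components described in the
# abstract: question, context, doctor response, diagnosed disease, and advice.
# Field names and contents are illustrative assumptions.
record = {
    "question": "I have frequent urination and lower back pain; what could it be?",
    "context": "Expert reference knowledge on urinary tract infections and kidney stones ...",
    "answer": "Based on the symptoms, a urinary tract infection is likely; "
              "a urinalysis and renal ultrasound are recommended.",
    "disease": "Urinary tract infection",
    "advice": "Urinalysis; urinary system ultrasound",
}

# Serialize as one JSONL line (ensure_ascii=False preserves non-ASCII text,
# relevant for Chinese-language clinical records) and round-trip it back.
line = json.dumps(record, ensure_ascii=False)
parsed = json.loads(line)
print(sorted(parsed.keys()))
```

A loader for the released data would read one such JSON object per line and validate that all five fields are present before evaluation.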
