Graphical Reasoning: LLM-based Semi-Open Relation Extraction

Published 30 Apr 2024 in cs.CL, cs.AI, and cs.LG | arXiv:2405.00216v1

Abstract: This paper presents a comprehensive exploration of relation extraction utilizing advanced LLMs, specifically Chain of Thought (CoT) and Graphical Reasoning (GRE) techniques. We demonstrate how leveraging in-context learning with GPT-3.5 can significantly enhance the extraction process, particularly through detailed example-based reasoning. Additionally, we introduce a novel graphical reasoning approach that dissects relation extraction into sequential sub-tasks, improving precision and adaptability in processing complex relational data. Our experiments, conducted on multiple datasets, including manually annotated data, show considerable improvements in performance metrics, underscoring the effectiveness of our methodologies.
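The abstract describes a graphical-reasoning approach that dissects relation extraction into sequential sub-tasks, each handled by its own prompt. A minimal sketch of that staged decomposition is below; the prompt wording, function names, and the `complete` callback are illustrative assumptions, not the paper's actual prompts or code (in the paper's setting, `complete` would wrap a GPT-3.5 call with in-context examples).

```python
# Hypothetical sketch of relation extraction decomposed into sequential
# sub-tasks, as the abstract describes. Prompts and names are assumptions.
from itertools import combinations


def build_entity_prompt(sentence: str) -> str:
    """Sub-task 1: ask the model to list candidate entities."""
    return f'Identify the named entities in: "{sentence}"\nEntities:'


def build_relation_prompt(sentence: str, head: str, tail: str) -> str:
    """Sub-task 2: ask the model to label the relation for one entity pair."""
    return (
        f'Sentence: "{sentence}"\n'
        f'What is the relation between "{head}" and "{tail}"?\n'
        "Answer 'none' if there is no relation.\nRelation:"
    )


def extract_relations(sentence, complete):
    """Run the staged pipeline.

    `complete` is any prompt -> text function; chaining two calls per
    sentence (entities first, then one relation query per pair) is the
    sequential decomposition the abstract refers to.
    """
    entities = [e.strip() for e in complete(build_entity_prompt(sentence)).split(",")]
    triples = []
    for head, tail in combinations(entities, 2):
        relation = complete(build_relation_prompt(sentence, head, tail)).strip()
        if relation.lower() != "none":
            triples.append((head, relation, tail))
    return triples


# Toy stand-in for an LLM, just to show the control flow end to end.
def toy_complete(prompt: str) -> str:
    if prompt.startswith("Identify"):
        return "Marie Curie, Warsaw"
    return "born_in"


print(extract_relations("Marie Curie was born in Warsaw.", toy_complete))
```

Running the toy example yields `[('Marie Curie', 'born_in', 'Warsaw')]`; swapping `toy_complete` for a real model call is the only change needed to exercise the same pipeline on live data.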

