
ODA: Observation-Driven Agent for integrating LLMs and Knowledge Graphs

Published 11 Apr 2024 in cs.CL and cs.AI (arXiv:2404.07677v2)

Abstract: The integration of LLMs and knowledge graphs (KGs) has achieved remarkable success in various natural language processing tasks. However, existing methods that integrate LLMs and KGs typically navigate the task-solving process based solely on the LLM's analysis of the question, overlooking the rich cognitive potential of the vast knowledge encapsulated in KGs. To address this, we introduce the Observation-Driven Agent (ODA), a novel AI agent framework tailored for tasks involving KGs. ODA incorporates KG reasoning abilities via global observation, enhancing reasoning through a cyclical paradigm of observation, action, and reflection. To confront the exponential explosion of knowledge during observation, we design a recursive observation mechanism and integrate the observed knowledge into the action and reflection modules. Through extensive experiments, ODA achieves state-of-the-art performance on several datasets, with notable accuracy improvements of 12.87% and 8.9%.
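The observation–action–reflection cycle described in the abstract can be sketched as a minimal toy loop over a knowledge graph. This is an illustrative assumption, not the paper's actual implementation: the `observe`, `act`, and `reflect` functions, the `MAX_DEPTH` bound (standing in for the recursive observation mechanism that curbs the exponential explosion of knowledge), and the keyword-matching "action" (standing in for an LLM call) are all hypothetical simplifications.

```python
# Toy KG: head entity -> list of (relation, tail entity) pairs.
KG = {
    "Paris": [("capital_of", "France"), ("located_in", "Europe")],
    "France": [("part_of", "Europe"), ("capital", "Paris")],
    "Europe": [("is_a", "Continent")],
}

MAX_DEPTH = 2  # cap recursion depth to bound neighborhood growth


def observe(entity, depth=0, seen=None):
    """Recursively collect triples around an entity, bounded by MAX_DEPTH."""
    seen = seen if seen is not None else set()
    if depth >= MAX_DEPTH or entity in seen:
        return []
    seen.add(entity)
    triples = [(entity, r, t) for r, t in KG.get(entity, [])]
    for _, tail in KG.get(entity, []):
        triples += observe(tail, depth + 1, seen)
    return triples


def act(question, observations):
    """Pick a triple whose relation appears in the question (stand-in for an LLM)."""
    for _, relation, tail in observations:
        if relation.replace("_", " ") in question:
            return tail
    return None


def reflect(answer, observations):
    """Accept the answer only if it is grounded in the observed triples."""
    return answer is not None and any(answer == t for _, _, t in observations)


def oda_loop(question, start_entity):
    obs = observe(start_entity)           # observation (recursive, bounded)
    answer = act(question, obs)           # action, informed by observations
    return answer if reflect(answer, obs) else None  # reflection


print(oda_loop("Which country is Paris the capital of?", "Paris"))  # → France
```

The depth bound and the `seen` set are what keep the recursive observation tractable; without them, neighborhood expansion over a dense KG grows exponentially with hop count.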

