
COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs

Published 12 Oct 2020 in cs.CL (arXiv:2010.05953v2)

Abstract: Recent years have brought about a renewed interest in commonsense representation and reasoning in the field of natural language understanding. The development of new commonsense knowledge graphs (CSKG) has been central to these advances as their diverse facts can be used and referenced by machine learning models for tackling new and challenging tasks. At the same time, there remain questions about the quality and coverage of these resources due to the massive scale required to comprehensively encompass general commonsense knowledge. In this work, we posit that manually constructed CSKGs will never achieve the coverage necessary to be applicable in all situations encountered by NLP agents. Therefore, we propose a new evaluation framework for testing the utility of KGs based on how effectively implicit knowledge representations can be learned from them. With this new goal, we propose ATOMIC 2020, a new CSKG of general-purpose commonsense knowledge containing knowledge that is not readily available in pretrained LLMs. We evaluate its properties in comparison with other leading CSKGs, performing the first large-scale pairwise study of commonsense knowledge resources. Next, we show that ATOMIC 2020 is better suited for training knowledge models that can generate accurate, representative knowledge for new, unseen entities and events. Finally, through human evaluation, we show that the few-shot performance of GPT-3 (175B parameters), while impressive, remains ~12 absolute points lower than a BART-based knowledge model trained on ATOMIC 2020 despite using over 430x fewer parameters.

Citations (376)

Summary

  • The paper introduces ATOMIC 2020, a new commonsense knowledge graph of 1.33M tuples across 23 relations, built to supplement the commonsense reasoning of language models.
  • The authors propose an evaluation framework and use it for a pairwise comparison of ATOMIC 2020 with other CSKGs, demonstrating its superior coverage and accuracy.
  • Training COMET knowledge models on ATOMIC 2020 significantly improves commonsense inference, highlighting the value of integrating symbolic and neural knowledge.

The paper "COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs" presents an in-depth analysis and evaluation of commonsense knowledge graphs (CSKGs) and their utility in NLP. The authors examine the inherent limitations of manually constructed CSKGs in providing comprehensive commonsense coverage and propose a new commonsense knowledge graph, ATOMIC 2020, to address them. ATOMIC 2020 is designed to include knowledge not readily represented in pretrained language models, offering a rich resource for training neural knowledge models.

Key Contributions

  1. ATOMIC 2020 development: The authors introduce ATOMIC 2020, a new CSKG of 1.33 million tuples spanning 23 commonsense relation types. The graph covers social, physical, and event-centered commonsense knowledge that is not readily available in current pretrained language models.
  2. Evaluation framework: The paper proposes an evaluation framework that assesses the utility of a CSKG by how effectively it complements pretrained language models, i.e., whether models trained on it can generate accurate commonsense knowledge about previously unseen entities and events.
  3. Comparative analysis: A comprehensive pairwise comparison of ATOMIC 2020 with other prominent CSKGs, including ConceptNet and TransOMCS, shows that ATOMIC 2020 provides superior coverage and accuracy, making it an effective training resource for adapting language models.
  4. Neural knowledge model (COMET): The paper extends the COMET framework to evaluate how well knowledge models, created by fine-tuning language models on different knowledge graphs, can hypothesize plausible knowledge for new entities and events. COMET models trained on ATOMIC 2020 generate markedly better commonsense knowledge than few-shot prompting of GPT-3, despite using over 430x fewer parameters.
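As a concrete illustration of point 4, a CSKG tuple can be serialized into a (source, target) pair for training a seq2seq knowledge model. The sketch below assumes the head-relation-`[GEN]` input format described for COMET-ATOMIC 2020; the example tuples are hand-written for illustration (not drawn from the released data), though `xNeed`, `xEffect`, and `ObjectUse` are among ATOMIC 2020's 23 relation types.

```python
# Minimal sketch: turning commonsense KG tuples into COMET-style
# seq2seq training examples. The "[GEN]" separator follows the input
# format described in the COMET-ATOMIC 2020 paper; the preprocessing
# details here are an assumption, not the authors' released code.
from typing import List, Tuple

# (head, relation, tail) tuples as they appear in a CSKG
TUPLES: List[Tuple[str, str, str]] = [
    ("PersonX buys a ticket", "xNeed", "to have money"),
    ("PersonX buys a ticket", "xEffect", "gets a receipt"),
    ("bread knife", "ObjectUse", "slice a loaf of bread"),
]

def to_seq2seq_example(head: str, relation: str, tail: str) -> Tuple[str, str]:
    """Format one KG tuple as a (source, target) pair for a seq2seq
    knowledge model such as a BART-based COMET."""
    source = f"{head} {relation} [GEN]"
    return source, tail

examples = [to_seq2seq_example(*t) for t in TUPLES]
print(examples[0])
# -> ('PersonX buys a ticket xNeed [GEN]', 'to have money')
```

At inference time, the trained model is prompted with an unseen head and relation (e.g. "PersonX adopts a kitten xNeed [GEN]") and asked to generate a plausible tail, which is how the paper tests generalization to new entities and events.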

Implications and Future Directions

The results highlight the continued need for high-quality CSKGs, especially those offering information complementary to language models. The empirical findings suggest that while pretrained LMs hold implicit commonsense knowledge, their ability to express it effectively remains limited without additional structured knowledge resources such as ATOMIC 2020. This underscores the utility of CSKGs in applications requiring robust commonsense reasoning.

Furthermore, ATOMIC 2020 reinforces the proposition that language models augmented with knowledge graphs exhibit richer commonsense reasoning capabilities than language models alone. The paper not only contributes a valuable resource to the field but also opens pathways for applications that leverage these enhanced capabilities in AI systems. Researchers are encouraged to integrate CSKGs like ATOMIC 2020 into broader AI systems and to evaluate their impact on tasks beyond standard NLP benchmarks, potentially including story generation, chatbots, and interactive agents.

The encouraging results obtained with ATOMIC 2020 advocate continued exploration of the interplay between symbolic and neural representations of knowledge. Future research may design resource-efficient models that use CSKGs to reduce parameter counts while maintaining or enhancing the expressiveness of large language models. This trajectory holds promise not only for technological advances but also for our understanding of the role of structured knowledge in cognitive modeling.
