
Grounding 'Grounding' in NLP

Published 4 Jun 2021 in cs.CL (arXiv:2106.02192v1)

Abstract: The NLP community has seen substantial recent interest in grounding to facilitate interaction between language technologies and the world. However, as a community, we use the term broadly to reference any linking of text to data or non-textual modality. In contrast, Cognitive Science more formally defines "grounding" as the process of establishing what mutual information is required for successful communication between two interlocutors -- a definition which might implicitly capture the NLP usage but differs in intent and scope. We investigate the gap between these definitions and seek answers to the following questions: (1) What aspects of grounding are missing from NLP tasks? Here we present the dimensions of coordination, purviews and constraints. (2) How is the term "grounding" used in the current research? We study the trends in datasets, domains, and tasks introduced in recent NLP conferences. And finally, (3) How to advance our current definition to bridge the gap with Cognitive Science? We present ways to both create new tasks or repurpose existing ones to make advancements towards achieving a more complete sense of grounding.

Citations (49)

Summary

Grounding 'Grounding' in NLP: A Critical Analysis

The paper “Grounding 'Grounding' in NLP” by Khyathi Raghavi Chandu et al. provides an in-depth analysis of the NLP community’s understanding and application of the concept of grounding, contrasting it with more stringent definitions from cognitive science. The authors critically evaluate the inconsistency in the use of grounding across NLP tasks and propose a framework to align it more closely with cognitive science principles, aiming to enhance the efficacy of human-computer interaction.

The analysis reveals three major dimensions—Coordination, Purviews, and Constraints—that are critical in bridging the gap between current NLP grounding practices and a comprehensive definition inspired by cognitive science. Coordination refers to the interaction dynamics between a human and an AI agent, distinguishing between static grounding, where assumptions or known truths are predefined, and dynamic grounding, which necessitates the co-construction of mutual understanding through dialogues and iterations. The paper underscores dynamic grounding as a more holistic approach that better simulates human-like communication.

Purviews illustrate the stages of establishing a common ground: localization of concepts, integration with external knowledge, incorporation of contextual common sense, and adaptation to personalized consensus. Each stage addresses a different aspect of grounding that NLP tasks currently treat in isolation but that are interdependent in real-world applications. The paper advocates for a longitudinal benchmarking approach, encouraging models to engage with these purviews seamlessly, thus offering a more realistic view of communication.

Constraints, imposed by the medium and modes of interaction, point to several underexplored aspects in the current grounding efforts—especially simultaneity, sequentiality, and revisability. The paper stresses the importance of focusing on these constraints to better approximate natural, real-world dialogue settings and promote a more fluid exchange of information between humans and AI systems.

The authors provide a comprehensive survey of existing datasets and techniques, detailing the prevalence of grounding in various modalities and tasks. Despite trends indicating substantial efforts in expanding datasets and annotating tasks, much of the work remains confined to static modalities, with significant potential for further exploration in dynamic settings. Furthermore, the paper highlights the gap in multilingual grounding tasks, presenting an opportunity to diversify the scope and applicability of NLP grounding across languages.

Going forward, the authors propose several strategies to propel research into more integrated grounding scenarios. These include fostering dynamic grounding through conversational language learning, ambiguity resolution, and clarification questioning. They argue that these approaches can cultivate more robust, interactive systems capable of refining their understanding and improving user trust. Additionally, expanding existing datasets to accommodate multiple languages can foster broader inclusivity and enrich NLP grounding tasks.

In conclusion, the paper calls on the NLP community to evolve grounding practices by addressing missing dimensions, thus offering a path toward a more coherent and effective grounding methodology. It suggests critical shifts in focus—from static to dynamic grounding, from lateral to longitudinal benchmarking across purviews, and from currently dominant media constraints to less explored ones—to ensure that future NLP systems can genuinely enrich interactive language technologies.

