
BERT-Assisted Semantic Annotation Correction for Emotion-Related Questions

Published 2 Apr 2022 in cs.CL (arXiv:2204.00916v1)

Abstract: Annotated data have traditionally been used to provide the input for training a supervised ML model. However, current pre-trained ML models for NLP contain embedded linguistic information that can be used to inform the annotation process. We use the BERT neural language model to feed information back into an annotation task that involves semantic labelling of dialog behavior in a question-asking game called Emotion Twenty Questions (EMO20Q). First, we describe the background of BERT, the EMO20Q data, and assisted annotation tasks. Then we describe the methods for fine-tuning BERT to check the annotated labels. To do this, we use the paraphrase task to verify that all utterances with the same annotation label are classified as paraphrases of each other. We show this method to be an effective way to assess and revise annotations of textual user data with complex, utterance-level semantic labels.
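The core check the abstract describes can be sketched as follows: group utterances by their annotation label, run every within-label pair through a paraphrase classifier, and flag pairs the classifier rejects as candidate annotation errors. This is a minimal illustration, not the authors' code; the `toy_paraphrase` function below is a hypothetical stand-in for a BERT model fine-tuned on paraphrase classification.

```python
from itertools import combinations

def check_annotations(utterances_by_label, is_paraphrase):
    """For each label, test all pairs of utterances sharing that label.

    Returns a dict mapping labels to pairs the classifier did NOT judge
    to be paraphrases -- candidates for annotation revision.
    """
    suspect = {}
    for label, utterances in utterances_by_label.items():
        for a, b in combinations(utterances, 2):
            if not is_paraphrase(a, b):
                suspect.setdefault(label, []).append((a, b))
    return suspect

def toy_paraphrase(a, b):
    """Hypothetical stand-in for a fine-tuned BERT paraphrase classifier:
    treats two utterances as paraphrases if they share a content word."""
    stop = {"is", "it", "a", "the", "you"}
    words_a = set(a.lower().split()) - stop
    words_b = set(b.lower().split()) - stop
    return bool(words_a & words_b)

# Illustrative EMO20Q-style annotations (labels and utterances are made up).
annotations = {
    "yes-answer": ["yes", "yes it is", "yes definitely"],
    "ask-emotion": ["is it a positive emotion?", "what color is the sky?"],
}
flags = check_annotations(annotations, toy_paraphrase)
```

With a real fine-tuned model, `is_paraphrase` would score the pair with BERT and threshold the probability; the surrounding pairwise-check logic stays the same.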


Authors (1)
