Neural Correction Model for Open-Domain Named Entity Recognition

Published 13 Sep 2019 in cs.CL, cs.IR, and cs.LG | arXiv:1909.06058v2

Abstract: Named Entity Recognition (NER) plays an important role in a wide range of natural language processing tasks, such as relation extraction and question answering. However, previous studies on NER are limited to particular genres and rely on either small manually annotated datasets or large but low-quality ones. Meanwhile, previous datasets for open-domain NER, built using distant supervision, suffer from low precision, low recall, and a low ratio of annotated tokens (RAT). In this work, to address the low precision and recall problems, we first use DBpedia as the source of distant supervision to annotate abstracts from Wikipedia, and we design a neural correction model, trained on the human-annotated NER dataset DocRED, to correct the false entity labels. In this way, we build a large, high-quality dataset called AnchorNER and train various models on it. To address the low-RAT problem of previous datasets, we introduce a multi-task learning method that exploits context information. We evaluate our methods on five NER datasets; the results show that models trained with AnchorNER and our multi-task learning method achieve state-of-the-art performance in the open-domain setting.
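The distant-supervision step described in the abstract can be pictured as gazetteer matching: token spans that appear in a knowledge-base lexicon are tagged with that entry's entity type, and everything else is left unlabeled. Below is a minimal sketch of this idea, assuming a gazetteer mapping surface forms (as DBpedia entries would provide) to entity types; the function `distant_label`, the `max_span` parameter, and the toy gazetteer are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of distant-supervision NER labeling via greedy
# longest-match gazetteer lookup. The gazetteer stands in for a
# DBpedia-derived lexicon; names here are hypothetical.
from typing import Dict, List, Tuple

def distant_label(tokens: List[str],
                  gazetteer: Dict[Tuple[str, ...], str],
                  max_span: int = 5) -> List[str]:
    """Emit BIO tags for any token span found in the gazetteer, 'O' elsewhere."""
    tags = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        match = None
        # Try the longest span first so "New York City" beats "New York".
        for n in range(min(max_span, len(tokens) - i), 0, -1):
            span = tuple(tokens[i:i + n])
            if span in gazetteer:
                match = (n, gazetteer[span])
                break
        if match:
            n, etype = match
            tags[i] = f"B-{etype}"
            for j in range(i + 1, i + n):
                tags[j] = f"I-{etype}"
            i += n
        else:
            i += 1
    return tags

gaz = {("Barack", "Obama"): "PER", ("United", "States"): "LOC"}
print(distant_label("Barack Obama visited the United States .".split(), gaz))
# ['B-PER', 'I-PER', 'O', 'O', 'B-LOC', 'I-LOC', 'O']
```

Labels produced this way are noisy (ambiguous surface forms, missing entries, partial matches), which is precisely the kind of error the DocRED-trained neural correction model is meant to fix before the corrected annotations are assembled into AnchorNER.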
