
The performance evaluation of Multi-representation in the Deep Learning models for Relation Extraction Task

Published 17 Dec 2019 in cs.CL | (1912.08290v1)

Abstract: Using a single representation, or concatenating, adding, or replacing representations, has yielded significant improvements on many NLP tasks. This is especially true in Relation Extraction, where static, contextualized, and other representations capture word meaning through the linguistic features they incorporate. This work addresses the question of how relation extraction improves when different types of representations generated by pretrained language representation models are used. We benchmark our approach with popular word representation models, replacing and concatenating static and contextualized representations as well as representations of hand-extracted features. The experiments show that the choice of representation is a crucial element when a deep learning approach is applied. Word embeddings from Flair and BERT can be well interpreted by a deep learning model for the RE task, and replacing static word embeddings with contextualized word representations can lead to significant improvements. In contrast, hand-crafted representations are time-consuming to produce and do not guarantee an improvement when combined with other representations.
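As a rough illustration of the combination strategies the abstract mentions, the sketch below concatenates a static and a contextualized token embedding into a single input vector. This is not the authors' code; the dimensions (300 for a GloVe-style static vector, 768 for a BERT-base-style contextual vector) are illustrative assumptions.

```python
import numpy as np

# Illustrative dimensions (assumed, not taken from the paper):
# 300 is typical for static embeddings such as GloVe,
# 768 for contextualized embeddings from BERT-base.
STATIC_DIM = 300
CONTEXTUAL_DIM = 768

def concat_representations(static_vec, contextual_vec):
    """Concatenate a static and a contextualized embedding for one token,
    producing the combined input representation a downstream RE model
    would consume."""
    return np.concatenate([static_vec, contextual_vec])

rng = np.random.default_rng(0)
token_static = rng.standard_normal(STATIC_DIM)        # stands in for a GloVe-style vector
token_contextual = rng.standard_normal(CONTEXTUAL_DIM)  # stands in for a BERT-style vector

combined = concat_representations(token_static, token_contextual)
print(combined.shape)  # (1068,)
```

Replacing (rather than concatenating) would simply feed `token_contextual` alone to the model, so the input dimension changes between the two strategies and the downstream network must be sized accordingly.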
