Multilingual Named Entity Recognition Using Pretrained Embeddings, Attention Mechanism and NCRF
Published 21 Jun 2019 in cs.CL | (1906.09978v1)
Abstract: In this paper we tackle the multilingual named entity recognition task. We use the BERT language model as an embedder, with a bidirectional recurrent network, attention, and NCRF on top. We apply multilingual BERT only as an embedder, without any fine-tuning. We test our model on the dataset of the BSNLP shared task, which consists of texts in Bulgarian, Czech, Polish, and Russian.
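The abstract describes a pipeline of frozen multilingual BERT embeddings feeding a BiLSTM, an attention layer, and a CRF tagger. Below is a minimal PyTorch sketch of that layout, assuming HuggingFace transformers and the pytorch-crf package; the class name `BertBiLSTMAttnCRF` and all hyperparameters are hypothetical, and a plain linear-chain CRF stands in for the paper's NCRF layer.

```python
# Hypothetical sketch: frozen multilingual BERT -> BiLSTM -> attention -> CRF.
# The paper uses an NCRF layer; torchcrf.CRF is substituted here for illustration.
import torch
import torch.nn as nn
from transformers import AutoModel
from torchcrf import CRF  # from the pytorch-crf package


class BertBiLSTMAttnCRF(nn.Module):
    def __init__(self, num_tags, bert_name="bert-base-multilingual-cased",
                 hidden=256, attn_heads=4):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        for p in self.bert.parameters():
            p.requires_grad = False  # BERT is used only as an embedder, no fine-tuning
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, attn_heads, batch_first=True)
        self.proj = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        with torch.no_grad():  # frozen embeddings
            emb = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        h, _ = self.lstm(emb)
        # Self-attention over BiLSTM states; padded positions are masked out.
        h, _ = self.attn(h, h, h, key_padding_mask=~attention_mask.bool())
        emissions = self.proj(h)
        mask = attention_mask.bool()
        if tags is not None:
            return -self.crf(emissions, tags, mask=mask)  # training loss (neg. log-likelihood)
        return self.crf.decode(emissions, mask=mask)      # inference: best tag sequences
```

In this sketch, only the BiLSTM, attention, projection, and CRF parameters are trained; the paper's actual layer sizes and NCRF configuration are not specified in the abstract.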