Using Distributional Thesaurus Embedding for Co-hyponymy Detection
Published 24 Feb 2020 in cs.CL (arXiv:2002.11506v1)
Abstract: Discriminating lexical relations among distributionally similar words has always been a challenge for the NLP community. In this paper, we investigate whether the network embedding of a distributional thesaurus can be effectively utilized to detect co-hyponymy relations. Through extensive experiments over three benchmark datasets, we show that the vector representation obtained by applying node2vec to a distributional thesaurus outperforms state-of-the-art models for binary classification of co-hyponymy vs. hypernymy, as well as co-hyponymy vs. meronymy, by substantial margins.
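The core step the abstract describes is running node2vec over a distributional thesaurus graph, where nodes are words and edges connect each word to its distributionally similar neighbors. Below is a minimal sketch of the biased random-walk generation at the heart of node2vec; the toy thesaurus graph and the parameter values are illustrative assumptions, not data or settings from the paper. With `p = q = 1` the walk reduces to a uniform (DeepWalk-style) walk.

```python
import random

# Toy distributional thesaurus: each word links to its most
# distributionally similar words (illustrative data, not from the paper).
graph = {
    "cat":    ["dog", "kitten", "animal"],
    "dog":    ["cat", "puppy", "animal"],
    "kitten": ["cat", "puppy"],
    "puppy":  ["dog", "kitten"],
    "animal": ["cat", "dog"],
}

def node2vec_walk(graph, start, length, p=1.0, q=1.0, rng=random):
    """Generate one second-order biased random walk (node2vec).

    p controls the return probability, q the in-out bias;
    p = q = 1 gives a uniform first-order walk.
    """
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        neighbors = graph.get(cur, [])
        if not neighbors:
            break
        if len(walk) == 1:
            # First step: no previous node, so sample uniformly.
            walk.append(rng.choice(neighbors))
            continue
        prev = walk[-2]
        weights = []
        for nxt in neighbors:
            if nxt == prev:
                weights.append(1.0 / p)   # step back to the previous node
            elif nxt in graph.get(prev, []):
                weights.append(1.0)       # stay near the previous node
            else:
                weights.append(1.0 / q)   # move outward in the graph
        walk.append(rng.choices(neighbors, weights=weights)[0])
    return walk

# Sample several walks per word; in the full pipeline these walk
# "sentences" would train a skip-gram model to produce node embeddings.
walks = [node2vec_walk(graph, w, length=5) for w in graph for _ in range(10)]
```

In the full setup, the walks would be fed to a skip-gram model (e.g. gensim's `Word2Vec`) to obtain word embeddings, and embedding pairs would then go to a binary classifier to decide co-hyponymy vs. hypernymy or meronymy.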