
Self-Enhancing Multi-filter Sequence-to-Sequence Model

Published 25 Sep 2021 in cs.CL (arXiv:2109.12399v3)

Abstract: Representation learning is central to solving sequence-to-sequence problems in natural language processing: it transforms raw data into vector representations while preserving their features. However, data with significantly different features yield heterogeneous representations, which can make convergence harder. We design a multi-filter encoder-decoder model to resolve this heterogeneity in sequence-to-sequence tasks. The multi-filter model partitions the latent space into subspaces with a clustering algorithm and trains a set of decoders (filters), each of which concentrates only on features from its corresponding subspace. As the main contribution, we design a self-enhancing mechanism that uses a reinforcement learning algorithm to optimize the clustering algorithm without additional training data. Experiments on semantic parsing and machine translation show that the proposed model outperforms most baselines by at least 5%. We also show empirically that the self-enhancing mechanism improves performance by over 10%, and we provide evidence of a positive correlation between the model's performance and the quality of the latent space clustering.
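The core mechanism the abstract describes — cluster the latent space, then route each latent vector to the decoder responsible for its subspace — can be sketched as follows. This is a minimal, self-contained illustration, not the paper's implementation: the function names, the toy 2-D latent vectors, and the deterministic k-means initialization are all assumptions made for the example; in the actual model the latents come from a learned encoder and each cluster has its own trained decoder.

```python
def dist2(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20):
    """Toy k-means: partition latent vectors into k subspaces.
    Deterministic init from the first k points (illustrative only)."""
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: dist2(p, centroids[c]))
            clusters[j].append(p)
        for j, cluster in enumerate(clusters):
            if cluster:  # keep the old centroid if a cluster empties out
                centroids[j] = [sum(dim) / len(cluster) for dim in zip(*cluster)]
    return centroids

def route(latent, centroids):
    """Select the index of the decoder ("filter") whose subspace
    contains this latent vector, i.e. the nearest centroid."""
    return min(range(len(centroids)), key=lambda c: dist2(latent, centroids[c]))

# Hypothetical 2-D latent vectors forming two well-separated groups.
latents = [[0.0, 0.0], [10.0, 10.0], [0.2, 0.1],
           [9.8, 10.1], [0.1, 0.2], [10.2, 9.9]]
centroids = kmeans(latents, k=2)
decoder_id = route([0.1, 0.1], centroids)  # index of the decoder to invoke
```

The self-enhancing mechanism would then adjust the clustering itself (here, the centroids) using a reinforcement learning signal derived from downstream task performance, rather than from extra labeled data.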

Citations (1)
