arxiv:1910.07181

BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Model Performance

Published on Oct 16, 2019
Authors: Timo Schick, Hinrich Schütze

Abstract

AI-generated summary: BERTRAM, an architecture based on BERT, improves the representation of rare words by enabling interaction between word forms and contexts, leading to performance gains in NLP tasks.

Pretraining deep language models has led to large performance gains in NLP. Despite this success, Schick and Schütze (2020) recently showed that these models struggle to understand rare words. For static word embeddings, this problem has been addressed by separately learning representations for rare words. In this work, we transfer this idea to pretrained language models: We introduce BERTRAM, a powerful architecture based on BERT that is capable of inferring high-quality embeddings for rare words that are suitable as input representations for deep language models. This is achieved by enabling the surface form and contexts of a word to interact with each other in a deep architecture. Integrating BERTRAM into BERT leads to large performance increases due to improved representations of rare and medium frequency words on both a rare word probing task and three downstream tasks.
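The integration step described in the abstract can be illustrated with a short sketch. The following Python example is hypothetical and not the authors' released code: it shows how an externally inferred vector for a rare word might be injected into a BERT-style model's input embedding matrix, assuming the Hugging Face transformers library. The variable rare_word_vector is only a placeholder for a BERTRAM-produced embedding.

    # Minimal sketch (not the authors' implementation): inject an externally
    # inferred embedding for a rare word as an input representation of BERT.
    import torch
    from transformers import BertTokenizer, BertModel

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased")

    rare_word = "kumquat"  # hypothetical rare word
    # Placeholder for a vector that BERTRAM would infer from the word's
    # surface form and collected contexts.
    rare_word_vector = torch.randn(model.config.hidden_size)

    # Register the rare word as a single token so it is not split into
    # subwords, then overwrite its row in the input embedding matrix.
    tokenizer.add_tokens([rare_word])
    model.resize_token_embeddings(len(tokenizer))
    token_id = tokenizer.convert_tokens_to_ids(rare_word)
    with torch.no_grad():
        model.get_input_embeddings().weight[token_id] = rare_word_vector

    # The rare word now maps to the injected vector at the input layer.
    inputs = tokenizer(f"I ate a {rare_word} yesterday.", return_tensors="pt")
    outputs = model(**inputs)

In the paper itself, the injected vector is produced by BERTRAM from the word's form and contexts; the random vector above only stands in for that step.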
