1. EVI: Multilingual Spoken Dialogue Tasks and Dataset for Knowledge-Based Enrolment, Verification, and Identification ...
2. Parameter-Efficient Neural Reranking for Cross-Lingual and Multilingual Retrieval ...
3. IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages ...
6. RedditBias: A Real-World Resource for Bias Evaluation and Debiasing of Conversational Language Models ...
7. Parameter space factorization for zero-shot learning across tasks and languages ...
9. MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models ...
10. UNKs Everywhere: Adapting Multilingual Language Models to New Scripts ...
11. How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models ...
12. Learning Domain-Specialised Representations for Cross-Lingual Biomedical Entity Linking ...
13. MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models ...
14. Multilingual and Cross-Lingual Intent Detection from Spoken Data ...
15. Semantic Data Set Construction from Human Clustering and Spatial Arrangement ...
16. Parameter space factorization for zero-shot learning across tasks and languages. In: Transactions of the Association for Computational Linguistics, 9 (2021)
17. AM2iCo: Evaluating Word Meaning in Context across Low-Resource Languages with Adversarial Examples ...
18. Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders ...
Abstract: Pretrained Masked Language Models (MLMs) have revolutionised NLP in recent years. However, previous work has indicated that off-the-shelf MLMs are not effective as universal lexical or sentence encoders without further task-specific fine-tuning on NLI, sentence similarity, or paraphrasing tasks using annotated task data. In this work, we demonstrate that it is possible to turn MLMs into effective universal lexical and sentence encoders even without any additional data and without any supervision. We propose an extremely simple, fast, and effective contrastive learning technique, termed Mirror-BERT, which converts MLMs (e.g., BERT and RoBERTa) into such encoders in 20-30 seconds without any additional external knowledge. Mirror-BERT relies on fully identical or slightly modified string pairs as positive (i.e., synonymous) fine-tuning examples, and aims to maximise their similarity during identity fine-tuning. We report huge gains ...

Anthology paper link: https://aclanthology.org/2021.emnlp-main.109/
Keywords: Language Models; Natural Language Processing; Semantic Evaluation; Sociolinguistics
URL: https://dx.doi.org/10.48448/vrhm-zd20
URL: https://underline.io/lecture/37527-fast,-effective,-and-self-supervised-transforming-masked-language-models-into-universal-lexical-and-sentence-encoders
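Since the abstract above describes the core Mirror-BERT recipe (identical or slightly modified string pairs as positives, contrastive identity fine-tuning), a minimal sketch may help make the idea concrete. This is an illustration under assumptions, not the authors' released code: the checkpoint name, mean pooling, the 0.04 temperature, and the use of dropout alone to produce the two views are all illustrative choices.

```python
# Sketch of Mirror-BERT-style identity fine-tuning (illustrative, not the
# authors' exact recipe). Each string is encoded twice with dropout active,
# so the two passes yield slightly different views of the same input; an
# InfoNCE loss pulls the paired views together against in-batch negatives.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"   # assumed checkpoint; any MLM should work
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.train()                      # keep dropout on: two passes differ

def encode(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = model(**batch).last_hidden_state          # (B, L, H)
    mask = batch["attention_mask"].unsqueeze(-1)       # (B, L, 1)
    return (hidden * mask).sum(1) / mask.sum(1)        # mean pooling -> (B, H)

texts = ["glasses", "spectacles", "bank", "river bank"]
opt = torch.optim.AdamW(model.parameters(), lr=2e-5)

for step in range(10):             # the paper reports 20-30 s of tuning suffices
    z1 = F.normalize(encode(texts), dim=-1)  # view 1 (one dropout mask)
    z2 = F.normalize(encode(texts), dim=-1)  # view 2 (another dropout mask)
    logits = z1 @ z2.T / 0.04                # temperature is an assumed value
    labels = torch.arange(len(texts))        # positive pair sits on the diagonal
    loss = F.cross_entropy(logits, labels)   # InfoNCE over in-batch negatives
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The paper also describes slightly modified strings (e.g., random span masking) as positives; the sketch relies only on dropout noise, which is the simplest variant of the same identity fine-tuning idea.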
19. LexFit: Lexical Fine-Tuning of Pretrained Language Models ...
20. Verb Knowledge Injection for Multilingual Event Processing ...