1 | Delving Deeper into Cross-lingual Visual Question Answering
2 | Cross-Lingual Dialogue Dataset Creation via Outline-Based Generation
3 | Improving Word Translation via Two-Stage Contrastive Learning
5 | Crossing the Conversational Chasm: A Primer on Natural Language Processing for Multilingual Task-Oriented Dialogue Systems
6 | Learning Domain-Specialised Representations for Cross-Lingual Biomedical Entity Linking
7 | Combining Deep Generative Models and Multi-lingual Pretraining for Semi-supervised Document Classification
8 | MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models
9 | Context vs Target Word: Quantifying Biases in Lexical Semantic Datasets
10 | AM2iCo: Evaluating Word Meaning in Context across Low-Resource Languages with Adversarial Examples
11 | Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders
Abstract:
Pretrained Masked Language Models (MLMs) have revolutionised NLP in recent years. However, previous work has indicated that off-the-shelf MLMs are not effective as universal lexical or sentence encoders without further task-specific fine-tuning on NLI, sentence similarity, or paraphrasing tasks using annotated task data. In this work, we demonstrate that it is possible to turn MLMs into effective universal lexical and sentence encoders even without any additional data and without any supervision. We propose an extremely simple, fast and effective contrastive learning technique, termed Mirror-BERT, which converts MLMs (e.g., BERT and RoBERTa) into such encoders in 20-30 seconds without any additional external knowledge. Mirror-BERT relies on fully identical or slightly modified string pairs as positive (i.e., synonymous) fine-tuning examples, and aims to maximise their similarity during identity fine-tuning. We report huge gains over off-the-shelf MLMs with Mirror-BERT in both lexical-level and sentence-level tasks ...
Comment: EMNLP 2021 camera-ready version
Keywords:
Artificial Intelligence (cs.AI); Computation and Language (cs.CL); FOS: Computer and information sciences; Machine Learning (cs.LG)
URL: https://arxiv.org/abs/2104.08027
DOI: https://dx.doi.org/10.48550/arxiv.2104.08027
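The abstract above already contains the full recipe: treat two encodings of the same string as a positive pair and fine-tune the MLM contrastively so those views stay close while the other strings in the batch are pushed apart. A minimal PyTorch sketch of that identity fine-tuning idea follows. It is illustrative only, not the authors' released code: the mean pooling, dropout as the sole "slight modification", the 0.04 temperature, and the toy batch are assumptions made for brevity.

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Any MLM checkpoint can serve as the starting point (the paper names BERT and RoBERTa).
model_name = "bert-base-uncased"
tok = AutoTokenizer.from_pretrained(model_name)
enc = AutoModel.from_pretrained(model_name)
enc.train()  # keep dropout active: it makes two encodings of the same string differ

def embed(texts):
    # Mean-pool the final hidden states into one vector per string.
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = enc(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)     # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)      # (B, H)

def info_nce(z1, z2, tau=0.04):
    # Symmetric InfoNCE loss: each string's two views form the positive pair;
    # every other string in the batch acts as a negative.
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                       # (B, B) scaled cosine sims
    labels = torch.arange(z1.size(0))
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))

texts = ["cat", "feline", "stock market", "bond yields"]  # raw strings, no labels
opt = torch.optim.AdamW(enc.parameters(), lr=2e-5)

for step in range(10):  # the paper reports conversion within roughly 20-30 seconds
    opt.zero_grad()
    loss = info_nce(embed(texts), embed(texts))  # two dropout-perturbed views
    loss.backward()
    opt.step()

After this loop, the encoder with the same mean pooling can be used as a general-purpose lexical or sentence encoder. The paper additionally perturbs the input strings themselves (e.g., random span masking) to obtain "slightly modified" positives; this sketch relies on dropout alone.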
12 | XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning
13 | Emergent Communication Pretraining for Few-Shot Machine Translation
14 | A Closer Look at Few-Shot Crosslingual Transfer: The Choice of Shots Matters
15 | Verb Knowledge Injection for Multilingual Event Processing
16 | Multi-SimLex: A Large-Scale Evaluation of Multilingual and Cross-Lingual Lexical Semantic Similarity
17 | Probing Pretrained Language Models for Lexical Semantics
18 | The Secret is in the Spectra: Predicting Cross-lingual Task Performance with Spectral Similarity Measures
19 | Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity
20 | Do We Really Need Fully Unsupervised Cross-Lingual Embeddings?