1. Parameter-Efficient Neural Reranking for Cross-Lingual and Multilingual Retrieval
3. Crossing the Conversational Chasm: A Primer on Natural Language Processing for Multilingual Task-Oriented Dialogue Systems
4. On Cross-Lingual Retrieval with Multilingual Text Encoders
5. Evaluating Multilingual Text Encoders for Unsupervised Cross-Lingual Retrieval
6. AraWEAT: Multidimensional Analysis of Biases in Arabic Word Embeddings
7. XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning
8. On the Limitations of Cross-lingual Encoders as Exposed by Reference-Free Machine Translation Evaluation
9. Orthogonal Language and Task Adapters in Zero-Shot Cross-Lingual Transfer
10. From Zero to Hero: On the Limitations of Zero-Shot Cross-Lingual Transfer with Multilingual Transformers
11. Verb Knowledge Injection for Multilingual Event Processing
12. Probing Pretrained Language Models for Lexical Semantics
13. Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity
Abstract:
Unsupervised pretraining models have been shown to facilitate a wide range of downstream NLP applications. These models, however, retain some of the limitations of traditional static word embeddings. In particular, they encode only the distributional knowledge available in raw text corpora, incorporated through language modeling objectives. In this work, we complement such distributional knowledge with external lexical knowledge, that is, we integrate the discrete knowledge on word-level semantic similarity into pretraining. To this end, we generalize the standard BERT model to a multi-task learning setting where we couple BERT's masked language modeling and next sentence prediction objectives with an auxiliary task of binary word relation classification. Our experiments suggest that our "Lexically Informed" BERT (LIBERT), specialized for the word-level semantic similarity, yields better performance than the lexically blind "vanilla" BERT on several language understanding tasks. Concretely, LIBERT ...
Keywords:
Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/1909.02339
DOI: https://dx.doi.org/10.48550/arxiv.1909.02339
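The multi-task setup the abstract describes (BERT's masked language modeling and next-sentence prediction coupled with an auxiliary binary word-relation classifier over a shared encoder) can be illustrated with a minimal PyTorch sketch. This is a toy simplification under stated assumptions, not the paper's LIBERT implementation: the class name, layer sizes, the use of position 0 as a [CLS] proxy, the loss over all token positions rather than masked ones only, and the unweighted sum of the three losses are all hypothetical choices.

```python
import torch
import torch.nn as nn

class MultiTaskPretrainer(nn.Module):
    """Toy sketch: a shared encoder with three heads, in the spirit of
    coupling MLM + NSP with binary word-relation classification.
    All names and dimensions here are illustrative, not LIBERT's."""

    def __init__(self, vocab_size=1000, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.mlm_head = nn.Linear(hidden, vocab_size)   # predicts (masked) tokens
        self.nsp_head = nn.Linear(hidden, 2)            # next-sentence prediction
        self.rel_head = nn.Linear(2 * hidden, 2)        # binary word-relation classifier

    def forward(self, token_ids, word_pairs):
        h = self.encoder(self.embed(token_ids))         # (batch, seq, hidden)
        mlm_logits = self.mlm_head(h)                   # token-level predictions
        nsp_logits = self.nsp_head(h[:, 0])             # position 0 as a [CLS] proxy
        # embed each (w1, w2) pair and classify the relation; in the paper's
        # setting these pairs would come from an external lexical resource
        rel_logits = self.rel_head(self.embed(word_pairs).flatten(1))
        return mlm_logits, nsp_logits, rel_logits

# joint objective: plain sum of three cross-entropies (weighting is a free choice)
model = MultiTaskPretrainer()
tokens = torch.randint(0, 1000, (8, 16))
mlm_targets = torch.randint(0, 1000, (8, 16))
nsp_targets = torch.randint(0, 2, (8,))
pairs = torch.randint(0, 1000, (8, 2))
rel_targets = torch.randint(0, 2, (8,))

mlm_logits, nsp_logits, rel_logits = model(tokens, pairs)
ce = nn.CrossEntropyLoss()
loss = (ce(mlm_logits.reshape(-1, 1000), mlm_targets.reshape(-1))
        + ce(nsp_logits, nsp_targets)
        + ce(rel_logits, rel_targets))
loss.backward()
```

The point of the sketch is the shape of the objective: one encoder, three heads, one summed loss, so gradients from the word-relation task flow back into the same parameters that serve the language modeling objectives.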
14. Do We Really Need Fully Unsupervised Cross-Lingual Embeddings?
15. How to (Properly) Evaluate Cross-Lingual Word Embeddings: On Strong Baselines, Comparative Analyses, and Some Misconceptions
16. Unsupervised Cross-Lingual Information Retrieval using Monolingual Data Only
17. Adversarial Propagation and Zero-Shot Cross-Lingual Transfer of Word Vector Specialization
18. Post-Specialisation: Retrofitting Vectors of Words Unseen in Lexical Resources
19. A Resource-Light Method for Cross-Lingual Semantic Textual Similarity