1 | MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models ... (BASE)
2 | Context vs Target Word: Quantifying Biases in Lexical Semantic Datasets ... (BASE)
3 | AM2iCo: Evaluating Word Meaning in Context across Low-Resource Languages with Adversarial Examples ... (BASE)
4 | Improving Machine Translation of Rare and Unseen Word Senses ... (BASE)
5 | XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning ... (BASE)
6 | Investigating cross-lingual alignment methods for contextualized embeddings with Token-level evaluation ... (BASE)
Abstract:
In this paper, we present a thorough investigation of methods that align pre-trained contextualized embeddings into a shared cross-lingual, context-aware embedding space, providing strong reference benchmarks for future context-aware cross-lingual models. We propose a novel and challenging task, Bilingual Token-level Sense Retrieval (BTSR), which specifically evaluates the accurate alignment of words with the same meaning in cross-lingual non-parallel contexts, something not currently evaluated by existing tasks such as Bilingual Contextual Word Similarity and Sentence Retrieval. We show how the proposed BTSR task highlights the merits of different alignment methods. In particular, we find that context-average type-level alignment is effective in transferring monolingual contextualized embeddings cross-lingually, especially in non-parallel contexts, and at the same time improves the monolingual space. Furthermore, aligning independently trained models yields better performance than aligning multilingual embeddings with ... (A minimal sketch of the context-average alignment idea appears after this result list.)
Funding: Peterhouse College Studentship; ERC Consolidator Grant LEXICAL ...
URL: https://dx.doi.org/10.17863/cam.44042 https://www.repository.cam.ac.uk/handle/1810/297000
7 | Second-order contexts from lexical substitutes for few-shot learning of word representations ... (BASE)
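
The abstract of result 6 mentions context-average type-level alignment: averaging a word type's contextualized embeddings over many contexts to obtain one anchor vector per type, then learning a cross-lingual map between anchors. The sketch below illustrates that idea only; it is not the authors' code. The function names, the toy bilingual dictionary, and the random vectors standing in for real contextualized (e.g. BERT) outputs are all assumptions, and the map is solved as an orthogonal Procrustes problem, a standard choice for this kind of embedding alignment.

# Minimal sketch (not the paper's code): context-average type-level alignment.
# Anchors = mean of a word type's contextualized vectors; the cross-lingual
# map is an orthogonal Procrustes solution fit on a bilingual dictionary.
import numpy as np

def type_anchors(ctx_embs):
    """Average each word type's contextual embeddings into one anchor vector."""
    return {w: np.mean(np.stack(vecs), axis=0) for w, vecs in ctx_embs.items()}

def procrustes_map(X, Y):
    """Orthogonal W minimizing ||X @ W - Y||_F (closed form via SVD)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy data: random vectors stand in for contextualized embeddings collected
# from monolingual corpora; 'pairs' is a small source-target dictionary.
rng = np.random.default_rng(0)
pairs = [("dog", "perro"), ("house", "casa"), ("water", "agua")]
src_ctx = {s: [rng.normal(size=8) for _ in range(5)] for s, _ in pairs}
tgt_ctx = {t: [rng.normal(size=8) for _ in range(5)] for _, t in pairs}

src_a, tgt_a = type_anchors(src_ctx), type_anchors(tgt_ctx)
X = np.stack([src_a[s] for s, _ in pairs])
Y = np.stack([tgt_a[t] for _, t in pairs])
W = procrustes_map(X, Y)

# Any source-side token embedding can now be projected into the target space;
# BTSR-style retrieval would rank target tokens by cosine similarity to it.
projected_dog = src_a["dog"] @ W

The type-level anchor step is what lets the map be learned from a plain word dictionary while still being applied to token-level, context-dependent vectors, which matches the abstract's claim that the approach transfers well even in non-parallel contexts.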