41 | Manual Clustering and Spatial Arrangement of Verbs for Multilingual Evaluation and Typology Analysis
42 | A Closer Look at Few-Shot Crosslingual Transfer: The Choice of Shots Matters
43 | Verb Knowledge Injection for Multilingual Event Processing
44 | Multi-SimLex: A Large-Scale Evaluation of Multilingual and Cross-Lingual Lexical Semantic Similarity
45 | Probing Pretrained Language Models for Lexical Semantics
46 | The Secret is in the Spectra: Predicting Cross-lingual Task Performance with Spectral Similarity Measures
47 | SemEval-2020 Task 2: Predicting Multilingual and Cross-Lingual (Graded) Lexical Entailment
48 | Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity

Abstract:
Unsupervised pretraining models have been shown to facilitate a wide range of downstream NLP applications. These models, however, retain some of the limitations of traditional static word embeddings. In particular, they encode only the distributional knowledge available in raw text corpora, incorporated through language modeling objectives. In this work, we complement such distributional knowledge with external lexical knowledge; that is, we integrate discrete knowledge of word-level semantic similarity into pretraining. To this end, we generalize the standard BERT model to a multi-task learning setting, coupling BERT’s masked language modeling and next sentence prediction objectives with an auxiliary task of binary word relation classification. Our experiments suggest that our “Lexically Informed” BERT (LIBERT), specialized for word-level semantic similarity, yields better performance than the lexically blind “vanilla” BERT on several language understanding tasks. Concretely, LIBERT ...

Keywords:
Computer and Information Science; Natural Language Processing; Neural Network

URL: https://dx.doi.org/10.48448/j696-8h54
URL: https://underline.io/lecture/6311-specializing-unsupervised-pretraining-models-for-word-level-semantic-similarity
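The abstract of item 48 describes the method concretely enough to illustrate. Below is a minimal PyTorch sketch of the multi-task setup it outlines: a shared Transformer encoder trained jointly on masked language modeling and an auxiliary binary word-relation classifier over word-pair inputs. This is not the authors' released LIBERT code; all sizes, token ids, names, and the single-masked-position batch are illustrative assumptions, and the next-sentence-prediction objective is omitted for brevity.

import torch
import torch.nn as nn

# Illustrative constants, not LIBERT's real configuration.
VOCAB, DIM = 1000, 128
PAD, MASK, CLS = 0, 1, 2

class LibertSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM, padding_idx=PAD)
        layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # shared by both tasks
        self.mlm_head = nn.Linear(DIM, VOCAB)  # masked-token prediction
        self.rel_head = nn.Linear(DIM, 2)      # binary word-relation classification

    def forward(self, ids):
        h = self.encoder(self.emb(ids))                   # (batch, seq, DIM)
        return self.mlm_head(h), self.rel_head(h[:, 0])   # relation read off [CLS]

model = LibertSketch()
loss_fn = nn.CrossEntropyLoss(ignore_index=-100)

# MLM batch: mask one position per sequence, supervise only that position.
mlm_ids = torch.randint(3, VOCAB, (8, 16))
mlm_labels = torch.full((8, 16), -100, dtype=torch.long)
mlm_labels[:, 5] = mlm_ids[:, 5].clone()
mlm_ids[:, 5] = MASK

# Word-pair batch: "[CLS] w1 w2"-style inputs with 0/1 relation labels.
pair_ids = torch.randint(3, VOCAB, (8, 4))
pair_ids[:, 0] = CLS
rel_labels = torch.randint(0, 2, (8,))

# One joint step: each batch uses its own head; the encoder gets both gradients.
mlm_logits, _ = model(mlm_ids)
_, rel_logits = model(pair_ids)
loss = (loss_fn(mlm_logits.reshape(-1, VOCAB), mlm_labels.reshape(-1))
        + loss_fn(rel_logits, rel_labels))
loss.backward()

The key design point the abstract names is the shared encoder: both objectives backpropagate into the same representation, which is how the discrete lexical-relation signal is injected into the pretrained model.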
49 | Cross-lingual semantic specialization via lexical relation induction
50 | Adversarial propagation and zero-shot cross-lingual transfer of word vector specialization
51 | Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity
52 | SemEval-2020 Task 2: Predicting Multilingual and Cross-Lingual (Graded) Lexical Entailment
53 | Do we really need fully unsupervised cross-lingual embeddings?
54 | Multi-SimLex: A Large-Scale Evaluation of Multilingual and Cross-Lingual Lexical Semantic Similarity
55 | Probing Pretrained Language Models for Lexical Semantics
56 | Classification-Based Self-Learning for Weakly Supervised Bilingual Lexicon Induction
57 | On the relation between linguistic typology and (limitations of) multilingual language modeling
58 | Improving Bilingual Lexicon Induction with Unsupervised Post-Processing of Monolingual Word Vector Spaces
59 | The Secret is in the Spectra: Predicting Cross-Lingual Task Performance with Spectral Similarity Measures
60 | Spatial multi-arrangement for clustering and multi-way similarity dataset construction