1. Welcome to the Modern World of Pronouns: Identity-Inclusive Natural Language Processing beyond Gender ...
3. MultiCite: Modeling realistic citations requires moving beyond the single-sentence single-label setting ...
4. RedditBias: A Real-World Resource for Bias Evaluation and Debiasing of Conversational Language Models ...
6. AraWEAT: Multidimensional Analysis of Biases in Arabic Word Embeddings ...
7. Rhetoric, Logic, and Dialectic: Advancing Theory-based Argument Quality Assessment in Natural Language Processing ...
8. Rhetoric, Logic, and Dialectic: Advancing Theory-based Argument Quality Assessment in Natural Language Processing ...
9. Creating a Domain-diverse Corpus for Theory-based Argument Quality Assessment ...
10. From Zero to Hero: On the Limitations of Zero-Shot Cross-Lingual Transfer with Multilingual Transformers ...
11. From Zero to Hero: On the Limitations of Zero-Shot Cross-Lingual Transfer with Multilingual Transformers ...
12. Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity ...
Abstract:
Unsupervised pretraining models have been shown to facilitate a wide range of downstream NLP applications. These models, however, retain some of the limitations of traditional static word embeddings. In particular, they encode only the distributional knowledge available in raw text corpora, incorporated through language modeling objectives. In this work, we complement such distributional knowledge with external lexical knowledge, that is, we integrate the discrete knowledge on word-level semantic similarity into pretraining. To this end, we generalize the standard BERT model to a multi-task learning setting where we couple BERT’s masked language modeling and next sentence prediction objectives with an auxiliary task of binary word relation classification. Our experiments suggest that our “Lexically Informed” BERT (LIBERT), specialized for the word-level semantic similarity, yields better performance than the lexically blind “vanilla” BERT on several language understanding tasks. Concretely, LIBERT ...
Keywords:
Computer and Information Science; Natural Language Processing; Neural Network
URL: https://dx.doi.org/10.48448/j696-8h54 https://underline.io/lecture/6311-specializing-unsupervised-pretraining-models-for-word-level-semantic-similarity
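The abstract above describes LIBERT as a multi-task extension of BERT pretraining: the masked language modeling and next-sentence prediction objectives are trained jointly with an auxiliary binary word-relation classifier fed from an external lexical resource. The following is a minimal, illustrative sketch of that kind of setup in PyTorch, not the authors' implementation; the class name, the choice to encode each word pair as its own short input sequence, and the loss weight aux_weight are assumptions made here for clarity.

    import torch
    import torch.nn as nn
    from transformers import BertConfig, BertModel


    class LexicallyInformedBert(nn.Module):
        """Shared BERT encoder with three heads: masked LM, next-sentence
        prediction, and an auxiliary binary word-relation classifier
        (sketch of the multi-task idea, not the original LIBERT code)."""

        def __init__(self, config: BertConfig, aux_weight: float = 1.0):
            super().__init__()
            self.bert = BertModel(config)
            self.mlm_head = nn.Linear(config.hidden_size, config.vocab_size)
            self.nsp_head = nn.Linear(config.hidden_size, 2)
            # Auxiliary task from the abstract: is this word pair lexically related?
            self.relation_head = nn.Linear(config.hidden_size, 2)
            self.aux_weight = aux_weight
            self.loss_fct = nn.CrossEntropyLoss()  # ignores -100 labels by default

        def forward(self, input_ids, attention_mask,
                    mlm_labels=None, nsp_labels=None, relation_labels=None):
            out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
            seq_output = out.last_hidden_state      # (batch, seq_len, hidden)
            pooled = out.pooler_output              # (batch, hidden)

            loss = torch.zeros((), device=input_ids.device)
            if mlm_labels is not None:              # masked positions hold token ids, rest -100
                logits = self.mlm_head(seq_output)
                loss = loss + self.loss_fct(logits.view(-1, logits.size(-1)),
                                            mlm_labels.view(-1))
            if nsp_labels is not None:              # 0 = "B follows A", 1 = random B
                loss = loss + self.loss_fct(self.nsp_head(pooled), nsp_labels)
            if relation_labels is not None:         # word pair packed into a short sequence
                loss = loss + self.aux_weight * self.loss_fct(
                    self.relation_head(pooled), relation_labels)
            return loss

A training loop in this sketch would mix batches of ordinary text (driving the two language-modeling losses) with batches of labeled word pairs (driving the relation loss), so that all three objectives update the same shared encoder, which is how the abstract describes lexical knowledge being injected into pretraining.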
13. Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity ...
14. Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity
Lauscher, Anne; Vulic, Ivan; Ponti, Edoardo. In: Proceedings of the 28th International Conference on Computational Linguistics (COLING 2020), International Committee on Computational Linguistics, 2020. https://www.aclweb.org/anthology/2020.coling-main.118
15. From Zero to Hero: On the Limitations of Zero-Shot Cross-Lingual Transfer with Multilingual Transformers
16. Specializing unsupervised pretraining models for word-level semantic similarity
17. AraWEAT: Multidimensional analysis of biases in Arabic word embeddings
18. Common sense or world knowledge? Investigating adapter-based knowledge injection into pretrained transformers
19. From zero to hero: On the limitations of zero-shot language transfer with multilingual transformers
20. Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity ...