
Search in the Catalogues and Directories

Hits 1 – 17 of 17

1
Geographic Adaptation of Pretrained Language Models ...
BASE
2
Data for paper: "Evaluating Resource-Lean Cross-Lingual Embedding Models in Unsupervised Retrieval" ...
Litschko, Robert; Glavaš, Goran. - : Mannheim University Library, 2021
BASE
3
Crossing the Conversational Chasm: A Primer on Natural Language Processing for Multilingual Task-Oriented Dialogue Systems ...
BASE
4
On Cross-Lingual Retrieval with Multilingual Text Encoders ...
BASE
5
Evaluating Multilingual Text Encoders for Unsupervised Cross-Lingual Retrieval ...
BASE
6
AraWEAT: Multidimensional Analysis of Biases in Arabic Word Embeddings ...
BASE
7
XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning ...
BASE
8
On the Limitations of Cross-lingual Encoders as Exposed by Reference-Free Machine Translation Evaluation ...
BASE
9
Orthogonal Language and Task Adapters in Zero-Shot Cross-Lingual Transfer ...
BASE
10
From Zero to Hero: On the Limitations of Zero-Shot Cross-Lingual Transfer with Multilingual Transformers ...
BASE
11
Probing Pretrained Language Models for Lexical Semantics ...
Abstract: The success of large pretrained language models (LMs) such as BERT and RoBERTa has sparked interest in probing their representations, in order to unveil what types of knowledge they implicitly capture. While prior research focused on morphosyntactic, semantic, and world knowledge, it remains unclear to what extent LMs also derive lexical type-level knowledge from words in context. In this work, we present a systematic empirical analysis across six typologically diverse languages and five different lexical tasks, addressing the following questions: 1) How do different lexical knowledge extraction strategies (monolingual versus multilingual source LM, out-of-context versus in-context encoding, inclusion of special tokens, and layer-wise averaging) impact performance? How consistent are the observed effects across tasks and languages? 2) Is lexical knowledge stored in few parameters, or is it scattered throughout the network? 3) How do these representations fare against traditional static word vectors in ... : EMNLP 2020: Long paper ...
Keyword: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2010.05731
https://dx.doi.org/10.48550/arxiv.2010.05731
BASE
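(The extraction strategies named in this abstract are illustrated in a short code sketch after the results list below.)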
12
Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity ...
BASE
13
Do We Really Need Fully Unsupervised Cross-Lingual Embeddings? ...
BASE
14
Informing unsupervised pretraining with external linguistic knowledge
Lauscher, Anne; Vulić, Ivan; Ponti, Edoardo Maria. - : Cornell University, 2019
BASE
15
Unsupervised Cross-Lingual Information Retrieval using Monolingual Data Only ...
BASE
16
Post-Specialisation: Retrofitting Vectors of Words Unseen in Lexical Resources ...
BASE
17
A Resource-Light Method for Cross-Lingual Semantic Textual Similarity ...
BASE
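The abstract of record 11 is the only one expanded above, and it names concrete design dimensions for extracting lexical knowledge from a pretrained LM: monolingual versus multilingual source model, out-of-context versus in-context encoding, inclusion of special tokens, and layer-wise averaging. The sketch below illustrates the out-of-context variant of that setup. It is a minimal illustration, not the authors' released code: the model name bert-base-multilingual-cased, the helper word_vector, and the layer range are assumptions chosen for this example, and the only dependencies are the Hugging Face transformers library and PyTorch.

```python
# Minimal sketch (an assumption, not the paper's released code) of the
# lexical probing setup described in the abstract of record 11:
# encode a word out of context with a pretrained LM and average its
# subword representations layer-wise.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-multilingual-cased"  # stand-in for any probed LM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def word_vector(word: str,
                layers: range = range(1, 13),
                special_tokens: bool = True) -> torch.Tensor:
    """Out-of-context encoding: the word is fed to the LM on its own.

    `layers` selects which hidden layers to average (layer-wise averaging);
    `special_tokens` toggles the inclusion of [CLS]/[SEP].
    """
    enc = tokenizer(word, return_tensors="pt", add_special_tokens=special_tokens)
    with torch.no_grad():
        # Tuple of tensors: embedding layer plus one tensor per Transformer layer.
        hidden = model(**enc).hidden_states
    # Stack the chosen layers, then average over layers and subword positions.
    stacked = torch.stack([hidden[l][0] for l in layers])  # (n_layers, seq_len, dim)
    return stacked.mean(dim=(0, 1))  # (dim,)

# Example: cosine similarity between two out-of-context word vectors,
# the kind of score a lexical semantic similarity probe would compare
# against human judgements.
v1, v2 = word_vector("cat"), word_vector("dog")
print(torch.nn.functional.cosine_similarity(v1, v2, dim=0).item())
```

Toggling special_tokens and restricting layers correspond to two of the extraction strategies the abstract enumerates; an in-context variant would instead encode the word inside a full sentence and pool only its subword positions.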

Catalogues: 0 · Bibliographies: 0 · Linked Open Data catalogues: 0 · Online resources: 0 · Open access documents: 17