
Search in the Catalogues and Directories

Hits 1–20 of 129 (page 1 of 7)

1. Delving Deeper into Cross-lingual Visual Question Answering (BASE)
2. Combating Temporal Drift in Crisis with Adapted Embeddings (BASE)
   Stowe, Kevin; Gurevych, Iryna. arXiv, 2021
3. Annotation Curricula to Implicitly Train Non-Expert Annotators (BASE)
4. Smelting Gold and Silver for Improved Multilingual AMR-to-Text Generation (BASE)
5. xGQA: Cross-Lingual Visual Question Answering (BASE)
6. BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models (BASE)
7. Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning (BASE)
8. Metaphor Generation with Conceptual Mappings (BASE)
9. GPL: Generative Pseudo Labeling for Unsupervised Domain Adaptation of Dense Retrieval (BASE)
10. Modeling Global and Local Node Contexts for Text Generation from Knowledge Graphs (BASE)
   In: Transactions of the Association for Computational Linguistics, The MIT Press, 2020, vol. 8 (EISSN 2307-387X). DOI: 10.1162/tacl_a_00332. https://hal.archives-ouvertes.fr/hal-03020314
11. Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation (BASE)
   Reimers, Nils; Gurevych, Iryna. arXiv, 2020
12. How to Probe Sentence Embeddings in Low-Resource Languages: On Structural Design Choices for Probing Task Evaluation (BASE)
13. MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer (BASE)
14. Predicting the Humorousness of Tweets Using Gaussian Process Preference Learning (BASE)
15. How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models (BASE)
   Abstract: In this work, we provide a systematic and comprehensive empirical comparison of pretrained multilingual language models versus their monolingual counterparts with regard to their monolingual task performance. We study a set of nine typologically diverse languages with readily available pretrained monolingual models on a set of five diverse monolingual downstream tasks. We first aim to establish, via fair and controlled comparisons, whether a gap exists between the multilingual and the corresponding monolingual representation of that language, and subsequently investigate the reason for any performance difference. To disentangle conflating factors, we train new monolingual models on the same data, with monolingually and multilingually trained tokenizers. We find that while the pretraining data size is an important factor, a designated monolingual tokenizer plays an equally important role in the downstream performance. Our results show that languages that are adequately represented in the multilingual model's ... (ACL 2021; a tokenizer-comparison sketch follows the hit list below)
   Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
   URL: https://arxiv.org/abs/2012.15613
   DOI: https://dx.doi.org/10.48550/arxiv.2012.15613
16. UNKs Everywhere: Adapting Multilingual Language Models to New Scripts (BASE)
17. PuzzLing Machines: A Challenge on Learning From Small Data (BASE)
18. A Matter of Framing: The Impact of Linguistic Formalism on Probing Results (BASE)
   Kuznetsov, Ilia; Gurevych, Iryna. arXiv, 2020
19. Modeling Global and Local Node Contexts for Text Generation from Knowledge Graphs (BASE)
20. Empowering Active Learning to Jointly Optimize System and User Demands (BASE)
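
The abstract of hit 15 attributes monolingual-vs-multilingual performance gaps in part to the tokenizer. As a rough illustration of what such a comparison can look like in practice (a minimal sketch, not the paper's own code), the Python snippet below contrasts how a multilingual and a German monolingual BERT tokenizer segment the same sentence; the checkpoint names and the fertility measure (subwords per whitespace-separated word) are illustrative choices, not the paper's exact setup.

# Minimal sketch, assuming the Hugging Face `transformers` package and two
# commonly used public checkpoints. Higher fertility (more subwords per
# word) usually signals a vocabulary that fits the language less well.
from transformers import AutoTokenizer

TEXT = "Die Donaudampfschifffahrtsgesellschaft stellte neue Kapitäne ein."

CHECKPOINTS = {
    "multilingual": "bert-base-multilingual-cased",
    "monolingual (de)": "bert-base-german-cased",
}

for label, name in CHECKPOINTS.items():
    tokenizer = AutoTokenizer.from_pretrained(name)
    subwords = tokenizer.tokenize(TEXT)
    fertility = len(subwords) / len(TEXT.split())
    print(f"{label:18s} {len(subwords):3d} subwords, fertility = {fertility:.2f}")
    print("  ", subwords)

On morphologically rich languages, a multilingual tokenizer typically splits words into more pieces than a dedicated monolingual one; the paper's finding is that this tokenizer fit matters for downstream performance roughly as much as pretraining data size.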

