1 | Multilingual Unsupervised Sentence Simplification
In: https://hal.inria.fr/hal-03109299 (2021)

2 | Text Generation with and without Retrieval ; Génération de textes basés sur la connaissance avec et sans recherche [Knowledge-Based Text Generation with and without Retrieval]
In: Thesis, Computer Science [cs], Université de Lorraine, 2021. English. ⟨NNT : 2021LORR0164⟩ ; https://hal.univ-lorraine.fr/tel-03542634

3 | The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation

6 | Findings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas

7 | Alternative Input Signals Ease Transfer in Multilingual Machine Translation

8 | AmericasNLI: Evaluating Zero-shot Natural Language Understanding of Pretrained Multilingual Models in Truly Low-resource Languages
Ebrahimi, Abteen; Mager, Manuel; Oncevay, Arturo; Chaudhary, Vishrav; Chiruzzo, Luis; Fan, Angela; Ortega, John; Ramos, Ricardo; Rios, Annette; Meza-Ruiz, Ivan; Giménez-Lugo, Gustavo A.; Mager, Elisabeth; Neubig, Graham; Palmer, Alexis; Coto-Solano, Rolando; Vu, Ngoc Thang; Kann, Katharina. arXiv, 2021
Abstract: Pretrained multilingual models are able to perform cross-lingual transfer in a zero-shot setting, even for languages unseen during pretraining. However, prior work evaluating performance on unseen languages has largely been limited to low-level, syntactic tasks, and it remains unclear if zero-shot learning of high-level, semantic tasks is possible for unseen languages. To explore this question, we present AmericasNLI, an extension of XNLI (Conneau et al., 2018) to 10 indigenous languages of the Americas. We conduct experiments with XLM-R, testing multiple zero-shot and translation-based approaches. Additionally, we explore model adaptation via continued pretraining and provide an analysis of the dataset by considering hypothesis-only models. We find that XLM-R's zero-shot performance is poor for all 10 languages, with an average performance of 38.62%. Continued pretraining offers improvements, with an average accuracy of 44.05%. Surprisingly, training on poorly translated data by far outperforms all other ... (Accepted to ACL 2022)
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2104.08726
DOI: https://dx.doi.org/10.48550/arxiv.2104.08726
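The zero-shot setting this abstract describes — an XLM-R classifier fine-tuned on English NLI data and applied unchanged to premise/hypothesis pairs in a language it was never fine-tuned on — can be sketched in a few lines of Python. This is a minimal illustration under assumptions, not the authors' evaluation code: the checkpoint name (joeddav/xlm-roberta-large-xnli, a public XNLI-style XLM-R classifier) and the example sentence pair are stand-ins.

# Minimal sketch of zero-shot NLI inference with XLM-R.
# Assumptions: HuggingFace transformers and torch are installed; the
# checkpoint below is a public XNLI-style classifier, not the paper's
# exact model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "joeddav/xlm-roberta-large-xnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

def predict_nli(premise: str, hypothesis: str) -> str:
    # Encode the sentence pair and return the highest-scoring NLI label
    # (entailment / neutral / contradiction).
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return model.config.id2label[int(logits.argmax(dim=-1))]

# Hypothetical input pair; in the paper, pairs come from the AmericasNLI
# test sets in one of the 10 indigenous languages.
print(predict_nli("El perro duerme en la casa.", "El perro está despierto."))

The abstract's other conditions map onto the same loop: continued pretraining swaps in an adapted checkpoint, and the translation-based approaches translate the premise and hypothesis before calling predict_nli.
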
9 | Multilingual AMR-to-Text Generation
In: 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Nov 2020, Punta Cana, Dominican Republic ; https://hal.archives-ouvertes.fr/hal-02999676

10 | Augmenting Transformers with KNN-Based Composite Memory for Dialog
In: Transactions of the Association for Computational Linguistics, The MIT Press, in press (2020). EISSN: 2307-387X ; ⟨10.1162/tacl_a_00356⟩ ; https://hal.archives-ouvertes.fr/hal-02999678 ; https://transacl.org/index.php/tacl

11 | Multilingual Translation with Extensible Multilingual Pretraining and Finetuning

15 | Beyond English-Centric Multilingual Machine Translation

16 | MUSS: Multilingual Unsupervised Sentence Simplification by Mining Paraphrases