
Search in the Catalogues and Directories

Hits 1 – 17 of 17

1
First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT
In: https://hal.inria.fr/hal-03161685 (2021)
BASE
2
Can Multilingual Language Models Transfer to an Unseen Dialect? A Case Study on North African Arabizi
In: https://hal.inria.fr/hal-03161677 (2021)
BASE
3
First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT
In: EACL 2021 - The 16th Conference of the European Chapter of the Association for Computational Linguistics, Apr 2021, Kyiv / Virtual, Ukraine ; https://hal.inria.fr/hal-03239087 ; https://2021.eacl.org/ (2021)
BASE
4
When Being Unseen from mBERT is just the Beginning: Handling New Languages With Multilingual Language Models
In: NAACL-HLT 2021 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Jun 2021, Mexico City, Mexico ; https://hal.inria.fr/hal-03251105 (2021)
BASE
5
PAGnol: An Extra-Large French Generative Model
In: [Research Report] LightOn (2021)
Abstract: Access to large pre-trained models of varied architectures, in many different languages, is central to the democratization of NLP. We introduce PAGnol, a collection of French GPT models. Using scaling laws, we efficiently train PAGnol-XL (1.5B parameters) with the same computational budget as CamemBERT, a model 13 times smaller. PAGnol-XL is the largest model trained to date for the French language. We plan to train increasingly large and better-performing versions of PAGnol, exploring the capabilities of French extreme-scale models. For this first release, we focus on the pre-training and scaling calculations underlying PAGnol. We fit a scaling law for compute for the French language and compare it with its English counterpart. We find that the pre-training dataset significantly conditions the quality of the outputs, with common datasets such as OSCAR leading to low-quality, offensive text. We evaluate our models on discriminative and generative tasks in French, comparing them to other state-of-the-art French and multilingual models, and reach the state of the art on the abstractive summarization task. Our research was conducted on the public GENCI Jean Zay supercomputer, and our models up to the Large size are made publicly available.
Keyword: [INFO.INFO-TT] Computer Science [cs]/Document and Text Processing
URL: https://hal.inria.fr/hal-03540159
BASE
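For context on what "fitting a scaling law for compute" (hit 5's abstract) involves, here is a minimal sketch assuming the standard Kaplan-style power-law form L(C) = a * C^(-b), fitted as a line in log-log space. The compute/loss points below are invented placeholders, not values from the PAGnol paper, and the paper's exact parametrization may differ.

```python
import numpy as np

# Hypothetical (compute, validation loss) measurements from small pilot runs.
# These numbers are illustrative only, not results from PAGnol.
compute = np.array([1e17, 1e18, 1e19, 1e20])  # compute budgets (arbitrary units)
loss = np.array([4.2, 3.6, 3.1, 2.7])         # made-up validation losses

# A power law L(C) = a * C**(-b) is linear in log-log space:
#   log L = log a - b * log C
# so a degree-1 polyfit recovers both the exponent and the prefactor.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
a, b = np.exp(intercept), -slope

def predicted_loss(c):
    """Extrapolate the fitted law to a larger compute budget."""
    return a * c ** (-b)

print(f"fitted L(C) = {a:.2f} * C^(-{b:.4f})")
print(f"extrapolated loss at C=1e21: {predicted_loss(1e21):.2f}")
```

Such a fit, done on small pilot models, is what lets one pick the model size that makes the best use of a fixed training budget before committing to a large run.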
6
Synthetic Data Augmentation for Zero-Shot Cross-Lingual Question Answering
In: https://hal.inria.fr/hal-03109187 (2021)
BASE
7
Noisy UGC Translation at the Character Level: Revisiting Open-Vocabulary Capabilities and Robustness of Char-Based Models
In: W-NUT 2021 - 7th Workshop on Noisy User-generated Text (colocated with EMNLP 2021), Association for Computational Linguistics, Nov 2021, Punta Cana, Dominican Republic ; https://hal.inria.fr/hal-03540174 (2021)
BASE
8
Understanding the Impact of UGC Specificities on Translation Quality
In: W-NUT 2021 - 7th Workshop on Noisy User-generated Text (colocated with EMNLP 2021), Association for Computational Linguistics, Nov 2021, Punta Cana, Dominican Republic ; https://hal.inria.fr/hal-03540175 (2021)
BASE
9
Challenging the Semi-Supervised VAE Framework for Text Classification
In: Second Workshop on Insights from Negative Results in NLP (colocated with EMNLP), Nov 2021, Punta Cana, Dominican Republic ; https://hal.inria.fr/hal-03540081 ; https://insights-workshop.github.io/2021/ (2021)
BASE
10
Deep Sequoia corpus - PARSEME-FR corpus - FrSemCor
BASE
11
IWPT 2021 Shared Task Data and System Outputs
Zeman, Daniel; Bouma, Gosse; Seddah, Djamé. - Universal Dependencies Consortium, 2021
BASE
12
Universal Dependencies 2.9
Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. - Universal Dependencies Consortium, 2021
BASE
13
Universal Dependencies 2.8.1
Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. - Universal Dependencies Consortium, 2021
BASE
14
Universal Dependencies 2.8
Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. - Universal Dependencies Consortium, 2021
BASE
15
Can Character-based Language Models Improve Downstream Task Performance in Low-Resource and Noisy Language Scenarios? ...
BASE
16
First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT ...
BASE
17
Synthetic Data Augmentation for Zero-Shot Cross-Lingual Question Answering ...
BASE

Results by source type:
Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 17