
Search in the Catalogues and Directories

Hits 1 – 19 of 19

1
LeBenchmark: A Reproducible Framework for Assessing Self-Supervised Representation Learning from Speech
In: INTERSPEECH 2021: Conference of the International Speech Communication Association, Aug 2021, Brno, Czech Republic (2021); https://hal.archives-ouvertes.fr/hal-03317730
BASE
2
LeBenchmark: A Reproducible Framework for Assessing Self-Supervised Representation Learning from Speech
In: INTERSPEECH 2021: Conference of the International Speech Communication Association, Aug 2021, Brno, Czech Republic (2021); https://hal.archives-ouvertes.fr/hal-03317730
BASE
3
LeBenchmark: A Reproducible Framework for Assessing Self-Supervised Representation Learning from Speech
In: INTERSPEECH 2021: Conference of the International Speech Communication Association, Aug 2021, Brno, Czech Republic (2021); https://hal.archives-ouvertes.fr/hal-03317730
BASE
4
Lightweight Adapter Tuning for Multilingual Speech Translation
In: The Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021), Aug 2021, Bangkok (Virtual), Thailand (2021); https://hal.archives-ouvertes.fr/hal-03294912
Abstract: Adapter modules were recently introduced as an efficient alternative to fine-tuning in NLP. Adapter tuning consists of freezing the pretrained parameters of a model and injecting lightweight modules between layers, resulting in the addition of only a small number of task-specific trainable parameters. While adapter tuning has been investigated for multilingual neural machine translation, this paper proposes a comprehensive analysis of adapters for multilingual speech translation (ST). Starting from different pre-trained models (a multilingual ST model trained on parallel data or a multilingual BART (mBART) trained on non-parallel multilingual data), we show that adapters can be used to: (a) efficiently specialize ST to specific language pairs at a low extra cost in parameters, and (b) transfer from an automatic speech recognition (ASR) task and an mBART pretrained model to a multilingual ST task. Experiments show that adapter tuning offers results competitive with full fine-tuning while being much more parameter-efficient. (A minimal illustrative sketch of the adapter mechanism follows this entry.)
Keyword: [INFO.INFO-AI]Computer Science [cs]/Artificial Intelligence [cs.AI]
URL: https://hal.archives-ouvertes.fr/hal-03294912/document
https://hal.archives-ouvertes.fr/hal-03294912
https://hal.archives-ouvertes.fr/hal-03294912/file/adapting_multilingual_st-acl2021.pdf
BASE
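As a rough illustration of the adapter-tuning idea summarized in the abstract above (not the authors' actual implementation, which builds on their multilingual ST and mBART backbones), the sketch below shows a bottleneck adapter added after a frozen pretrained layer. The names Adapter, bottleneck_dim, and freeze_and_attach_adapters are illustrative assumptions, not identifiers from the paper or its code.

import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: LayerNorm -> down-projection -> ReLU -> up-projection,
    added back to the input through a residual connection."""

    def __init__(self, d_model: int, bottleneck_dim: int = 64):
        super().__init__()
        self.layer_norm = nn.LayerNorm(d_model)
        self.down = nn.Linear(d_model, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual connection keeps the frozen pretrained representation intact;
        # only the small down/up projections are learned per task or language pair.
        return x + self.up(torch.relu(self.down(self.layer_norm(x))))

def freeze_and_attach_adapters(model: nn.Module, d_model: int, num_layers: int,
                               bottleneck_dim: int = 64) -> nn.ModuleList:
    """Hypothetical helper: freeze every pretrained parameter and return one
    trainable adapter per layer. How the adapters are wired into the forward
    pass depends on the backbone and is not shown here."""
    for p in model.parameters():
        p.requires_grad = False  # pretrained weights stay fixed
    return nn.ModuleList(Adapter(d_model, bottleneck_dim) for _ in range(num_layers))

if __name__ == "__main__":
    # Toy usage: apply one adapter to a batch of 10 frames with d_model = 512.
    adapter = Adapter(d_model=512, bottleneck_dim=64)
    hidden = torch.randn(2, 10, 512)  # (batch, time, features)
    print(adapter(hidden).shape)      # torch.Size([2, 10, 512])

Because the pretrained weights stay frozen, only the adapter projections (and any task-specific heads) contribute trainable parameters, which is what makes the approach parameter-efficient for per-language-pair specialization.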
5
ON-TRAC' systems for the IWSLT 2021 low-resource speech translation and multilingual speech translation shared tasks
In: Proceedings of the 18th International Conference on Spoken Language Translation (IWSLT), Aug 2021, Bangkok (virtual), Thailand. ⟨10.18653/v1/2021.iwslt-1.20⟩ (2021); https://hal.archives-ouvertes.fr/hal-03298854
BASE
6
Lightweight Adapter Tuning for Multilingual Speech Translation
Le, Hang; Pino, Juan; Wang, Changhan. arXiv, 2021
BASE
7
Lightweight Adapter Tuning for Multilingual Speech Translation
BASE
8
FlauBERT: Unsupervised Language Model Pre-training for French
In: Proceedings of the 12th Language Resources and Evaluation Conference (LREC), 2020, Marseille, France (2020); https://hal.archives-ouvertes.fr/hal-02890258
BASE
9
FlauBERT : Unsupervised Language Model Pre-training for French ; FlauBERT : des modèles de langue contextualisés pré-entraînés pour le français
In: Actes de la 6e conférence conjointe Journées d'Études sur la Parole (JEP, 33e édition), Traitement Automatique des Langues Naturelles (TALN, 27e édition), Rencontre des Étudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RÉCITAL, 22e édition). Volume 2 : Traitement Automatique des Langues Naturelles, Jun 2020, Nancy, France. pp. 268-278 (2020); https://hal.archives-ouvertes.fr/hal-02784776
BASE
10
Dual-decoder Transformer for Joint Automatic Speech Recognition and Multilingual Speech Translation
In: COLING 2020 (long paper), Dec 2020, Virtual, Spain (2020); https://hal.archives-ouvertes.fr/hal-02991564
BASE
11
Dual-decoder Transformer for Joint Automatic Speech Recognition and Multilingual Speech Translation
Le, Hang; Pino, Juan; Wang, Changhan. arXiv, 2020
BASE
12
Dual-decoder Transformer for Joint Automatic Speech Recognition and Multilingual Speech Translation
Le, Hang; Pino, Juan; Wang, Changhan. Zenodo, 2020
BASE
13
Dual-decoder Transformer for Joint Automatic Speech Recognition and Multilingual Speech Translation
Le, Hang; Pino, Juan; Wang, Changhan. Zenodo, 2020
BASE
14
Dual-decoder Transformer for Joint Automatic Speech Recognition and Multilingual Speech Translation
Le, Hang; Pino, Juan; Wang, Changhan. Zenodo, 2020
BASE
15
Dual-decoder Transformer for Joint Automatic Speech Recognition and Multilingual Speech Translation
Le, Hang; Pino, Juan; Wang, Changhan. Zenodo, 2020
BASE
16
Dual-decoder Transformer for Joint Automatic Speech Recognition and Multilingual Speech Translation
Le, Hang; Pino, Juan; Wang, Changhan. Zenodo, 2020
BASE
17
Emergence of Separable Manifolds in Deep Language Representations
BASE
18
Dual-decoder Transformer for Joint Automatic Speech Recognition and Multilingual Speech Translation
BASE
19
Influences on smartphone adoption by language learners
Doan, Nguyen Thi Le Hang. CALL-EJ, 2018
BASE

Hits by source type: Catalogues 0 | Bibliographies 0 | Linked Open Data catalogues 0 | Online resources 0 | Open access documents 19