1 |
RETRIEVING SPEAKER INFORMATION FROM PERSONALIZED ACOUSTIC MODELS FOR SPEECH RECOGNITION
In: IEEE ICASSP 2022, Singapore, 2022. https://hal.archives-ouvertes.fr/hal-03539741
BASE

2 |
From FreEM to D'AlemBERT: a Large Corpus and a Language Model for Early Modern French
In: Proceedings of the 13th Language Resources and Evaluation Conference, European Language Resources Association, Marseille, France, Jun 2022. https://hal.inria.fr/hal-03596653

3 |
A gentle introduction to Girard's Transcendental Syntax for the linear logician
In: https://hal.archives-ouvertes.fr/hal-02977750 (2022)

4 |
Learning and controlling the source-filter representation of speech with a variational autoencoder
In: https://hal.archives-ouvertes.fr/hal-03650569 (2022)

5 |
Hippocampal ensembles represent sequential relationships among an extended sequence of nonspatial events
In: Nature Communications, vol. 13, iss. 1 (2022)

6 |
Changes in the midst of a construction network: a diachronic construction grammar approach to complex prepositions denoting internal location
In: Cognitive Linguistics, De Gruyter, 2022 (ISSN 0936-5907; EISSN 1613-3641). ⟨10.1515/cog-2021-0128⟩. https://halshs.archives-ouvertes.fr/halshs-03637056

8 |
Le modèle Transformer : un « couteau suisse » pour le traitement automatique des langues [The Transformer model: a "Swiss army knife" for natural language processing]
In: Techniques de l'Ingénieur, 2022. ⟨10.51257/a-v1-in195⟩. https://hal.archives-ouvertes.fr/hal-03619077 ; https://www.techniques-ingenieur.fr/base-documentaire/innovation-th10/innovations-en-electronique-et-tic-42257210/transformer-des-reseaux-de-neurones-pour-le-traitement-automatique-des-langues-in195/

9 |
Imputing Out-of-Vocabulary Embeddings with LOVE Makes Language Models Robust with Little Cost
In: ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland, May 2022. https://hal.archives-ouvertes.fr/hal-03613101

11 |
Structured, flexible, and robust: comparing linguistic plans and explanations generated by humans and large language models ...

Abstract:
How much can be learned about the structure of thinking from the statistics of language alone? Large language models -- neural models trained on next-word prediction tasks over large corpora of text -- have made striking advances in modeling the statistical distribution of language. Sufficiently large corpora contain language in which humans describe their beliefs and intentions, their goals and plans, and their stories about occurrences in real and imaginary worlds. Richly structured cognitive processes underlie the language we produce; but can such structure be captured by modeling the distributional co-occurrence of words alone? Is language modeling alone flexible, accurate, and robust enough to generate language for novel, out-of-distribution queries, or are model-based approaches needed? In this study, we compare human and large-language-model performance on two domains that draw on structured, model-based thinking: 1) goal-based planning, and 2) explanation generation for causal ...

Keyword:
Artificial Intelligence and Robotics; Computer Sciences; GPT-3; Language Models; Natural Language Processing; Physical Sciences and Mathematics; Planning

URL: https://dx.doi.org/10.17605/osf.io/cy72b ; https://osf.io/cy72b/

12 |
From bag-of-words towards natural language: adapting topic models to avoid stop word removal ...

13 |
A Collection of Classroom Instruction ...

14 |
Biodiversity: how big is our global biodiversity debt and what can we do about it? ...

17 |
Bayesian data analysis in the phonetic sciences: A tutorial introduction ...

18 |
How Cognitive Abilities May Support Children’s Bilingual Literacy Development in a Multilingual Society ...

19 |
On the Transferability of Pre-trained Language Models for Low-Resource Programming Languages ...
Chen, Fuxiang. - Federated Research Data Repository / dépôt fédéré de données de recherche, 2022