
Search in the Catalogues and Directories

Hits 1 – 7 of 7

1
Establishing a New State-of-the-Art for French Named Entity Recognition
In: LREC 2020 - 12th Language Resources and Evaluation Conference ; https://hal.inria.fr/hal-02617950 ; LREC 2020 - 12th Language Resources and Evaluation Conference, May 2020, Marseille, France ; http://www.lrec-conf.org (2020)
BASE
2
Modelling Etymology in LMF/TEI: The Grande Dicionário Houaiss da Língua Portuguesa Dictionary as a Use Case
In: LREC 2020 - 12th Language Resources and Evaluation Conference ; https://hal.inria.fr/hal-02618067 ; LREC 2020 - 12th Language Resources and Evaluation Conference, May 2020, Marseille, France ; http://www.lrec-conf.org (2020)
BASE
3
CamemBERT: a Tasty French Language Model
In: ACL 2020 - 58th Annual Meeting of the Association for Computational Linguistics ; https://hal.inria.fr/hal-02889805 ; ACL 2020 - 58th Annual Meeting of the Association for Computational Linguistics, Jul 2020, Seattle / Virtual, United States. ⟨10.18653/v1/2020.acl-main.645⟩ (2020)
Abstract: Pretrained language models are now ubiquitous in Natural Language Processing. Despite their success, most available models have either been trained on English data or on the concatenation of data in multiple languages. This makes practical use of such models, in all languages except English, very limited. In this paper, we investigate the feasibility of training monolingual Transformer-based language models for other languages, taking French as an example and evaluating our language models on part-of-speech tagging, dependency parsing, named entity recognition and natural language inference tasks. We show that the use of web crawled data is preferable to the use of Wikipedia data. More surprisingly, we show that a relatively small web crawled dataset (4GB) leads to results that are as good as those obtained using larger datasets (130+GB). Our best-performing model, CamemBERT, reaches or improves the state of the art in all four downstream tasks.
Keyword: [INFO.INFO-AI]Computer Science [cs]/Artificial Intelligence [cs.AI]; [INFO.INFO-CL]Computer Science [cs]/Computation and Language [cs.CL]
URL: https://hal.inria.fr/hal-02889805/file/ACL_2020___CamemBERT__a_Tasty_French_Language_Model-6.pdf
https://hal.inria.fr/hal-02889805
https://hal.inria.fr/hal-02889805/document
https://doi.org/10.18653/v1/2020.acl-main.645
BASE
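Note: the record above describes a publicly released French language model. As a minimal sketch of how such a model can be tried out, the snippet below runs a fill-mask query through the Hugging Face transformers library; the checkpoint name "camembert-base" and the use of transformers are assumptions about the released model, not details stated in this record.

    # Minimal sketch (assumption: the released checkpoint is published on the
    # Hugging Face hub under the name "camembert-base").
    from transformers import pipeline

    # Build a fill-mask pipeline around the French CamemBERT checkpoint.
    fill_mask = pipeline("fill-mask", model="camembert-base")

    # CamemBERT uses a RoBERTa-style tokenizer whose mask token is "<mask>".
    predictions = fill_mask("Le camembert est <mask> !")

    for p in predictions:
        # Each prediction carries the completed sentence and a probability score.
        print(f"{p['score']:.3f}  {p['sequence']}")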
4
A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages
In: ACL 2020 - 58th Annual Meeting of the Association for Computational Linguistics ; https://hal.inria.fr/hal-02863875 ; ACL 2020 - 58th Annual Meeting of the Association for Computational Linguistics, Jul 2020, Seattle / Virtual, United States. ⟨10.18653/v1/2020.acl-main.156⟩ ; https://acl2020.org (2020)
BASE
5
Building, Encoding, and Annotating a Corpus of Parliamentary Debates in XML-TEI: A Cross-Linguistic Account
In: https://halshs.archives-ouvertes.fr/halshs-03097333 ; 2020 (2020)
BASE
6
Modelling Etymology in LMF/TEI: The Grande Dicionário Houaiss da Língua Portuguesa Dictionary as a Use Case
Khan, Fahad; Romary, Laurent; Salgado, Ana. - : European Language Resources Association (ELRA), 2020
BASE
7
A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages ...
BASE

Hit distribution: Catalogues 0 · Bibliographies 0 · Linked Open Data catalogues 0 · Online resources 0 · Open access documents 7