
Search in the Catalogues and Directories

Hits 1 – 20 of 51

1
SOCIOFILLMORE: A Tool for Discovering Perspectives ...
BASE
2
IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation ...
Sarti, Gabriele; Nissim, Malvina. - : arXiv, 2022
BASE
3
Multilingual Pre-training with Language and Task Adaptation for Multilingual Text Style Transfer ...
BASE
4
DALC: the Dutch Abusive Language Corpus ...
BASE
5
Thank you BART! Rewarding Pre-Trained Models Improves Formality Style Transfer ...
BASE
6
Adapting Monolingual Models: Data can be Scarce when Language Similarity is High ...
BASE
7
Generic resources are what you need: Style transfer tasks without task-specific parallel training data ...
BASE
8
Adapting Monolingual Models: Data can be Scarce when Language Similarity is High ...
Abstract: For many (minority) languages, the resources needed to train large models are not available. We investigate the performance of zero-shot transfer learning with as little data as possible, and the influence of language similarity in this process. We retrain the lexical layers of four BERT-based models using data from two low-resource target language varieties, while the Transformer layers are independently fine-tuned on a POS-tagging task in the model's source language. By combining the new lexical layers and fine-tuned Transformer layers, we achieve high task performance for both target languages. With high language similarity, 10MB of data appears sufficient to achieve substantial monolingual transfer performance. Monolingual BERT-based models generally achieve higher downstream task performance after retraining the lexical layer than multilingual BERT, even when the target language is included in the multilingual model. (Read paper: https://www.aclanthology.org/2021.findings-acl.433; a minimal illustrative sketch of this lexical-layer swap follows the hit list below.)
Keyword: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Neural Network; Semantics
URL: https://dx.doi.org/10.48448/n3bm-jx74
https://underline.io/lecture/26524-adapting-monolingual-models-data-can-be-scarce-when-language-similarity-is-high
BASE
9
As Good as New. How to Successfully Recycle English GPT-2 to Make Models for Other Languages ...
BASE
10
Teaching NLP with Bracelets and Restaurant Menus: An Interactive Workshop for Italian Students ...
BASE
11
Teaching NLP with Bracelets and Restaurant Menus: An Interactive Workshop for Italian Students
Pannitto, Ludovica; Busso, Lucia; Combei, Claudia Roberta. - : Association for Computational Linguistics, 2021
BASE
12
What's so special about BERT's layers? A closer look at the NLP pipeline in monolingual and multilingual models ...
BASE
13
Personal-ITY: A Novel YouTube-based Corpus for Personality Prediction in Italian ...
BASE
14
Datasets and Models for Authorship Attribution on Italian Personal Writings ...
BASE
15
Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias ...
BASE
16
Matching Theory and Data with Personal-ITY: What a Corpus of Italian YouTube Comments Reveals About Personality ...
BASE
17
Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias ...
BASE
18
As Good as New. How to Successfully Recycle English GPT-2 to Make Models for Other Languages ...
de Vries, Wietse; Nissim, Malvina. - : arXiv, 2020
BASE
19
Fair Is Better than Sensational: Man Is to Doctor as Woman Is to Doctor
In: Computational Linguistics, Vol 46, Iss 2, Pp 487-497 (2020)
BASE
20
BERTje: A Dutch BERT Model ...
BASE
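The abstract of hit 8 describes combining a retrained lexical (embedding) layer from a target-language model with Transformer layers fine-tuned on a POS-tagging task in the source language. Below is a minimal, illustrative sketch of how such a lexical-layer swap could look with the Hugging Face transformers library; it is not the authors' released code, and the checkpoint names, label count, and example sentence are placeholder assumptions.

# Minimal, illustrative sketch of the lexical-layer swap described in hit 8's
# abstract. Checkpoint names and the label count are placeholders (assumptions),
# not the paper's actual models.
from transformers import AutoTokenizer, BertForTokenClassification

NUM_POS_TAGS = 17  # e.g. the Universal Dependencies UPOS tag set (assumption)

# 1) Source-language model whose Transformer layers have been fine-tuned on POS tagging.
pos_model = BertForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=NUM_POS_TAGS
)

# 2) Target-language model whose lexical (embedding) layer has been retrained
#    on a small amount of target-language data, plus its matching tokenizer.
target_tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
target_model = BertForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=NUM_POS_TAGS
)

# 3) Combine: keep the fine-tuned Transformer layers from pos_model, but plug in
#    the target-language word embeddings (the "lexical layer").
pos_model.bert.set_input_embeddings(target_model.bert.get_input_embeddings())
pos_model.config.vocab_size = target_model.config.vocab_size

# The combined model is then used with the target-language tokenizer.
inputs = target_tokenizer("Een korte voorbeeldzin.", return_tensors="pt")
logits = pos_model(**inputs).logits  # shape: (1, sequence_length, NUM_POS_TAGS)

In this sketch the task knowledge stays in the source-language Transformer layers, while the swapped-in word embeddings and their matching tokenizer handle the target-language lexicon, which is the combination step the abstract describes.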


Hits by source type: Catalogues 5 · Bibliographies 8 · Linked Open Data catalogues 0 · Online resources 0 · Open access documents 41