1. Multilingual Language Model Adaptive Fine-Tuning: A Study on African Languages ...
   Source: BASE
2. Preventing author profiling through zero-shot multilingual back-translation
   In: 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), Nov 2021, Punta Cana, Dominican Republic; https://hal.inria.fr/hal-03350906 (2021)
3. On the effect of normalization layers on differentially private training of deep neural networks
   In: https://hal.inria.fr/hal-03475600 (2021)
4. Adapting Language Models When Training on Privacy-Transformed Data
   In: INTERSPEECH 2021; https://hal.inria.fr/hal-03189354 (2021)
5. Do Acoustic Word Embeddings Capture Phonological Similarity? An Empirical Study ...
6. ANEA: Distant Supervision for Low-Resource Named Entity Recognition ...
8. Integrating Unsupervised Data Generation into Self-Supervised Neural Machine Translation for Low-Resource Languages ...
9. Preventing Author Profiling through Zero-Shot Multilingual Back-Translation ...
10. Modeling Profanity and Hate Speech in Social Media with Semantic Subspaces ...
11. Exploring the Potential of Lexical Paraphrases for Mitigating Noise-Induced Comprehension Errors ...
12. On the Correlation of Context-Aware Language Models With the Intelligibility of Polish Target Words to Czech Readers
    In: Frontiers in Psychology (2021)

    Abstract: This contribution seeks to provide a rational probabilistic explanation for the intelligibility of words in a genetically related language that is unknown to the reader, a phenomenon referred to as intercomprehension. In this research domain, linguistic distance, among other factors, has been shown to correlate well with the mutual intelligibility of individual words. However, the role of context in the intelligibility of target words in sentences has been the subject of very few studies. To address this, we analyze data from web-based experiments in which Czech (CS) respondents were asked to translate highly predictable target words in the final position of Polish sentences. We compare correlations of target-word intelligibility with data from 3-gram language models (LMs) to their correlations with data obtained from context-aware LMs. More specifically, we evaluate two context-aware LM architectures: Long Short-Term Memory networks (LSTMs), which can, in theory, take arbitrarily long-distance dependencies into account, and Transformer-based LMs, which can access the whole input sequence at once. We investigate how their use of context affects surprisal and its correlation with intelligibility.

    Keyword: Psychology
    URL: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8278517/
    DOI: https://doi.org/10.3389/fpsyg.2021.662277
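The abstract of entry 12 compares target-word surprisal from 3-gram LMs against surprisal from context-aware LMs. As a minimal illustration of what a 3-gram surprisal score is, here is a toy sketch; the corpus, the add-one smoothing, and the `surprisal` helper are illustrative assumptions, not the paper's actual setup or data:

```python
import math
from collections import Counter

# Toy corpus standing in for real training text (purely hypothetical).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Counts for an add-one-smoothed trigram (3-gram) language model.
trigrams = Counter(zip(corpus, corpus[1:], corpus[2:]))
bigrams = Counter(zip(corpus, corpus[1:]))
vocab = set(corpus)

def surprisal(w1, w2, target):
    """Surprisal in bits, -log2 P(target | w1, w2), under the
    add-one-smoothed trigram model built from the toy corpus."""
    p = (trigrams[(w1, w2, target)] + 1) / (bigrams[(w1, w2)] + len(vocab))
    return -math.log2(p)

# A continuation seen in the corpus gets lower surprisal (it is more
# predictable) than an unseen one, which is the quantity the study
# correlates with target-word intelligibility.
print(surprisal("the", "cat", "sat"))  # seen trigram -> lower surprisal
print(surprisal("the", "cat", "mat"))  # unseen trigram -> higher surprisal
```

A context-aware LM (LSTM or Transformer) would replace the trigram count table with a learned conditional distribution over the full sentence prefix, but the surprisal definition stays the same.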
13. Transfer learning and distant supervision for multilingual Transformer models: A study on African languages
    In: 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Nov 2020, Punta Cana, Dominican Republic; https://hal.inria.fr/hal-03350901 (2020)
14. Distant supervision and noisy label learning for low resource named entity recognition: A study on Hausa and Yorùbá
    In: ICLR Workshops (AfricaNLP & PML4DC 2020), Apr 2020, Addis Ababa, Ethiopia; https://hal.archives-ouvertes.fr/hal-03359111 (2020)
15. Transfer Learning and Distant Supervision for Multilingual Transformer Models: A Study on African Languages ...
16. Rediscovering the Slavic Continuum in Representations Emerging from Neural Models of Spoken Language Identification ...
17. On the Interplay Between Fine-tuning and Sentence-level Probing for Linguistic Knowledge in Pre-trained Transformers ...
18. A Closer Look at Linguistic Knowledge in Masked Language Models: The Case of Relative Clauses in American English ...