
Search in the Catalogues and Directories

Page: 1 2 3
Hits 1 – 20 of 59

1. Universal Dependencies and Semantics for English and Hebrew Child-directed Speech
In: Proceedings of the Society for Computation in Linguistics (2022). (BASE)

2. Do Infants Really Learn Phonetic Categories?
In: Open Mind, MIT Press, 2021, 5, pp. 113-131. ⟨10.1162/opmi_a_00046⟩. https://hal.archives-ouvertes.fr/hal-03550830 (BASE)

3. Early phonetic learning without phonetic categories: Insights from large-scale simulations on realistic input
In: Proceedings of the National Academy of Sciences of the United States of America, National Academy of Sciences, 2021, 118 (7), e2001844118. ⟨10.1073/pnas.2001844118⟩. https://hal.archives-ouvertes.fr/hal-03070566 (BASE)

4. Black or White but never neutral: How readers perceive identity from yellow or skin-toned emoji ... (BASE)

5. A phonetic model of non-native spoken word processing ... (BASE)

6. Cross-linguistically Consistent Semantic and Syntactic Annotation of Child-directed Speech ... (BASE)

7. Do Infants Really Learn Phonetic Categories?
In: Open Mind (Camb) (2021). (BASE)

8. Early phonetic learning without phonetic categories: Insights from large-scale simulations on realistic input
In: Proc Natl Acad Sci U S A (2021). (BASE)

9. Multilingual acoustic word embedding models for processing zero-resource languages ... (BASE)

10. Improved acoustic word embeddings for zero-resource languages using multilingual transfer ... (BASE)

11. Analyzing autoencoder-based acoustic word embeddings ... (BASE)

12. Inflecting when there's no majority: Limitations of encoder-decoder neural networks as cognitive models for German plurals ... (BASE)

13. Evaluating computational models of infant phonetic learning across languages ... (BASE)

14. Multilingual and Unsupervised Subword Modeling for Zero-Resource Languages
In: http://infoscience.epfl.ch/record/277105 (2020). (BASE)

15. On understanding character-level models for representing morphology
Vania, Clara. The University of Edinburgh, 2020.
Abstract: Morphology is the study of how words are composed of smaller units of meaning (morphemes). It allows humans to create, memorize, and understand words in their language. To process and understand human languages, we expect our computational models to also learn morphology. Recent advances in neural network models provide us with models that compose word representations from smaller units like word segments, character n-grams, or characters. These so-called subword unit models do not explicitly model morphology, yet they achieve impressive performance across many multilingual NLP tasks, especially on languages with complex morphological processes. This thesis aims to shed light on the following questions: (1) What do subword unit models learn about morphology? (2) Do we still need prior knowledge about morphology? (3) How do subword unit models interact with morphological typology? First, we systematically compare various subword unit models and study their performance across language typologies. We show that models based on characters are particularly effective because they learn orthographic regularities which are consistent with morphology. To understand which aspects of morphology are not captured by these models, we compare them with an oracle with access to explicit morphological analysis. We show that in the case of dependency parsing, character-level models are still poor at representing words with ambiguous analyses. We then demonstrate how explicit modeling of morphology is helpful in such cases. Finally, we study how character-level models perform in low-resource, cross-lingual NLP scenarios, and whether they can facilitate cross-linguistic transfer of morphology across related languages. While we show that cross-lingual character-level models can improve low-resource NLP performance, our analysis suggests that this is mostly due to structural similarities between languages; we do not yet find any strong evidence of cross-linguistic transfer of morphology. This thesis presents a careful, in-depth study and analysis of character-level models and their relation to morphology, providing insights and future research directions for building morphologically aware computational NLP models.
Keywords: character-level models; dependency parsing; morphemes; morphology; natural language processing; NLP
URL: https://doi.org/10.7488/era/49
https://hdl.handle.net/1842/36742
(BASE)
16. Methods for morphology learning in low(er)-resource scenarios
Bergmanis, Toms. The University of Edinburgh, 2020. (BASE)

17. Discovering and analysing lexical variation in social media text
Shoemark, Philippa Jane. The University of Edinburgh, 2020. (BASE)

18. Are we there yet? Encoder-decoder neural networks as cognitive models of English past tense inflection ... (BASE)

19. Analyzing ASR pretraining for low-resource speech-to-text translation ... (BASE)

20. Low-resource speech translation
Bansal, Sameer. The University of Edinburgh, 2019. (BASE)


Catalogues: 5 | Bibliographies: 5 | Linked Open Data catalogues: 0 | Online resources: 0 | Open access documents: 52
© 2013 - 2024 Lin|gu|is|tik