
Search in the Catalogues and Directories

Hits 1 – 20 of 39

1. Probing for the Usage of Grammatical Number ... (BASE)
2. On Homophony and Rényi Entropy ... (BASE)
3. On Homophony and Rényi Entropy ... (BASE)
4. Revisiting the Uniform Information Density Hypothesis ... (BASE)
5. Revisiting the Uniform Information Density Hypothesis ... (BASE)
6. Modeling the Unigram Distribution ... (BASE)
7. A Bayesian Framework for Information-Theoretic Probing ... (BASE)
8. A surprisal–duration trade-off across and within the world's languages ... (BASE)
9. Revisiting the Uniform Information Density Hypothesis ... (BASE)
10. What About the Precedent: An Information-Theoretic Analysis of Common Law ... (BASE)
11. Modeling the Unigram Distribution ... (BASE)
12. Finding Concept-specific Biases in Form–Meaning Associations ... (BASE)
13. Modeling the Unigram Distribution. In: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (2021). (BASE)
14. What About the Precedent: An Information-Theoretic Analysis of Common Law. In: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2021). (BASE)
15. Finding Concept-specific Biases in Form–Meaning Associations. In: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2021). (BASE)
16. A Non-Linear Structural Probe. In: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2021). (BASE)
17. Disambiguatory Signals are Stronger in Word-initial Positions. In: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume (2021). (BASE)
   Abstract: Psycholinguistic studies of human word processing and lexical access provide ample evidence of the preferred nature of word-initial versus word-final segments, e.g., in terms of attention paid by listeners (greater) or the likelihood of reduction by speakers (lower). This has led to the conjecture—as in Wedel et al. (2019b), but common elsewhere—that languages have evolved to provide more information earlier in words than later. Information-theoretic methods to establish such tendencies in lexicons have suffered from several methodological shortcomings that leave open the question of whether this high word-initial informativeness is actually a property of the lexicon or simply an artefact of the incremental nature of recognition. In this paper, we point out the confounds in existing methods for comparing the informativeness of segments early in the word versus later in the word, and present several new measures that avoid these confounds. When controlling for these confounds, we still find evidence across hundreds of languages that indeed there is a cross-linguistic tendency to front-load information in words.
   URL: https://hdl.handle.net/20.500.11850/518997
   https://doi.org/10.3929/ethz-b-000518997
18. How (Non-)Optimal is the Lexicon? In: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2021). (BASE)
19. A Bayesian Framework for Information-Theoretic Probing. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021). (BASE)
20. How (Non-)Optimal is the Lexicon? ... NAACL 2021; Blasi, Damián; Cotterell, Ryan. Underline Science Inc., 2021. (BASE)


Facets: Open access documents: 39. All other facets (Catalogues, Bibliographies, Linked Open Data catalogues, Online resources): 0.
© 2013 – 2024 Lin|gu|is|tik