
Search in the Catalogues and Directories

Page 1 of 2
Hits 1 – 20 of 22

1  Estimating the Entropy of Linguistic Distributions ... (BASE)
2  On Homophony and Rényi Entropy ... (BASE)
3  On Homophony and Rényi Entropy ... (BASE)
Abstract: Homophony's widespread presence in natural languages is a controversial topic. Recent theories of language optimality have tried to justify its prevalence, despite its negative effects on cognitive processing time; e.g., Piantadosi et al. (2012) argued that homophony enables the reuse of efficient wordforms and is thus beneficial for languages. This hypothesis has recently been challenged by Trott and Bergen (2020), who posit that good wordforms are more often homophonous simply because they are more phonotactically probable. In this paper, we join the debate. We first propose a new information-theoretic quantification of a language's homophony: the sample Rényi entropy. Then, we use this quantification to revisit Trott and Bergen's claims. While their point is theoretically sound, a specific methodological issue in their experiments raises doubts about their results. After addressing this issue, we find no clear pressure either towards or against homophony -- a much more nuanced result than either ...
Note: Accepted for publication in EMNLP 2021. Code available at https://github.com/rycolab/homophony-as-renyi-entropy (an illustrative sketch of the sample Rényi entropy appears after the hit list below).
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2109.13766
DOI: https://dx.doi.org/10.48550/arxiv.2109.13766
4  On Homophony and Rényi Entropy ... (BASE)
5  Searching for Search Errors in Neural Morphological Inflection ... (BASE)
6  Revisiting the Uniform Information Density Hypothesis ... (BASE)
7  Revisiting the Uniform Information Density Hypothesis ... (BASE)
8  Conditional Poisson Stochastic Beams ... (BASE)
9  Language Model Evaluation Beyond Perplexity ... (BASE)
10  A surprisal--duration trade-off across and within the world's languages ... (BASE)
11  Determinantal Beam Search ... (BASE)
12  Is Sparse Attention more Interpretable? ... (BASE)
13  Revisiting the Uniform Information Density Hypothesis ... (BASE)
14  A Plug-and-Play Method for Controlled Text Generation ... (BASE)
15  Language Model Evaluation Beyond Perplexity ... (BASE)
    Meister, Clara Isabel; Cotterell, Ryan. ETH Zurich, 2021
16  Determinantal Beam Search ... (BASE)
17  Is Sparse Attention more Interpretable? ... (BASE)
18  A Cognitive Regularizer for Language Modeling ... (BASE)
19  A Cognitive Regularizer for Language Modeling ... (BASE)
20  A Cognitive Regularizer for Language Modeling ... (BASE)
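The sample Rényi entropy named in the abstract of hit 3 is, in essence, a plug-in estimate of the order-alpha Rényi entropy of the distribution over wordforms. The sketch below is not the authors' implementation (their code is in the repository linked in the record above); it is a minimal Python illustration of the general quantity, and the function name, the toy lexicon, and the choice alpha = 2 are assumptions made for the example.

```python
import math
from collections import Counter

def sample_renyi_entropy(wordforms, alpha=2.0):
    """Plug-in estimate of the Renyi entropy of order `alpha` over wordforms.

    H_alpha(p) = log(sum_i p_i ** alpha) / (1 - alpha)

    A lexicon with many homophones concentrates probability mass on fewer
    distinct forms, which lowers this value.
    """
    if alpha == 1.0:
        raise ValueError("alpha = 1 is the Shannon limit; compute it separately")
    counts = Counter(wordforms)          # multiplicity of each distinct form
    n = sum(counts.values())
    power_sum = sum((c / n) ** alpha for c in counts.values())
    return math.log(power_sum) / (1.0 - alpha)

# Hypothetical toy lexicon: one entry per word sense;
# "bank" and "bat" each carry two senses, i.e. they are homophonous.
lexicon = ["bank", "bank", "bat", "bat", "dog", "tree", "river", "lamp"]
print(sample_renyi_entropy(lexicon, alpha=2.0))  # about 1.67 nats
```

For orders alpha > 1, the Rényi entropy weights the most probable (most heavily reused) wordforms more strongly than Shannon entropy does, which is what makes it sensitive to homophony; the paper itself should be consulted for the exact estimator and order used.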


Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 22