
Search in the Catalogues and Directories

Hits 1–20 of 22 (page 1 of 2)

1. Estimating the Entropy of Linguistic Distributions ... (BASE)
2. On Homophony and Rényi Entropy ... (BASE)
3. On Homophony and Rényi Entropy ... (BASE)
4. On Homophony and Rényi Entropy ... (BASE)
5. Searching for Search Errors in Neural Morphological Inflection ... (BASE)
6. Revisiting the Uniform Information Density Hypothesis ... (BASE)
7. Revisiting the Uniform Information Density Hypothesis ... (BASE)
8. Conditional Poisson Stochastic Beams ... (BASE)
9. Language Model Evaluation Beyond Perplexity ... (BASE)
10. A surprisal–duration trade-off across and within the world's languages ... (BASE)
11. Determinantal Beam Search ... (BASE)
12. Is Sparse Attention more Interpretable? ... (BASE)
13. Revisiting the Uniform Information Density Hypothesis ... (BASE)
14. A Plug-and-Play Method for Controlled Text Generation ... (BASE)
15. Language Model Evaluation Beyond Perplexity ... Meister, Clara Isabel; Cotterell, Ryan. ETH Zurich, 2021. (BASE)
16. Determinantal Beam Search ... (BASE)
17. Is Sparse Attention more Interpretable? ... (BASE)
Abstract: Sparse attention has been claimed to increase model interpretability under the assumption that it highlights influential inputs. Yet the attention distribution is typically over representations internal to the model rather than the inputs themselves, suggesting this assumption may not have merit. We build on recent work exploring the interpretability of attention and design a set of experiments to help us understand how sparsity affects our ability to use attention as an explainability tool. On three text classification tasks, we verify that only a weak relationship between inputs and co-indexed intermediate representations exists, under sparse attention and otherwise. Further, we do not find any plausible mappings from sparse attention distributions to a sparse set of influential inputs through other avenues. Rather, we observe in this setting that inducing sparsity may make it less plausible that attention can be used as a tool for understanding model behavior. (For an illustration of what "sparse attention" means here, see the sketch after the results list.)
Published in: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
URL: https://dx.doi.org/10.3929/ethz-b-000507680
URL: http://hdl.handle.net/20.500.11850/507680
18. A Cognitive Regularizer for Language Modeling ... (BASE)
19. A Cognitive Regularizer for Language Modeling ... (BASE)
20. A Cognitive Regularizer for Language Modeling ... (BASE)
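
The paper in hits 12 and 17 studies sparsity-inducing attention transformations. As a minimal illustration of what distinguishes sparse attention from ordinary softmax attention, the following NumPy sketch implements the sparsemax transformation (Martins and Astudillo, 2016); this is our own illustrative code, not code from the paper:

    import numpy as np

    def softmax(z):
        # Dense attention weights: every position receives nonzero mass.
        e = np.exp(z - z.max())
        return e / e.sum()

    def sparsemax(z):
        # Sparsemax (Martins & Astudillo, 2016): Euclidean projection of
        # the score vector onto the probability simplex. Scores below a
        # data-dependent threshold tau are mapped to exactly zero.
        z_sorted = np.sort(z)[::-1]          # scores in descending order
        k = np.arange(1, len(z) + 1)
        cumsum = np.cumsum(z_sorted)
        support = 1 + k * z_sorted > cumsum  # which positions stay nonzero
        k_max = k[support][-1]               # support size
        tau = (cumsum[support][-1] - 1.0) / k_max
        return np.maximum(z - tau, 0.0)

    scores = np.array([2.0, 1.0, 0.1])
    print(softmax(scores))    # approx. [0.66 0.24 0.10] -- no exact zeros
    print(sparsemax(scores))  # [1. 0. 0.]               -- exact zeros

For the scores [2.0, 1.0, 0.1], softmax assigns every position a nonzero weight, while sparsemax returns [1, 0, 0]; these exact zeros are what makes an attention distribution "sparse" in the sense the abstract above discusses.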


Catalogues: 0 | Bibliographies: 0 | Linked Open Data catalogues: 0 | Online resources: 0 | Open access documents: 22