
Search in the Catalogues and Directories

Page: 1 2 3 4 5 6...9
Hits 21 – 40 of 163

21. A Bayesian Framework for Information-Theoretic Probing ...
22. Classifying Dyads for Militarized Conflict Analysis ...
23. Higher-order Derivatives of Weighted Finite-state Machines ...
24. On Finding the K-best Non-projective Dependency Trees ...
25. A surprisal–duration trade-off across and within the world's languages ...
26. Determinantal Beam Search ...
27. Is Sparse Attention more Interpretable? ...
    Abstract: Sparse attention has been claimed to increase model interpretability under the assumption that it highlights influential inputs. Yet the attention distribution is typically over representations internal to the model rather than the inputs themselves, suggesting this assumption may not have merit. We build on the recent work exploring the interpretability of attention; we design a set of experiments to help us understand how sparsity affects our ability to use attention as an explainability tool. On three text classification tasks, we verify that only a weak relationship between inputs and co-indexed intermediate representations exists—under sparse attention and otherwise. Further, we do not find any plausible mappings from sparse attention distributions to a sparse set of influential inputs through other avenues. Rather, we observe in this setting that inducing sparsity may make it less plausible that attention can be used as a tool for ...
    Keywords: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
    Paper: https://www.aclanthology.org/2021.acl-short.17
    URL: https://underline.io/lecture/25435-is-sparse-attention-more-interpretablequestion
    DOI: https://dx.doi.org/10.48448/90jh-y922
28. Revisiting the Uniform Information Density Hypothesis ...
29. A Plug-and-Play Method for Controlled Text Generation ...
30. Language Model Evaluation Beyond Perplexity ...
    Meister, Clara Isabel; Cotterell, Ryan. ETH Zurich, 2021
31. What About the Precedent: An Information-Theoretic Analysis of Common Law ...
32. Searching for More Efficient Dynamic Programs ...
    Vieira, Tim; Cotterell, Ryan; Eisner, Jason. ETH Zurich, 2021
33. Modeling the Unigram Distribution ...
34. Determinantal Beam Search ...
35. Examining the Inductive Bias of Neural Language Models with Artificial Languages ...
    White, Jennifer C.; Cotterell, Ryan. ETH Zurich, 2021
36. Finding Concept-specific Biases in Form–Meaning Associations ...
37. Differentiable subset pruning of transformer heads ...
    Li, Jiaoda; Cotterell, Ryan; Sachan, Mrinmaya. ETH Zurich, 2021
38. On Finding the K-best Non-projective Dependency Trees ...
    Zmigrod, Ran; Vieira, Tim; Cotterell, Ryan. ETH Zurich, 2021
39. Efficient computation of expectations under spanning tree distributions ...
    Zmigrod, Ran; Vieira, Tim; Cotterell, Ryan. ETH Zurich, 2021
40. Multimodal pretraining unmasked: A meta-analysis and a unified framework of vision-and-language berts ...


Catalogues: 1
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 162
© 2013 – 2024 Lin|gu|is|tik | Imprint | Privacy Policy | Change privacy settings