
Search in the Catalogues and Directories

Hits 1 – 20 of 28

1
On Homophony and Rényi Entropy ...
2
On Homophony and Rényi Entropy ...
3
On Homophony and Rényi Entropy ...
4
Finding Concept-specific Biases in Form–Meaning Associations ...
5
Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models ...
6
Revisiting the Uniform Information Density Hypothesis ...
7
Revisiting the Uniform Information Density Hypothesis ...
8
Modeling the Unigram Distribution ...
9
A Bayesian Framework for Information-Theoretic Probing ...
10
A surprisal–duration trade-off across and within the world's languages ...
11
Revisiting the Uniform Information Density Hypothesis ...
12
What About the Precedent: An Information-Theoretic Analysis of Common Law ...
13
Modeling the Unigram Distribution ...
14
Finding Concept-specific Biases in Form–Meaning Associations ...
15
How (Non-)Optimal is the Lexicon? ...
16
Disambiguatory Signals are Stronger in Word-initial Positions ...
17
Modeling the Unigram Distribution
In: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (2021)
18
What About the Precedent: An Information-Theoretic Analysis of Common Law
In: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2021)
19
Finding Concept-specific Biases in Form–Meaning Associations
In: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2021)
20
A Non-Linear Structural Probe
In: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2021)
Abstract: Probes are models devised to investigate the encoding of knowledge—e.g. syntactic structure—in contextual representations. Probes are often designed for simplicity, which has led to restrictions on probe design that may not allow for the full exploitation of the structure of encoded information; one such restriction is linearity. We examine the case of a structural probe (Hewitt and Manning, 2019), which aims to investigate the encoding of syntactic structure in contextual representations through learning only linear transformations. By observing that the structural probe learns a metric, we are able to kernelize it and develop a novel non-linear variant with an identical number of parameters. We test on 6 languages and find that the radial-basis function (RBF) kernel, in conjunction with regularization, achieves a statistically significant improvement over the baseline in all languages—implying that at least part of the syntactic knowledge is encoded non-linearly. We conclude by discussing how the RBF kernel resembles BERT’s self-attention layers and speculate that this resemblance leads to the RBF-based probe’s stronger performance.
URL: https://hdl.handle.net/20.500.11850/518983
https://doi.org/10.3929/ethz-b-000518983
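The kernelization step described in this abstract is compact enough to sketch. What follows is a minimal illustrative reconstruction, not the authors' released code: the linear structural probe of Hewitt and Manning (2019) predicts the tree distance between two words as a squared distance under a learned matrix B; writing that squared distance in terms of inner products and replacing each inner product with an RBF kernel evaluation yields a non-linear probe with the same parameter matrix. The function names, the bandwidth parameter gamma, and the choice to apply the kernel to the linearly transformed vectors are assumptions for illustration.

    import numpy as np

    # Linear structural probe (Hewitt and Manning, 2019): predict the tree
    # distance between words i and j as ||B h_i - B h_j||^2, where B is the
    # learned probe matrix and h_i, h_j are contextual representations.
    def linear_probe_distance(B, h_i, h_j):
        diff = B @ (h_i - h_j)
        return float(diff @ diff)

    # RBF kernel on the linearly transformed representations (illustrative
    # helper; gamma is the kernel bandwidth, an extra hyperparameter).
    def rbf_kernel(B, x, y, gamma=1.0):
        d = B @ (x - y)
        return float(np.exp(-gamma * (d @ d)))

    # Kernelized probe: expand the squared distance as
    # <Bh_i, Bh_i> - 2<Bh_i, Bh_j> + <Bh_j, Bh_j> and replace each inner
    # product with a kernel evaluation. With the RBF kernel the predicted
    # distance becomes non-linear in h while B keeps the same shape.
    def kernel_probe_distance(B, h_i, h_j, gamma=1.0):
        return (rbf_kernel(B, h_i, h_i, gamma)
                - 2.0 * rbf_kernel(B, h_i, h_j, gamma)
                + rbf_kernel(B, h_j, h_j, gamma))

Since k(x, x) = 1 for the RBF kernel, the kernelized distance reduces to 2(1 - k(h_i, h_j)): it saturates for far-apart vectors instead of growing without bound, which is one way the non-linear probe's geometry differs from the linear one's.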


Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 28