
Search in the Catalogues and Directories

Hits 1 – 20 of 50

1. Probing for the Usage of Grammatical Number ... (BASE)
2. Estimating the Entropy of Linguistic Distributions ... (BASE)
3. A Latent-Variable Model for Intrinsic Probing ... (BASE)
4. On Homophony and Rényi Entropy ... (BASE)
5. Towards Zero-shot Language Modeling ... (BASE)
6. Differentiable Generative Phonology ... (BASE)
7. Finding Concept-specific Biases in Form–Meaning Associations ... (BASE)
8. Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models ... (BASE)
9. Probing as Quantifying Inductive Bias ... (BASE)
10. Revisiting the Uniform Information Density Hypothesis ... (BASE)
11. How (Non-)Optimal is the Lexicon? ... (BASE)
12. Disambiguatory Signals are Stronger in Word-initial Positions ... (BASE)
13. A Cognitive Regularizer for Language Modeling ... (BASE)
14. Do Syntactic Probes Probe Syntax? Experiments with Jabberwocky Probing ... (BASE)
15. On the Relationships Between the Grammatical Genders of Inanimate Nouns and Their Co-Occurring Adjectives and Verbs ... (BASE)
16. Investigating Cross-Linguistic Adjective Ordering Tendencies with a Latent-Variable Model ... (BASE)
17. SIGMORPHON 2020 Shared Task 0: Typologically Diverse Morphological Inflection ... (BASE)
18. Intrinsic Probing through Dimension Selection ... (BASE)
19. SIGTYP 2020 Shared Task: Prediction of Typological Features ... (BASE)
20. Information-Theoretic Probing for Linguistic Structure ... (BASE)
Abstract: The success of neural networks on a diverse set of NLP tasks has led researchers to question how much these networks actually "know" about natural language. Probes are a natural way of assessing this. When probing, a researcher chooses a linguistic task and trains a supervised model to predict that task's annotations from the network's learned representations. If the probe does well, the researcher may conclude that the representations encode knowledge related to the task. A commonly held belief is that using simpler models as probes is better; the logic is that simpler models will identify linguistic structure, but not learn the task itself. We propose an information-theoretic operationalization of probing as estimating mutual information that contradicts this received wisdom: one should always select the highest-performing probe one can, even if it is more complex, since it will result in a tighter estimate, and thus reveal more of the linguistic information inherent in the representation.
Note: Accepted for publication at ACL 2020 (camera-ready version). Code available at https://github.com/rycolab/info-theoretic-probing
Keywords: Computation and Language (cs.CL); Machine Learning (cs.LG); FOS: Computer and information sciences
URL: https://dx.doi.org/10.48550/arxiv.2004.03061
https://arxiv.org/abs/2004.03061
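The abstract's claim that a higher-performing probe yields a "tighter estimate" rests on a standard variational bound; a minimal sketch follows, with the notation (R for the representations, T for the target annotations, q for the probe) assumed here rather than taken from the listing:

    I(T; R) = H(T) - H(T | R)
    H(T | R) <= H_q(T | R)    (cross-entropy of any probe q)

The plug-in estimate H(T) - H_q(T | R) therefore lower-bounds the true mutual information I(T; R), and the lower the probe's cross-entropy, the tighter that lower bound becomes; this is why the authors argue for the best probe one can train rather than the simplest.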


Facets: Open access documents: 50; all other source categories (catalogues, bibliographies, Linked Open Data catalogues, online resources): 0.