1. Probing Classifiers: Promises, Shortcomings, and Advances
2. On the Pitfalls of Analyzing Individual Neurons in Language Models
3. Debiasing Methods in Natural Language Understanding Make Bias More Accessible
4. Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models
5. Similarity Analysis of Contextual Word Representation Models
6. Probing the Probing Paradigm: Does Probing Accuracy Entail Task Relevance?
Abstract: Although neural models have achieved impressive results on several NLP benchmarks, little is understood about the mechanisms they use to perform language tasks. Thus, much recent attention has been devoted to analyzing the sentence representations learned by neural encoders, through the lens of 'probing' tasks. However, to what extent was the information encoded in sentence representations, as discovered through a probe, actually used by the model to perform its task? In this work, we examine this probing paradigm through a case study in Natural Language Inference, showing that models can learn to encode linguistic properties even if they are not needed for the task on which the model was trained. We further identify that pretrained word embeddings play a considerable role in encoding these properties rather than the training task itself, highlighting the importance of careful controls when designing probing experiments. Finally, through a set of controlled synthetic tasks, we demonstrate models can encode ...
In: EACL 2021
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://dx.doi.org/10.48550/arxiv.2005.00719 https://arxiv.org/abs/2005.00719
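For readers skimming this record, a minimal sketch of the probing setup the abstract critiques may help: train a simple classifier (the "probe") to predict a linguistic property from frozen sentence representations. The embedding source, array shapes, and property labels below are illustrative stand-ins, not the paper's actual data; the sketch uses scikit-learn's LogisticRegression as the probe.

```python
# Minimal probing-experiment sketch. Assumes frozen sentence embeddings
# from some encoder; here random vectors stand in for real embeddings,
# and binary labels stand in for a linguistic property (e.g., tense).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 768))   # stand-in encoder outputs
y = rng.integers(0, 2, size=1000)  # stand-in property labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# The probe: the encoder stays frozen; only this classifier is trained.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.3f}")
```

With these random stand-in embeddings the probe hovers near chance, which is exactly the kind of control baseline the abstract argues for: high probe accuracy only shows a property is decodable from the representation, not that the model relies on it to perform its task.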
7. The Sensitivity of Language Models and Humans to Winograd Schema Perturbations
8. Analyzing Individual Neurons in Pre-trained Language Models
9. On the Linguistic Representational Power of Neural Machine Translation Models
In: Computational Linguistics, Vol. 46, Iss. 1, pp. 1-52 (2020)
11. Exploring Compositional Architectures and Word Vector Representations for Prepositional Phrase Attachment
In: MIT Press (2019)
13. On Adversarial Removal of Hypothesis-only Bias in Natural Language Inference
14. Improving Neural Language Models by Segmenting, Attending, and Predicting the Future
17. Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference
19. On Evaluating the Generalization of LSTM Models in Formal Languages
In: Proceedings of the Society for Computation in Linguistics (2019)
20. Analysis Methods in Neural Language Processing: A Survey
In: Transactions of the Association for Computational Linguistics, Vol. 7, pp. 49-72 (2019)