1 | Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality ...

2 | ANLIzing the Adversarial Natural Language Inference Dataset
    In: Proceedings of the Society for Computation in Linguistics (2022)

3 | Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection ...

4 | FLAVA: A Foundational Language And Vision Alignment Model ...

5 | I like fish, especially dolphins: Addressing Contradictions in Dialogue Modeling ...

6 | Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation ...

8 | Gradient-based Adversarial Attacks against Text Transformers ...

10 | On the Efficacy of Adversarial Data Collection for Question Answering: Results from a Large-Scale Randomized Study ...
    Abstract: In adversarial data collection (ADC), a human workforce interacts with a model in real time, attempting to produce examples that elicit incorrect predictions. Researchers hope that models trained on these more challenging datasets will rely less on superficial patterns, and thus be less brittle. However, despite ADC's intuitive appeal, it remains unclear when training on adversarial datasets produces more robust models. In this paper, we conduct a large-scale controlled study focused on question answering, assigning workers at random to compose questions either (i) adversarially (with a model in the loop) or (ii) in the standard fashion (without a model). Across a variety of models and datasets, we find that models trained on adversarial data usually perform better on other adversarial datasets but worse on a diverse collection of out-of-domain evaluation sets. Finally, we provide a qualitative analysis of adversarial (vs standard) data, ...
    Keywords: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
    Paper: https://www.aclanthology.org/2021.acl-long.517
    URL: https://underline.io/lecture/25741-on-the-efficacy-of-adversarial-data-collection-for-question-answering-results-from-a-large-scale-randomized-study
    DOI: https://dx.doi.org/10.48448/9j1t-4330

11 | Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little ...

12 | Deep Artificial Neural Networks Reveal a Distributed Cortical Network Encoding Propositional Sentence-Level Meaning
    In: J Neurosci (2021)

13 | Emergent Linguistic Phenomena in Multi-Agent Communication Games ...

14 | Inferring concept hierarchies from text corpora via hyperbolic embeddings ...

15 | Inferring concept hierarchies from text corpora via hyperbolic embeddings
    In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019) (2019)

18 | Visually Grounded and Textual Semantic Models Differentially Decode Brain Activity Associated with Concrete and Abstract Nouns ...

19 | Virtual Embodiment: A Scalable Long-Term Strategy for Artificial Intelligence Research ...

20 | HyperLex: A Large-Scale Evaluation of Graded Lexical Entailment ...