
Search in the Catalogues and Directories

Hits 1 – 20 of 22

1. Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality ...
2. ANLIzing the Adversarial Natural Language Inference Dataset
   In: Proceedings of the Society for Computation in Linguistics (2022)
3. Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection ...
4. FLAVA: A Foundational Language And Vision Alignment Model ...
5. I like fish, especially dolphins: Addressing Contradictions in Dialogue Modeling ...
6. Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation ...
7. Reservoir Transformers ...
8. Gradient-based Adversarial Attacks against Text Transformers ...
9. DynaSent: A Dynamic Benchmark for Sentiment Analysis ...
10. On the Efficacy of Adversarial Data Collection for Question Answering: Results from a Large-Scale Randomized Study ...
11. Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little
    Anthology: https://aclanthology.org/2021.emnlp-main.230/
    Abstract: A possible explanation for the impressive performance of masked language model (MLM) pre-training is that such models have learned to represent the syntactic structures prevalent in classical NLP pipelines. In this paper, we propose a different explanation: MLMs succeed on downstream tasks almost entirely due to their ability to model higher-order word co-occurrence statistics. To demonstrate this, we pre-train MLMs on sentences with randomly shuffled word order, and show that these models still achieve high accuracy after fine-tuning on many downstream tasks -- including on tasks specifically designed to be challenging for models that ignore word order. Our models perform surprisingly well according to some parametric syntactic probes, indicating possible deficiencies in how we test representations for syntactic information. Overall, our results show that purely distributional information largely explains the success of ...
    Keywords: Computational Linguistics; Language Models; Machine Learning; Machine Learning and Data Mining; Natural Language Processing
    URL: https://dx.doi.org/10.48448/3r0a-fw32
    https://underline.io/lecture/37423-masked-language-modeling-and-the-distributional-hypothesis-order-word-matters-pre-training-for-little
12. Deep Artificial Neural Networks Reveal a Distributed Cortical Network Encoding Propositional Sentence-Level Meaning
    In: J Neurosci (2021)
13. Emergent Linguistic Phenomena in Multi-Agent Communication Games ...
14. Inferring concept hierarchies from text corpora via hyperbolic embeddings ...
15. Inferring concept hierarchies from text corpora via hyperbolic embeddings
    In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019) (2019)
16. Countering Language Drift via Visual Grounding ...
17. Emergent Translation in Multi-Agent Communication ...
18. Visually Grounded and Textual Semantic Models Differentially Decode Brain Activity Associated with Concrete and Abstract Nouns ...
    Anderson, AJ; Kiela, Douwe; Clark, Stephen. Apollo - University of Cambridge Repository, 2017
19. Virtual Embodiment: A Scalable Long-Term Strategy for Artificial Intelligence Research ...
20. HyperLex: A Large-Scale Evaluation of Graded Lexical Entailment ...


Hits by source type:
Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 22
© 2013 - 2024 Lin|gu|is|tik