
Search in the Catalogues and Directories

Hits 81–100 of 1,423

81. Are VQA Systems RAD? Measuring Robustness to Augmented Data with Focused Interventions (BASE)
82. Evidence-based Factual Error Correction (BASE)
83. SemEval-2021 Task 6: Detection of Persuasion Techniques in Texts and Images (BASE)
84. A DQN-based Approach to Finding Precise Evidences for Fact Verification (BASE)
85. Embedding Time Differences in Context-sensitive Neural Networks for Learning Time to Event (BASE)
86. A Span-based Dynamic Local Attention Model for Sequential Sentence Classification (BASE)
87. Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection (BASE)
88. 2A: Sentiment Analysis, Stylistic Analysis, and Argument Mining #1 (BASE)
89. On Sample Based Explanation Methods for NLP: Faithfulness, Efficiency and Semantic Evaluation (BASE)
90. How effective is BERT without word ordering? Implications for language understanding and data privacy (BASE)
91. GEM: Natural Language Generation, Evaluation, and Metrics - Part 4 (BASE)
92. The statistical advantage of automatic NLG metrics at the system level (BASE)
93. Counter-Argument Generation by Attacking Weak Premises (BASE)
94. Supporting Cognitive and Emotional Empathic Writing of Students (BASE)
95. What's in the Box? An Analysis of Undesirable Content in the Common Crawl Corpus (BASE)
96. Are Pretrained Convolutions Better than Pretrained Transformers? (BASE)
Read paper: https://www.aclanthology.org/2021.acl-long.335
Abstract: In the era of pre-trained language models, Transformers are the de facto choice of model architectures. While recent research has shown promise in entirely convolutional, or CNN, architectures, they have not been explored using the pre-train-fine-tune paradigm. In the context of language models, are convolutional models competitive to Transformers when pre-trained? This paper investigates this research question and presents several interesting findings. Across an extensive set of experiments on 8 datasets/tasks, we find that CNN-based pre-trained models are competitive and outperform their Transformer counterpart in certain scenarios, albeit with caveats. Overall, the findings outlined in this paper suggest that conflating pre-training and architectural advances is misguided and that both advances should be considered independently. We believe our research paves the way for a healthy amount of optimism in alternative architectures.
Keywords: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
URL: https://underline.io/lecture/26041-are-pretrained-convolutions-better-than-pretrained-transformersquestion
DOI: https://dx.doi.org/10.48448/y2n6-j616
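The abstract above contrasts convolutional and Transformer encoders under the pre-train-fine-tune paradigm. As a rough illustration of what a "CNN-based" sequence encoder means in this context, the following is a minimal PyTorch sketch of a depthwise-convolution encoder block in which the convolution takes over the token-mixing role that self-attention plays in a Transformer block. It is not the paper's implementation; the layer sizes, kernel width, and residual layout are illustrative assumptions.

# Illustrative sketch only: a lightweight depthwise-convolution encoder block of the
# general kind the abstract contrasts with Transformer self-attention. Hyperparameters
# and layout are assumptions, not taken from the paper.
import torch
import torch.nn as nn

class ConvEncoderBlock(nn.Module):
    def __init__(self, d_model: int = 512, kernel_size: int = 7, ffn_mult: int = 4):
        super().__init__()
        # Depthwise convolution mixes information along the sequence dimension,
        # standing in for the self-attention sublayer of a Transformer block.
        self.conv = nn.Conv1d(
            d_model, d_model, kernel_size,
            padding=kernel_size // 2, groups=d_model,
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        # Position-wise feed-forward network, as in a Transformer block.
        self.ffn = nn.Sequential(
            nn.Linear(d_model, ffn_mult * d_model),
            nn.GELU(),
            nn.Linear(ffn_mult * d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        y = self.norm1(x).transpose(1, 2)   # (batch, d_model, seq_len) for Conv1d
        y = self.conv(y).transpose(1, 2)    # back to (batch, seq_len, d_model)
        x = x + y                           # residual around the convolution
        x = x + self.ffn(self.norm2(x))     # residual around the feed-forward sublayer
        return x

if __name__ == "__main__":
    block = ConvEncoderBlock()
    tokens = torch.randn(2, 128, 512)       # (batch, seq_len, d_model)
    print(block(tokens).shape)               # torch.Size([2, 128, 512])

Stacking such blocks gives a purely convolutional encoder that can be pre-trained and fine-tuned like a Transformer, which is the architectural comparison the paper investigates.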
97. Evaluation Examples are not Equally Informative: How should that change NLP Leaderboards? (BASE)
98. Beyond Offline Mapping: Learning Cross-lingual Word Embeddings through Context Anchoring (BASE)
99. Capturing Relations between Scientific Papers: An Abstractive Model for Related Work Section Generation (BASE)
100. Hate Speech Detection Based on Sentiment Knowledge Sharing (BASE)


Hits by source type: Catalogues: 0; Bibliographies: 0; Linked Open Data catalogues: 0; Online resources: 0; Open access documents: 1,423