
Search in the Catalogues and Directories

Hits 81–100 of 1,029

81. Meta Distant Transfer Learning for Pre-trained Language Models ... (BASE)
82. How to Train BERT with an Academic Budget ... (BASE)
83. Improving Span Representation for Domain-adapted Coreference Resolution ... (BASE)
84. Temporal Adaptation of BERT and Performance on Downstream Document Classification: Insights from Social Media ... (BASE)
85. An Empirical Study on Multiple Information Sources for Zero-Shot Fine-Grained Entity Typing ... (BASE)
86. Looking for Confirmations: An Effective and Human-Like Visual Dialogue Strategy ... (BASE)
87. MRF-Chat: Improving Dialogue with Markov Random Fields ... (BASE)
88. Exploring Metaphoric Paraphrase Generation ... (BASE)
89. CrossVQA: Scalably Generating Benchmarks for Systematically Testing VQA Generalization ... (BASE)
90. Latent Hatred: A Benchmark for Understanding Implicit Hate Speech ... (BASE)
91. HypMix: Hyperbolic Interpolative Data Augmentation ... (BASE)
92. STaCK: Sentence Ordering with Temporal Commonsense Knowledge ... (BASE)
93. ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning ... (BASE)
94. Weakly supervised discourse segmentation for multiparty oral conversations ... (BASE)
95. Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution ... (BASE)
96. Progressively Guide to Attend: An Iterative Alignment Framework for Temporal Sentence Grounding ... (BASE)
97. Knowledge Enhanced Fine-Tuning for Better Handling Unseen Entities in Dialogue Generation ... (BASE)
98. STANKER: Stacking Network based on Level-grained Attention-masked BERT for Rumor Detection on Social Media ... (BASE)
Anthology paper link: https://aclanthology.org/2021.emnlp-main.269/
Abstract: Rumor detection on social media puts pre-trained language models (LMs), such as BERT, and auxiliary features, such as comments, into use. However, on the one hand, rumor detection datasets in Chinese companies with comments are rare; on the other hand, intensive interaction of attention on Transformer-based models like BERT may hinder performance improvement. To alleviate these problems, we build a new Chinese microblog dataset named Weibo20 by collecting posts and associated comments from Sina Weibo and propose a new ensemble named STANKER (Stacking neTwork bAsed-on atteNtion-masKed BERT). STANKER adopts two level-grained attention-masked BERT (LGAM-BERT) models as base encoders. Unlike the original BERT, our new LGAM-BERT model takes comments as important auxiliary features and masks co-attention between posts and comments on lower layers. Experiments on Weibo20 and three existing social media datasets showed that STANKER ...
Keywords: Computational Linguistics; Language Models; Machine Learning; Machine Learning and Data Mining; Natural Language Processing
URL: https://dx.doi.org/10.48448/pt79-6q74
https://underline.io/lecture/37337-stanker-stacking-network-based-on-level-grained-attention-masked-bert-for-rumor-detection-on-social-media
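The core mechanism the abstract describes, blocking co-attention between post tokens and comment tokens in the lower layers while allowing full attention higher up, can be illustrated with a minimal sketch. Everything here (the function name, the 0/1 segment encoding, the per-layer boolean flag) is an assumption made for illustration, not the STANKER implementation:

```python
# Hypothetical sketch of a level-grained attention mask in the spirit of
# LGAM-BERT: in lower layers, post tokens may not attend to comment tokens
# (and vice versa); in upper layers attention is unrestricted.

def build_attention_mask(segment_ids, lower_layer):
    """Return an n x n boolean mask; True means attention is allowed.

    segment_ids: list of 0 (post token) or 1 (comment token).
    lower_layer: if True, block cross-segment (post<->comment) attention.
    """
    n = len(segment_ids)
    mask = [[True] * n for _ in range(n)]
    if lower_layer:
        for i in range(n):
            for j in range(n):
                if segment_ids[i] != segment_ids[j]:
                    # Mask post<->comment co-attention in lower layers.
                    mask[i][j] = False
    return mask

# Example: a 2-token post followed by a 2-token comment.
segments = [0, 0, 1, 1]
low = build_attention_mask(segments, lower_layer=True)
high = build_attention_mask(segments, lower_layer=False)
assert low[0][2] is False          # post token cannot see comment token
assert low[0][1] is True           # within-post attention stays allowed
assert all(all(row) for row in high)  # upper layers: no restriction
```

In a real Transformer stack, such a mask would be converted to additive form (0 for allowed, a large negative value for blocked positions) and applied inside scaled dot-product attention only in the designated lower layers.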
99. IndoNLG: Benchmark and Resources for Evaluating Indonesian Natural Language Generation ... (BASE)
100. SYSML: StYlometry with Structure and Multitask Learning: Implications for Darknet Forum Migrant Analysis ... (BASE)


© 2013 - 2024 Lin|gu|is|tik