
Search in the Catalogues and Directories

Hits 1 – 20 of 28

1. How does the pre-training objective affect what large language models learn about linguistic properties? ...
2. Automatic Identification and Classification of Bragging in Social Media ...
3. Analyzing Online Political Advertisements ...
4. Modeling the Severity of Complaints in Social Media ...
   Jin, Mali; Aletras, Nikolaos. - : arXiv, 2021
5. Translation Error Detection as Rationale Extraction ...
6. Knowledge Distillation for Quality Estimation ...
7. Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ...
8. Analyzing Online Political Advertisements ...
9. Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification ...
Abstract: Neural network architectures in natural language processing often use attention mechanisms to produce probability distributions over input token representations. Attention has empirically been demonstrated to improve performance in various tasks, while its weights have been extensively used as explanations for model predictions. Recent studies (Jain and Wallace, 2019; Serrano and Smith, 2019; Wiegreffe and Pinter, 2019) have shown that it cannot generally be considered a faithful explanation (Jacovi and Goldberg, 2020) across encoders and tasks. In this paper, we seek to improve the faithfulness of attention-based explanations for text classification. We achieve this by proposing a new family of Task-Scaling (TaSc) mechanisms that learn task-specific non-contextualised information to scale the original attention weights. Evaluation tests for explanation faithfulness show that the three proposed variants of TaSc improve attention-based ...
Read paper: https://www.aclanthology.org/2021.acl-long.40
Keywords: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
URL: https://underline.io/lecture/25394-improving-the-faithfulness-of-attention-based-explanations-with-task-specific-information-for-text-classification
https://dx.doi.org/10.48448/q32p-7d89
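The abstract above describes the Task-Scaling (TaSc) idea only at a high level: learn task-specific, non-contextualised importance scores and use them to rescale the encoder's original attention weights before pooling. The following is a minimal sketch of that idea, assuming a PyTorch setup; the class and attribute names (TaScAttention, token_importance) are illustrative placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TaScAttention(nn.Module):
    """Illustrative sketch of a TaSc-style mechanism: a learned,
    non-contextualised per-token score rescales the original attention
    weights before they pool the token representations."""

    def __init__(self, vocab_size: int, hidden_dim: int, num_classes: int = 2):
        super().__init__()
        # One task-specific importance score per vocabulary item,
        # independent of the surrounding context.
        self.token_importance = nn.Embedding(vocab_size, 1)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids, hidden_states, attention_weights):
        # token_ids: (batch, seq_len)
        # hidden_states: (batch, seq_len, hidden_dim) from the base encoder
        # attention_weights: (batch, seq_len) original attention distribution
        u = self.token_importance(token_ids).squeeze(-1)        # (batch, seq_len)
        scaled = attention_weights * torch.softmax(u, dim=-1)   # rescale attention
        scaled = scaled / scaled.sum(dim=-1, keepdim=True).clamp_min(1e-9)
        pooled = torch.einsum("bs,bsh->bh", scaled, hidden_states)
        # The rescaled weights double as the attention-based explanation.
        return self.classifier(pooled), scaled
```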
10. Enjoy the Salience: Towards Better Transformer-based Faithful Explanations with Word Salience ...
11. Modeling the Severity of Complaints in Social Media ...
    NAACL 2021 2021; Aletras, Nikolaos; Jin, Mali. - : Underline Science Inc., 2021
12. Active Learning by Acquiring Contrastive Examples ...
13. In Factuality: Efficient Integration of Relevant Facts for Visual Question Answering ...
14. Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ...
15. Knowledge Distillation for Quality Estimation ...
16. Machine Extraction of Tax Laws from Legislative Texts
    In: Proceedings of the Natural Legal Language Processing Workshop 2021 (2021)
17. Point-of-Interest Type Prediction using Text and Images ...
18. Point-of-Interest Type Prediction using Text and Images ...
19. An Empirical Study on Leveraging Position Embeddings for Target-oriented Opinion Words Extraction ...
20. Knowledge distillation for quality estimation
    Gajbhiye, Amit; Fomicheva, Marina; Alva-Manchego, Fernando. - : Association for Computational Linguistics, 2021


Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 28