
Search in the Catalogues and Directories

Hits 1 – 20 of 28 (page 1 of 2); all records from BASE.

1. How does the pre-training objective affect what large language models learn about linguistic properties? ...
2. Automatic Identification and Classification of Bragging in Social Media ...
3. Analyzing Online Political Advertisements ...
4. Modeling the Severity of Complaints in Social Media ...
   Jin, Mali; Aletras, Nikolaos. - : arXiv, 2021
5. Translation Error Detection as Rationale Extraction ...
6. Knowledge Distillation for Quality Estimation ...
7. Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ...
8. Analyzing Online Political Advertisements ...
9. Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification ...
10. Enjoy the Salience: Towards Better Transformer-based Faithful Explanations with Word Salience ...
    Anthology paper link: https://aclanthology.org/2021.emnlp-main.645/
    Abstract: Pretrained transformer-based models such as BERT have demonstrated state-of-the-art predictive performance when adapted to a range of natural language processing tasks. An open problem is how to improve the faithfulness of explanations (rationales) for the predictions of these models. In this paper, we hypothesize that salient information extracted a priori from the training data can complement the task-specific information learned by the model during fine-tuning on a downstream task. In this way, we aim to help BERT not forget to assign importance to informative input tokens when making predictions by proposing SALOSS, an auxiliary loss function that guides the multi-head attention mechanism during training to stay close to salient information extracted a priori using TextRank. Experiments on explanation faithfulness across five datasets show that models trained with SALOSS consistently provide more faithful explanations ...
    (A minimal code sketch of this auxiliary-loss idea follows the hit list below.)
    Keywords: Computational Linguistics; Language Models; Machine Learning; Machine Learning and Data Mining; Natural Language Processing
    URL: https://dx.doi.org/10.48448/whp5-c156
    https://underline.io/lecture/37385-enjoy-the-salience-towards-better-transformer-based-faithful-explanations-with-word-salience
11. Modeling the Severity of Complaints in Social Media ...
    NAACL 2021; Aletras, Nikolaos; Jin, Mali. - : Underline Science Inc., 2021
12. Active Learning by Acquiring Contrastive Examples ...
13. In Factuality: Efficient Integration of Relevant Facts for Visual Question Answering ...
14. Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ...
15. Knowledge Distillation for Quality Estimation ...
16. Machine Extraction of Tax Laws from Legislative Texts
    In: Proceedings of the Natural Legal Language Processing Workshop 2021 (2021)
17. Point-of-Interest Type Prediction using Text and Images ...
18. Point-of-Interest Type Prediction using Text and Images ...
19. An Empirical Study on Leveraging Position Embeddings for Target-oriented Opinion Words Extraction ...
20. Knowledge distillation for quality estimation
    Gajbhiye, Amit; Fomicheva, Marina; Alva-Manchego, Fernando. - : Association for Computational Linguistics, 2021
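Entry 10 above describes SALOSS, an auxiliary loss that steers BERT's multi-head attention toward token salience computed a priori with TextRank. The abstract does not give the exact formulation, so the Python sketch below is only an illustration of the general idea: a toy positional PageRank stands in for TextRank, and KL divergence is an assumed choice of distance between the pooled attention distribution and the salience distribution. All names (toy_textrank_salience, saloss_style_penalty, lambda_sal) are hypothetical, not from the paper.

    # Illustrative SALOSS-style auxiliary objective; the KL distance and the
    # toy positional PageRank standing in for TextRank are assumptions, not
    # the paper's exact formulation.
    import torch
    import torch.nn.functional as F

    def toy_textrank_salience(num_tokens: int, window: int = 2,
                              damping: float = 0.85, iters: int = 30) -> torch.Tensor:
        # Build a co-occurrence graph over token positions within a small
        # window (a simplified stand-in for TextRank), then run PageRank
        # power iteration to score each token.
        adj = torch.zeros(num_tokens, num_tokens)
        for i in range(num_tokens):
            for j in range(max(0, i - window), min(num_tokens, i + window + 1)):
                if i != j:
                    adj[i, j] = 1.0
        trans = adj / adj.sum(dim=1, keepdim=True).clamp(min=1e-9)  # row-stochastic
        rank = torch.full((num_tokens,), 1.0 / num_tokens)
        for _ in range(iters):
            rank = (1 - damping) / num_tokens + damping * (trans.t() @ rank)
        return rank / rank.sum()  # normalise to a probability distribution

    def saloss_style_penalty(attention: torch.Tensor,
                             salience: torch.Tensor) -> torch.Tensor:
        # attention: (heads, seq, seq) self-attention weights from one layer.
        # Pool over heads and query positions to get the attention each token
        # receives, then penalise divergence from the a-priori salience.
        received = attention.mean(dim=0).mean(dim=0)
        received = received / received.sum().clamp(min=1e-9)
        return F.kl_div(received.log(), salience, reduction="sum")

    # Usage: the auxiliary term is added to the task loss with a weighting
    # hyper-parameter (lambda_sal is a hypothetical name).
    attn = torch.softmax(torch.randn(8, 12, 12), dim=-1)  # fake attention, 8 heads
    salience = toy_textrank_salience(num_tokens=12)
    task_loss = torch.tensor(0.7)                         # placeholder task loss
    lambda_sal = 0.1
    total_loss = task_loss + lambda_sal * saloss_style_penalty(attn, salience)
    print(f"total loss: {total_loss.item():.4f}")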


Result counts by source: Catalogues 0; Bibliographies 0; Linked Open Data catalogues 0; Online resources 0; Open access documents 28.