
Search in the Catalogues and Directories

Hits 1 – 20 of 64

1. Fairlex: A multilingual benchmark for evaluating fairness in legal text processing ... (BASE)
2. Fairlex: A multilingual benchmark for evaluating fairness in legal text processing ... (BASE)
3. UK-LEX Dataset - Part of Chalkidis and Søgaard (2022) ... Chalkidis, Ilias; Søgaard, Anders. Zenodo, 2022. (BASE)
4. UK-LEX Dataset - Part of Chalkidis and Søgaard (2022) ... Chalkidis, Ilias; Søgaard, Anders. Zenodo, 2022. (BASE)
5. FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing ... (BASE)
6. Generalized Quantifiers as a Source of Error in Multilingual NLU Benchmarks ... (BASE)
7. Challenges and Strategies in Cross-Cultural NLP ... (BASE)
8. Factual Consistency of Multilingual Pretrained Language Models ... (BASE)
9. Zero-Shot Dependency Parsing with Worst-Case Aware Automated Curriculum Learning ... (BASE)
10. How Conservative are Language Models? Adapting to the Introduction of Gender-Neutral Pronouns ... (BASE)
11. Replicating and Extending "Because Their Treebanks Leak": Graph Isomorphism, Covariants, and Parser Performance ... (BASE)
12. The Impact of Positional Encodings on Multilingual Compression ... (BASE)
13. Minimax and Neyman–Pearson Meta-Learning for Outlier Languages ... (BASE)
14. Evaluation of Summarization Systems across Gender, Age, and Race ... (BASE)
15. Locke's Holiday: Belief Bias in Machine Reading ... (BASE)
16. Dynamic Forecasting of Conversation Derailment ... (BASE)
17. Replicating and Extending "Because Their Treebanks Leak": Graph Isomorphism, Covariants, and Parser Performance ... (BASE)
18. Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color ... (BASE)
19. Spurious Correlations in Cross-Topic Argument Mining ... (BASE)
    Abstract: Recent work in cross-topic argument mining attempts to learn models that generalize across topics rather than merely relying on within-topic spurious correlations. We examine the effectiveness of this approach by analyzing the output of single-task and multi-task models for cross-topic argument mining through a combination of linear approximations of their decision boundaries, manual feature grouping, challenge examples, and ablations across the input vocabulary. Surprisingly, we show that cross-topic models still rely mostly on spurious correlations and only generalize within closely related topics, e.g., a model trained only on closed-class words and a few common open-class words outperforms a state-of-the-art cross-topic model on distant target topics. ...
    Keywords: Computational Linguistics; Condensed Matter Physics; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Semantics
    URL: https://dx.doi.org/10.48448/fbq0-ns14
    URL: https://underline.io/lecture/29782-spurious-correlations-in-cross-topic-argument-mining
20. Minimax and Neyman–Pearson Meta-Learning for Outlier Languages ... (BASE)


Hits by source type:
Catalogues: 8
Bibliographies: 10
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 51