
Search in the Catalogues and Directories

Hits 1 – 20 of 130

1. WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation (BASE)
2. Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection (BASE)
3. Probing Across Time: What Does RoBERTa Know and When? (BASE)
4. Specializing Multilingual Language Models: An Empirical Study. Chau, Ethan C.; Smith, Noah A. arXiv, 2021. (BASE)
5. Provable Limitations of Acquiring Meaning from Ungrounded Form: What will Future Language Models Understand? (BASE)
6. Measuring Association Between Labels and Free-Text Rationales (BASE)
7. Promoting Graph Awareness in Linearized Graph-to-Text Generation (BASE)
8. Challenges in Automated Debiasing for Toxic Language Detection (BASE)
9. NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics. Lu, Ximing; Welleck, Sean; West, Peter. arXiv, 2021. (BASE)
10. Effects of Parameter Norm Growth During Transformer Training: Inductive Bias from Gradient Descent (BASE)
11. Competency Problems: On Finding and Removing Artifacts in Language Data (BASE)
   Anthology paper link: https://aclanthology.org/2021.emnlp-main.135/
   Abstract: Much recent work in NLP has documented dataset artifacts, bias, and spurious correlations between input features and output labels. However, how to tell which features have "spurious" instead of legitimate correlations is typically left unspecified. In this work we argue that for complex language understanding tasks, all simple feature correlations are spurious, and we formalize this notion into a class of problems which we call competency problems. For example, the word "amazing" on its own should not give information about a sentiment label independent of the context in which it appears, which could include negation, metaphor, sarcasm, etc. We theoretically analyze the difficulty of creating data for competency problems when human bias is taken into account, showing that realistic datasets will increasingly deviate from competency problems as dataset size increases. This analysis gives us a simple statistical test for dataset ...
   Keywords: Language Models; Natural Language Processing; Semantic Evaluation; Sociolinguistics
   URL: https://underline.io/lecture/37929-competency-problems-on-finding-and-removing-artifacts-in-language-data
   DOI: https://dx.doi.org/10.48448/xnpn-5692
12. Extracting and Inferring Personal Attributes from Dialogue. Wang, Zhilin. 2021. (BASE)
13. Positive AI with Social Commonsense Models. Sap, Maarten. 2021. (BASE)
14. Semantic Comparisons for Natural Language Processing Applications. Lin, Lucy. 2021. (BASE)
15. Challenges in Automated Debiasing for Toxic Language Detection. Zhou, Xuhui. 2021. (BASE)
16. Parsing with Multilingual BERT, a Small Corpus, and a Small Treebank (BASE)
17. The Multilingual Amazon Reviews Corpus (BASE)
18. Unsupervised Bitext Mining and Translation via Self-trained Contextual Embeddings (BASE)
19. Evaluating Models' Local Decision Boundaries via Contrast Sets (BASE)
20. Grounded Compositional Outputs for Adaptive Language Modeling (BASE)


Hits by source:
Catalogues: 4
Bibliographies: 7
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 122
© 2013 – 2024 Lin|gu|is|tik