
Search in the Catalogues and Directories

Page: 1 2 3 4 5 6 7 ... 52
Hits 41–60 of 1,029

41. KOAS: Korean Text Offensiveness Analysis System ... (BASE)
42. Contrastive Code Representation Learning ... (BASE)
43. Does Putting a Linguist in the Loop Improve NLU Data Collection ... (BASE)
44. What are we learning from language? ... (BASE)
45. Machine Translation Decoding beyond Beam Search ... (BASE)
46. Say `YES' to Positivity: Detecting Toxic Language in Workplace Communications ... (BASE)
47. Unsupervised Multi-View Post-OCR Error Correction With Language Models ... (BASE)
48. AttentionRank: Unsupervised Keyphrase Extraction using Self and Cross Attentions ... (BASE)
49. ProtoInfoMax: Prototypical Networks with Mutual Information Maximization for Out-of-Domain Detection ... (BASE)
50. Multi-granularity Textual Adversarial Attack with Behavior Cloning ... (BASE)
51. Automatic Fact-Checking with Document-level Annotations using BERT and Multiple Instance Learning ... (BASE)
52. Towards the Early Detection of Child Predators in Chat Rooms: A BERT-based Approach ... (BASE)
53. TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning ... (BASE)
54. WebSRC: A Dataset for Web-Based Structural Reading Comprehension ... (BASE)
55. Improving Math Word Problems with Pre-trained Knowledge and Hierarchical Reasoning ... (BASE)
56. Semantic Categorization of Social Knowledge for Commonsense Question Answering ... (BASE)
57. Adversarial Examples for Evaluating Math Word Problem Solvers ... (BASE)
58. Pre-train or Annotate? Domain Adaptation with a Constrained Budget ... (BASE)
59. Corpus-based Open-Domain Event Type Induction ... (BASE)
60. Learning with Different Amounts of Annotation: From Zero to Many Labels ... (BASE)
Abstract: Training NLP systems typically assumes access to annotated data that has a single human label per example. Given imperfect labeling from annotators and the inherent ambiguity of language, we hypothesize that a single label is not sufficient to learn the spectrum of language interpretation. We explore new annotation distribution schemes, assigning multiple labels per example for a small subset of training examples. Introducing such multi-label examples at the cost of annotating fewer examples brings clear gains on a natural language inference task and an entity typing task, even when we simply first train with single-label data and then fine-tune with multi-label examples. Extending a MixUp data augmentation framework, we propose a learning algorithm that can learn from training examples with different amounts of annotation (with zero, one, or multiple labels). This algorithm efficiently combines signals from uneven training data and brings ...
Keywords: Computational Linguistics; Machine Learning; Machine Learning and Data Mining; Natural Language Processing
Anthology paper link: https://aclanthology.org/2021.emnlp-main.601/
URL: https://underline.io/lecture/37576-learning-with-different-amounts-of-annotation-from-zero-to-many-labels
DOI: https://dx.doi.org/10.48448/ys77-8923
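The MixUp-style scheme described in the abstract of hit 60 can be sketched roughly as follows. This is an illustrative assumption, not the authors' implementation: labels with zero, one, or many annotations are first converted to soft label distributions (the uniform choice for zero-label examples is a guess), and two examples are then interpolated in both input and label space, as in standard MixUp.

```python
import numpy as np

def to_label_distribution(labels, num_classes):
    """Turn zero, one, or many annotator labels into a soft distribution.

    - many labels: normalized vote counts
    - one label:   a one-hot vector
    - zero labels: a uniform distribution (an illustrative choice; the
      paper's actual handling of unlabeled examples may differ)
    """
    if not labels:
        return np.full(num_classes, 1.0 / num_classes)
    dist = np.zeros(num_classes)
    for y in labels:
        dist[y] += 1.0
    return dist / dist.sum()

def mixup(x1, labels1, x2, labels2, num_classes, alpha=0.4, rng=None):
    """Standard MixUp: interpolate inputs and label distributions
    with a Beta(alpha, alpha)-distributed mixing coefficient."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = (lam * to_label_distribution(labels1, num_classes)
         + (1.0 - lam) * to_label_distribution(labels2, num_classes))
    return x, y
```

Because every example is reduced to a distribution over classes before mixing, singly-labeled, multiply-labeled, and unlabeled examples can all enter the same interpolation, which is the sense in which the algorithm "combines signals from uneven training data."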


© 2013–2024 Lin|gu|is|tik