
Search in the Catalogues and Directories

Hits 61–80 of 1,029 (page 4 of 52)

61. Extracting Event Temporal Relations via Hyperbolic Geometry ...
62. FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging ...
63. Contrastive Explanations for Model Interpretability ...
64. Open Aspect Target Sentiment Classification with Natural Language Prompts ...
65. Stepmothers are mean and academics are pretentious: What do pretrained language models learn about you? ...
66. We've had this conversation before: A Novel Approach to Measuring Dialog Similarity ...
67. ESTER: A Machine Reading Comprehension Dataset for Reasoning about Event Semantic Relations ...
68. Visual Goal-Step Inference using wikiHow ...
69. CLIFF: Contrastive Learning for Improving Faithfulness and Factuality in Abstractive Summarization ...
70. Truth-Conditional Captions for Time Series Data ...
71. Knowledge Base Completion Meets Transfer Learning ...
72. Partially Supervised Named Entity Recognition via the Expected Entity Ratio Loss ...
73. Honey or Poison? Solving the Trigger Curse in Few-shot Event Detection via Causal Intervention ...
74. Analyzing the Surprising Variability in Word Embedding Stability Across Languages ...
75. Neural Machine Translation with Heterogeneous Topic Knowledge Embeddings ...
76. Towards Zero-Shot Knowledge Distillation for Natural Language Processing ...
77. Corrected CBOW Performs as well as Skip-gram ...
Abstract: Mikolov et al. (2013a) observed that continuous bag-of-words (CBOW) word embeddings tend to underperform Skip-gram (SG) embeddings, and this finding has been reported in subsequent works. We find that these observations are driven not by fundamental differences in their training objectives but more likely by faulty negative-sampling CBOW implementations in popular libraries such as the official implementation, word2vec.c, and Gensim. We show that after correcting a bug in the CBOW gradient update, one can learn CBOW word embeddings that are fully competitive with SG on various intrinsic and extrinsic tasks, while being many times faster to train.
Keywords: Computational Linguistics; Information Extraction; Machine Learning; Machine Learning and Data Mining; Natural Language Processing
URL: https://dx.doi.org/10.48448/x9a3-s674
https://underline.io/lecture/39446-corrected-cbow-performs-as-well-as-skip-gram
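Editorial note: the abstract above does not spell out the bug, only that it lies in the CBOW gradient update for negative sampling. A minimal NumPy sketch of one plausible reading follows: because the CBOW hidden layer is the *mean* of the context vectors, the chain rule gives each context vector a 1/k share of the hidden-layer gradient, and applying the full gradient to every context vector omits that factor. All function and variable names below are illustrative, not taken from word2vec.c or Gensim.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cbow_negative_sampling_step(W_in, W_out, ctx_ids, target_id, neg_ids, lr=0.025):
        """One sketched gradient step for a single (context window, target) pair.

        W_in:  (V, d) input/context embedding matrix
        W_out: (V, d) output embedding matrix
        ctx_ids:   indices of the context words
        target_id: index of the true center word
        neg_ids:   indices of the sampled negative words
        """
        # Hidden layer: CBOW averages the k context vectors.
        k = len(ctx_ids)
        h = W_in[ctx_ids].mean(axis=0)

        # Accumulate the gradient w.r.t. h over the positive target and the
        # sampled negatives, updating the output vectors along the way.
        grad_h = np.zeros_like(h)
        for wid, label in [(target_id, 1.0)] + [(nid, 0.0) for nid in neg_ids]:
            g = sigmoid(W_out[wid] @ h) - label   # d(logistic loss)/d(score)
            grad_h += g * W_out[wid]
            W_out[wid] -= lr * g * h

        # Corrected update (assumed reading of the bug): since h is the mean of
        # k context vectors, each context vector receives grad_h / k. Applying
        # grad_h without the 1/k factor effectively multiplies the context-side
        # learning rate by the window size.
        W_in[ctx_ids] -= lr * grad_h / k

Under this reading, the uncorrected update does not change the direction of learning, only its scale, which is consistent with the abstract's claim that the objectives themselves are not fundamentally different.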
78. SIMMC 2.0: A Task-oriented Dialog Dataset for Immersive Multimodal Conversations ...
79. Automatic Text Evaluation through the Lens of Wasserstein Barycenters ...
80. Combining sentence and table evidence to predict veracity of factual claims using TaPaS and RoBERTa ...


Sources: Open access documents (BASE): 1,029. Catalogues, bibliographies, Linked Open Data catalogues, and other online resources: 0.