61 | Extracting Event Temporal Relations via Hyperbolic Geometry
62 | FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging
64 | Open Aspect Target Sentiment Classification with Natural Language Prompts
65 | Stepmothers are mean and academics are pretentious: What do pretrained language models learn about you?
66 | We've had this conversation before: A Novel Approach to Measuring Dialog Similarity
67 | ESTER: A Machine Reading Comprehension Dataset for Reasoning about Event Semantic Relations
69 | CLIFF: Contrastive Learning for Improving Faithfulness and Factuality in Abstractive Summarization
70 | Truth-Conditional Captions for Time Series Data
Abstract:
Anthology paper link: https://aclanthology.org/2021.emnlp-main.55/
In this paper, we explore the task of automatically generating natural language descriptions of salient patterns in a time series, such as the stock prices of a company over a week. A model for this task should be able to extract high-level patterns such as the presence of a peak or a dip. While typical contemporary neural models with attention mechanisms can generate fluent output descriptions for this task, they often generate factually incorrect descriptions. We propose a computational model with a truth-conditional architecture which first runs small learned programs on the input time series, then identifies the programs/patterns which hold true for the given input, and finally conditions on only the chosen valid program (rather than the input time series) to generate the output text description. A program in our model is constructed from modules, which are small neural networks that are designed to capture numerical ...
Keywords:
Computational Linguistics; Machine Learning; Machine Learning and Data Mining; Natural language generation; Natural Language Processing
URLs:
https://dx.doi.org/10.48448/zpvw-qs56
https://underline.io/lecture/37906-truth-conditional-captions-for-time-series-data
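The truth-conditional pipeline described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: in the paper the pattern programs are composed from small learned neural modules, whereas here `has_peak`, `has_dip`, `is_rising`, and the caption templates are hand-written stand-ins chosen only to show the control flow (run programs, keep the ones that hold true, generate from a valid program alone rather than from the raw series).

```python
# Illustrative sketch of truth-conditional captioning (hand-written stand-ins
# for the paper's learned neural modules and generator).
from typing import Callable, List, Tuple

Series = List[float]

def has_peak(xs: Series) -> bool:
    """True if some interior point is strictly above both neighbours."""
    return any(xs[i - 1] < xs[i] > xs[i + 1] for i in range(1, len(xs) - 1))

def has_dip(xs: Series) -> bool:
    """True if some interior point is strictly below both neighbours."""
    return any(xs[i - 1] > xs[i] < xs[i + 1] for i in range(1, len(xs) - 1))

def is_rising(xs: Series) -> bool:
    """True if the series ends higher than it starts."""
    return xs[-1] > xs[0]

# Each (name, program) pair stands in for a learned program in the paper.
PROGRAMS: List[Tuple[str, Callable[[Series], bool]]] = [
    ("peak", has_peak),
    ("dip", has_dip),
    ("rising", is_rising),
]

# Stand-in for the neural text generator, keyed by the chosen program only.
TEMPLATES = {
    "peak": "The series shows a peak during the period.",
    "dip": "The series dips before recovering.",
    "rising": "The series trends upward overall.",
}

def caption(xs: Series) -> str:
    # Step 1: run every program on the input; keep those that hold true.
    valid = [name for name, prog in PROGRAMS if prog(xs)]
    if not valid:
        return "No salient pattern detected."
    # Step 2: condition generation ONLY on a chosen valid program, never on
    # the raw series, so the caption cannot assert a pattern that is false.
    return TEMPLATES[valid[0]]

print(caption([1.0, 3.0, 2.0, 2.5]))  # peak at index 1
```

Because the generator never sees the raw series, a caption can only mention a pattern whose program evaluated to true, which is the mechanism the abstract credits for avoiding factually incorrect descriptions.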
72 | Partially Supervised Named Entity Recognition via the Expected Entity Ratio Loss
73 | Honey or Poison? Solving the Trigger Curse in Few-shot Event Detection via Causal Intervention
74 | Analyzing the Surprising Variability in Word Embedding Stability Across Languages
75 | Neural Machine Translation with Heterogeneous Topic Knowledge Embeddings
76 | Towards Zero-Shot Knowledge Distillation for Natural Language Processing
78 | SIMMC 2.0: A Task-oriented Dialog Dataset for Immersive Multimodal Conversations
79 | Automatic Text Evaluation through the Lens of Wasserstein Barycenters
80 | Combining sentence and table evidence to predict veracity of factual claims using TaPaS and RoBERTa