
Search in the Catalogues and Directories

Hits 1 – 11 of 11

1
The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics
In: Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021), Aug 2021, Online, France, pp. 96–120. ⟨10.18653/v1/2021.gem-1.10⟩. https://hal.archives-ouvertes.fr/hal-03466171 (2021)
BASE
2
The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics ...
3
Towards Syntax-Aware Dialogue Summarization using Multi-task Learning ...
4
Who speaks like a style of Vitamin: Towards Syntax-Aware Dialogue Summarization using Multi-task Learning ...
5
Natural language processing methods are sensitive to sub-clinical linguistic differences in schizophrenia spectrum disorders
In: NPJ Schizophr (2021)
6
Measuring the 'I don't know' Problem through the Lens of Gricean Quantity ...
Khayrallah, Huda; Sedoc, João. - : arXiv, 2020
7
SMRT Chatbots: Improving Non-Task-Oriented Dialog with Simulated Multiple Reference Training ...
Khayrallah, Huda; Sedoc, João. - : arXiv, 2020
8
Complexity-Weighted Loss and Diverse Reranking for Sentence Simplification ...
Abstract: Sentence simplification is the task of rewriting texts so they are easier to understand. Recent research has applied sequence-to-sequence (Seq2Seq) models to this task, focusing largely on training-time improvements via reinforcement learning and memory augmentation. One of the main problems with applying generic Seq2Seq models for simplification is that these models tend to copy directly from the original sentence, resulting in outputs that are relatively long and complex. We aim to alleviate this issue through the use of two main techniques. First, we incorporate content word complexities, as predicted with a leveled word complexity model, into our loss function during training. Second, we generate a large set of diverse candidate simplifications at test time, and rerank these to promote fluency, adequacy, and simplicity. Here, we measure simplicity through a novel sentence complexity model. These extensions allow our models to perform competitively with state-of-the-art systems while generating simpler ...
Comment: 11 pages, North American Chapter of the Association for Computational Linguistics (NAACL 2019)
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://dx.doi.org/10.48550/arxiv.1904.02767
https://arxiv.org/abs/1904.02767
9
Comparison of Diverse Decoding Methods from Conditional Language Models ...
10
Learning Word Ratings for Empathy and Distress from Document-Level User Responses ...
11
Unsupervised Post-processing of Word Vectors via Conceptor Negation ...

© 2013 - 2024 Lin|gu|is|tik