81. Meta Distant Transfer Learning for Pre-trained Language Models
83. Improving Span Representation for Domain-adapted Coreference Resolution
84. Temporal Adaptation of BERT and Performance on Downstream Document Classification: Insights from Social Media
85. An Empirical Study on Multiple Information Sources for Zero-Shot Fine-Grained Entity Typing
86. Looking for Confirmations: An Effective and Human-Like Visual Dialogue Strategy
89. CrossVQA: Scalably Generating Benchmarks for Systematically Testing VQA Generalization
90. Latent Hatred: A Benchmark for Understanding Implicit Hate Speech
92. STaCK: Sentence Ordering with Temporal Commonsense Knowledge
93. ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning
94. Weakly supervised discourse segmentation for multiparty oral conversations
95. Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution
96. Progressively Guide to Attend: An Iterative Alignment Framework for Temporal Sentence Grounding
97. Knowledge Enhanced Fine-Tuning for Better Handling Unseen Entities in Dialogue Generation
Abstract:
Anthology paper link: https://aclanthology.org/2021.emnlp-main.179/
Although pre-trained models have achieved great success in dialogue generation, their performance drops dramatically when the input contains an entity that does not appear in the pre-training and fine-tuning datasets (an unseen entity). To address this issue, existing methods leverage an external knowledge base to generate appropriate responses. In real-world scenarios, however, the entity may not be covered by the knowledge base, or the retrieved knowledge may be imprecise. To deal with this problem, instead of introducing a knowledge base as input, we force the model to learn a better semantic representation by predicting the information in the knowledge base based only on the input context. Specifically, with the help of a knowledge base, we introduce two auxiliary training objectives: 1) Interpret Masked Word, which conjectures the meaning of the masked entity given the context; 2) Hypernym Generation, which predicts the hypernym of ...
Keyword:
Computational Linguistics; Machine Learning; Machine Learning and Data Mining; Natural Language Processing
URL: https://underline.io/lecture/37618-knowledge-enhanced-fine-tuning-for-better-handling-unseen-entities-in-dialogue-generation https://dx.doi.org/10.48448/pq9s-y744
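The abstract describes adding two auxiliary objectives alongside the usual response-generation loss during fine-tuning. As a minimal sketch of that multi-task setup (the function name, loss arguments, and weights `alpha`/`beta` are illustrative assumptions, not details taken from the paper):

```python
def multitask_loss(generation_loss: float,
                   interpret_masked_word_loss: float,
                   hypernym_generation_loss: float,
                   alpha: float = 0.5,
                   beta: float = 0.5) -> float:
    """Weighted sum of the response-generation loss and the two
    auxiliary objectives (Interpret Masked Word, Hypernym Generation).
    The weights alpha and beta are hypothetical hyperparameters."""
    return (generation_loss
            + alpha * interpret_masked_word_loss
            + beta * hypernym_generation_loss)

# Example with equal auxiliary weighting:
# multitask_loss(1.0, 2.0, 4.0) -> 1.0 + 0.5*2.0 + 0.5*4.0 = 4.0
```

In practice each term would be a cross-entropy loss from the shared encoder-decoder; the sketch only shows how the objectives are combined into a single training signal.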
98. STANKER: Stacking Network based on Level-grained Attention-masked BERT for Rumor Detection on Social Media
99. IndoNLG: Benchmark and Resources for Evaluating Indonesian Natural Language Generation
100. SYSML: StYlometry with Structure and Multitask Learning: Implications for Darknet Forum Migrant Analysis