1 | Automatic Speech Recognition Datasets in Cantonese: A Survey and New Dataset ... (BASE)
2 | ASCEND: A Spontaneous Chinese-English Dataset for Code-switching in Multi-turn Conversation ...
3 | Continual Mixed-Language Pre-Training for Extremely Low-Resource Neural Machine Translation ...
5 | BiToD: A Bilingual Multi-Domain Dataset For Task-Oriented Dialogue Modeling ...
7 | IndoNLG: Benchmark and Resources for Evaluating Indonesian Natural Language Generation ...
8 | Vision Guided Generative Pre-trained Language Models for Multimodal Abstractive Summarization ...
9 | Zero-Shot Dialogue State Tracking via Cross-Task Transfer ...
11 | Adapting High-resource NMT Models to Translate Low-resource Related Languages without Parallel Data ...
12 | Learning Fast Adaptation on Cross-Accented Speech Recognition ...
13 | Exploring Fine-tuning Techniques for Pre-trained Cross-lingual Models via Continual Learning ...
15 | Meta-Transfer Learning for Code-Switched Speech Recognition ...
16 | On the Importance of Word Order Information in Cross-lingual Sequence Labeling ...
17 | Multilingual and Interlingual Semantic Representations for Natural Language Processing: A Brief Introduction. In: Computational Linguistics, Vol. 46, Iss. 2, pp. 249-255 (2020)
18 | Attention-Informed Mixed-Language Training for Zero-shot Cross-lingual Task-oriented Dialogue Systems ...
Abstract:
Recently, data-driven task-oriented dialogue systems have achieved promising performance in English. However, developing dialogue systems that support low-resource languages remains a long-standing challenge due to the absence of high-quality data. To circumvent expensive and time-consuming data collection, we introduce Attention-Informed Mixed-Language Training (MLT), a novel zero-shot adaptation method for cross-lingual task-oriented dialogue systems. It leverages very few task-related parallel word pairs to generate code-switching sentences for learning the inter-lingual semantics across languages. Instead of manually selecting the word pairs, we propose to extract source words based on the scores computed by the attention layer of a trained English task-related model, and then generate word pairs using existing bilingual dictionaries. Furthermore, intensive experiments with different cross-lingual embeddings demonstrate the effectiveness of our approach. Finally, with very few word pairs, our ...
Note: Accepted as an oral presentation at AAAI 2020.
Keyword:
Computation and Language (cs.CL); FOS: Computer and information sciences; Machine Learning (cs.LG)
URL: https://arxiv.org/abs/1911.09273
DOI: https://dx.doi.org/10.48550/arxiv.1911.09273
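The attention-informed word-pair selection this abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the attention scores, top-k replacement budget, and toy English-Spanish lexicon are all hypothetical assumptions standing in for a trained model's attention layer and a real bilingual dictionary.

```python
# Sketch of attention-informed code-switching data generation:
# replace the most-attended source words with their dictionary
# translations to produce a mixed-language training sentence.
# Scores and lexicon below are illustrative, not from the paper.

def code_switch(tokens, attention_scores, bilingual_dict, k=2):
    """Swap out the k highest-attention words that have a
    translation, yielding a code-switched sentence."""
    # Rank token positions by attention score, highest first.
    ranked = sorted(range(len(tokens)),
                    key=lambda i: attention_scores[i], reverse=True)
    swapped = list(tokens)
    replaced = 0
    for i in ranked:
        if replaced >= k:
            break
        if tokens[i] in bilingual_dict:
            swapped[i] = bilingual_dict[tokens[i]]  # target-language word
            replaced += 1
    return swapped

# Toy example with a hypothetical English-Spanish lexicon.
tokens = ["book", "a", "flight", "to", "london"]
scores = [0.30, 0.05, 0.40, 0.05, 0.20]
lexicon = {"book": "reservar", "flight": "vuelo", "london": "londres"}
print(code_switch(tokens, scores, lexicon))
# -> ['reservar', 'a', 'vuelo', 'to', 'london']
```

In the paper's setting, the scores would come from the attention layer of a trained English task-oriented model, so the swapped words are exactly the ones most important to the task.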
19 | Zero-shot Cross-lingual Dialogue Systems with Transferable Latent Variables ...
20 | Towards Universal End-to-End Affect Recognition from Multilingual Speech by ConvNets ...