1. Continual Mixed-Language Pre-Training for Extremely Low-Resource Neural Machine Translation
3. BiToD: A Bilingual Multi-Domain Dataset For Task-Oriented Dialogue Modeling
5. Adapting High-resource NMT Models to Translate Low-resource Related Languages without Parallel Data
6. Learning Fast Adaptation on Cross-Accented Speech Recognition
7. Exploring Fine-tuning Techniques for Pre-trained Cross-lingual Models via Continual Learning
9. Meta-Transfer Learning for Code-Switched Speech Recognition
10. On the Importance of Word Order Information in Cross-lingual Sequence Labeling
11. Attention-Informed Mixed-Language Training for Zero-shot Cross-lingual Task-oriented Dialogue Systems
12. Zero-shot Cross-lingual Dialogue Systems with Transferable Latent Variables
13. Towards Universal End-to-End Affect Recognition from Multilingual Speech by ConvNets
14. Code-Switched Language Models Using Neural Based Synthetic Data from Parallel Sentences
15. Hierarchical Meta-Embeddings for Code-Switching Named Entity Recognition
16. GlobalTrait: Personality Alignment of Multilingual Word Embeddings

Abstract: We propose a multilingual model to recognize Big Five personality traits from text data in four languages: English, Spanish, Dutch, and Italian. Our analysis shows that words with similar semantic meanings in different languages do not necessarily correspond to the same personality traits. We therefore propose a personality alignment method, GlobalTrait, which learns a mapping for each trait from the source language to the target language (English), such that words that correlate positively with each trait lie close together in the multilingual vector space. Training on these aligned embeddings lets us transfer personality-related features from high-resource languages such as English to other low-resource languages, and achieves better multilingual results than simple monolingual and unaligned multilingual embeddings. We achieve an average F-score increase (across all three languages except English) from 65 to 73.4 (+8.4) when comparing our monolingual model to ...

Comment: Submitted and accepted to AAAI 2019 conference

Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences

URL: https://arxiv.org/abs/1811.00240 | DOI: https://dx.doi.org/10.48550/arxiv.1811.00240
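The abstract describes learning, for each personality trait, a mapping that moves source-language word vectors into the English embedding space so that trait-correlated words end up close together. The paper's actual objective and training procedure are not given here; as a minimal sketch only, one generic way to fit such a per-trait map is a linear least-squares transform over pairs of trait-correlated seed words (all names and data below are hypothetical):

```python
import numpy as np

def learn_trait_mapping(src_vecs, tgt_vecs):
    """Fit a linear map W so that src_vecs @ W approximates tgt_vecs.

    In the GlobalTrait setting, one such map would be learned per Big Five
    trait, from source-language seed words to their English counterparts.
    """
    W, *_ = np.linalg.lstsq(src_vecs, tgt_vecs, rcond=None)
    return W

# Toy example: 3-dimensional embeddings, 4 seed-word pairs (synthetic data).
rng = np.random.default_rng(0)
src = rng.normal(size=(4, 3))        # source-language seed-word vectors
true_map = rng.normal(size=(3, 3))   # pretend ground-truth transform
tgt = src @ true_map                 # corresponding English-space vectors

W = learn_trait_mapping(src, tgt)
aligned = src @ W                    # source vectors mapped into English space
print(np.allclose(aligned, tgt, atol=1e-6))
```

A classifier trained on the aligned vectors can then reuse English training signal for the low-resource languages, which is the transfer effect the abstract reports.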
17. Learn to Code-Switch: Data Augmentation using Copy Mechanism on Language Modeling
18. Mem2Seq: Effectively Incorporating Knowledge Bases into End-to-End Task-Oriented Dialog Systems
19. Bilingual Character Representation for Efficiently Addressing Out-of-Vocabulary Words in Code-Switching Named Entity Recognition
20. Code-Switching Language Modeling using Syntax-Aware Multi-Task Learning