
Search in the Catalogues and Directories

Hits 1 – 20 of 33

1. Continual Mixed-Language Pre-Training for Extremely Low-Resource Neural Machine Translation
2. Language Models are Few-shot Multilingual Learners
3. BiToD: A Bilingual Multi-Domain Dataset For Task-Oriented Dialogue Modeling
4. Are Multilingual Models Effective in Code-Switching?
5. Adapting High-resource NMT Models to Translate Low-resource Related Languages without Parallel Data
6. Learning Fast Adaptation on Cross-Accented Speech Recognition
7. Exploring Fine-tuning Techniques for Pre-trained Cross-lingual Models via Continual Learning
8. XPersona: Evaluating Multilingual Personalized Chatbot
9. Meta-Transfer Learning for Code-Switched Speech Recognition
10. On the Importance of Word Order Information in Cross-lingual Sequence Labeling
11. Attention-Informed Mixed-Language Training for Zero-shot Cross-lingual Task-oriented Dialogue Systems
12. Zero-shot Cross-lingual Dialogue Systems with Transferable Latent Variables
    Liu, Zihan; Shin, Jamin; Xu, Yan. arXiv, 2019.
13. Towards Universal End-to-End Affect Recognition from Multilingual Speech by ConvNets
14. Code-Switched Language Models Using Neural Based Synthetic Data from Parallel Sentences
15. Hierarchical Meta-Embeddings for Code-Switching Named Entity Recognition
Abstract: In countries where multiple main languages are spoken, mixing different languages within a conversation is commonly called code-switching. Previous work addressing this challenge mainly focused on word-level aspects such as word embeddings. However, in many cases languages share common subwords, especially closely related languages, but also languages that seem unrelated. Therefore, we propose Hierarchical Meta-Embeddings (HME), which learn to combine multiple monolingual word-level and subword-level embeddings to create language-agnostic lexical representations. On the task of Named Entity Recognition for English-Spanish code-switching data, our model achieves state-of-the-art performance in the multilingual setting. We also show that, in cross-lingual settings, our model not only leverages closely related languages but also learns from languages with different roots. Finally, we show that combining different subunits is crucial for capturing code-switching entities. (Accepted by EMNLP 2019)
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/1909.08504
https://dx.doi.org/10.48550/arxiv.1909.08504
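The abstract describes combining several monolingual word-level and subword-level embeddings into one language-agnostic representation. Below is a minimal PyTorch sketch of that attention-weighted meta-embedding idea; the class name, the linear projections, the scalar attention head, and all dimensions are illustrative assumptions, not the paper's exact HME architecture (see the arXiv link above for the actual model).

```python
import torch
import torch.nn as nn

class MetaEmbeddingSketch(nn.Module):
    """Illustrative sketch: combine several monolingual embedding
    sources into one language-agnostic vector via attention."""

    def __init__(self, source_dims, out_dim):
        super().__init__()
        # Project each source (word- or subword-level, possibly of
        # different dimensionality) into a shared out_dim space.
        self.projections = nn.ModuleList(
            nn.Linear(d, out_dim) for d in source_dims
        )
        # One scalar attention score per embedding source.
        self.attn = nn.Linear(out_dim, 1)

    def forward(self, embeddings):
        # embeddings: list of tensors, each (batch, seq_len, dim_i)
        projected = torch.stack(
            [proj(e) for proj, e in zip(self.projections, embeddings)],
            dim=2,
        )  # (batch, seq_len, n_sources, out_dim)
        # Softmax over the source axis, then weighted sum.
        weights = torch.softmax(self.attn(projected), dim=2)
        return (weights * projected).sum(dim=2)

# Usage: mix English and Spanish word embeddings (300-d each) with a
# 100-d subword embedding for a code-switched input sequence.
hme = MetaEmbeddingSketch([300, 300, 100], out_dim=200)
en, es = torch.randn(2, 8, 300), torch.randn(2, 8, 300)
sub = torch.randn(2, 8, 100)
out = hme([en, es, sub])  # (2, 8, 200)
```

Projecting every source into a shared space before the softmax-weighted sum is what allows embeddings of different dimensionality and language to be mixed into a single lexical representation.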
16. GlobalTrait: Personality Alignment of Multilingual Word Embeddings
17. Learn to Code-Switch: Data Augmentation using Copy Mechanism on Language Modeling
18. Mem2Seq: Effectively Incorporating Knowledge Bases into End-to-End Task-Oriented Dialog Systems
19. Bilingual Character Representation for Efficiently Addressing Out-of-Vocabulary Words in Code-Switching Named Entity Recognition
20. Code-Switching Language Modeling using Syntax-Aware Multi-Task Learning


Facets: all 33 hits are open-access documents; no hits in catalogues, bibliographies, Linked Open Data catalogues, or other online resources.