1 | When do Contrastive Word Alignments Improve Many-to-many Neural Machine Translation? ...
2 | IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages ...
4 | IndicBART: A Pre-trained Model for Natural Language Generation of Indic Languages ...
5 | Harnessing Cross-lingual Features to Improve Cognate Detection for Low-resource Languages ...
6 | A Comprehensive Survey of Multilingual Neural Machine Translation ...
7 | Softmax Tempering for Training Neural Machine Translation Models ...
Abstract:
Neural machine translation (NMT) models are typically trained using a softmax cross-entropy loss where the softmax distribution is compared against smoothed gold labels. In low-resource scenarios, NMT models tend to over-fit because the softmax distribution quickly approaches the gold label distribution. To address this issue, we propose to divide the logits by a temperature coefficient, prior to applying softmax, during training. In our experiments on 11 language pairs in the Asian Language Treebank dataset and the WMT 2019 English-to-German translation task, we observed significant improvements in translation quality of up to 3.9 BLEU points. Furthermore, softmax tempering makes greedy search as good as beam search decoding in terms of translation quality, enabling a 1.5 to 3.5 times speed-up. We also study the impact of softmax tempering on multilingual NMT and recurrently stacked NMT, both of which aim to reduce the NMT model size through parameter sharing, thereby verifying the utility of temperature ...

Comment: The paper is about prediction smoothing for improving sequence-to-sequence performance. It is related to, but not the same as, label smoothing. Work in progress; updates with deeper analyses and comparisons to related methods to follow. Rejected from EMNLP 2020 ...
Keywords:
Artificial Intelligence (cs.AI); Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2009.09372
DOI: https://dx.doi.org/10.48550/arxiv.2009.09372
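The trick the abstract describes is easy to sketch: divide the logits by a temperature T > 1 before the softmax in the training loss, so the predicted distribution stays flatter and approaches the (smoothed) gold label distribution more slowly. Below is a minimal PyTorch sketch under our own assumptions (T = 2.0, label smoothing of 0.1, toy tensor shapes); it illustrates the technique and is not the paper's implementation.

```python
# A minimal sketch of softmax tempering, not the authors' code.
# Assumptions (ours, for illustration): PyTorch, temperature T = 2.0,
# label smoothing of 0.1 standing in for the "smoothed gold labels".
import torch
import torch.nn.functional as F

def tempered_cross_entropy(logits, gold, temperature=2.0,
                           label_smoothing=0.1, pad_index=-100):
    # Divide the logits by the temperature before the softmax inside
    # cross_entropy; T > 1 flattens the predicted distribution, so it
    # fits the gold label distribution more slowly during training.
    tempered = logits / temperature
    return F.cross_entropy(
        tempered.view(-1, tempered.size(-1)),  # (batch * seq_len, vocab)
        gold.view(-1),                          # (batch * seq_len,)
        ignore_index=pad_index,
        label_smoothing=label_smoothing,
    )

# Toy usage: batch of 2 sequences of length 5 over a 100-word vocabulary.
logits = torch.randn(2, 5, 100, requires_grad=True)
gold = torch.randint(0, 100, (2, 5))
loss = tempered_cross_entropy(logits, gold)
loss.backward()
```

Per the abstract, tempering is applied only during training; at decoding time the logits are used as usual, which is where the reported greedy-matches-beam speed-up comes from.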
9 | JASS: Japanese-specific Sequence to Sequence Pre-training for Neural Machine Translation ...
10 | Exploiting Out-of-Domain Parallel Data through Multilingual Transfer Learning for Low-Resource Neural Machine Translation ...
11 | MMCR4NLP: Multilingual Multiway Corpora Repository for Natural Language Processing ...
12 | Enabling Multi-Source Neural Machine Translation By Concatenating Source Sentences In Multiple Languages ...