1. Self-Training Sampling with Monolingual Data Uncertainty for Neural Machine Translation ...

2. Self-Training Sampling with Monolingual Data Uncertainty for Neural Machine Translation ...

3. On the Copying Behaviors of Pre-Training for Neural Machine Translation ...

4. Multi-Task Learning with Shared Encoder for Non-Autoregressive Machine Translation ...

5. On the Inference Calibration of Neural Machine Translation ...

6. EmpDG: Multi-resolution Interactive Empathetic Dialogue Generation ...
7. On the Sparsity of Neural Machine Translation Models ...
Abstract: Modern neural machine translation (NMT) models employ a large number of parameters, which leads to serious over-parameterization and typically causes the underutilization of computational resources. In response to this problem, we empirically investigate whether the redundant parameters can be reused to achieve better performance. Experiments and analyses are systematically conducted on different datasets and NMT architectures. We show that: 1) the pruned parameters can be rejuvenated to improve the baseline model by up to +0.8 BLEU points; and 2) the rejuvenated parameters are reallocated to enhance the ability of modeling low-level lexical information. ... (EMNLP 2020)

Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences

URL: https://dx.doi.org/10.48550/arxiv.2010.02646 ; https://arxiv.org/abs/2010.02646
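The prune-then-rejuvenate recipe this abstract describes can be illustrated in a few lines. Below is a minimal, self-contained sketch, not the authors' actual procedure: a toy linear layer stands in for an NMT model, and the 50% pruning ratio and Xavier re-initialization are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for an over-parameterized NMT model.
model = nn.Linear(16, 16)

with torch.no_grad():
    # Step 1: magnitude pruning -- zero the 50% of weights with smallest |w|.
    w = model.weight
    k = w.numel() // 2                      # assumed pruning ratio
    threshold = w.abs().flatten().kthvalue(k).values
    pruned = w.abs() <= threshold           # True where a weight is pruned
    w[pruned] = 0.0                         # the pruned (sparse) network

    # Step 2: rejuvenation -- re-initialize the pruned slots instead of
    # discarding them, so the freed capacity can be trained again.
    fresh = torch.empty_like(w)
    nn.init.xavier_uniform_(fresh)          # assumed re-init scheme
    w[pruned] = fresh[pruned]

# Step 3: continue training the full, rejuvenated model; one dummy step here.
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 16), torch.randn(8, 16)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
opt.step()
print(f"rejuvenated {pruned.sum().item()} of {w.numel()} weights, loss={loss.item():.4f}")
```

The paper's full recipe interleaves training with pruning and rejuvenation over many steps; the sketch compresses everything into a single pass to keep the idea visible.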
8. Assessing the Bilingual Knowledge Learned by Neural Machine Translation Models ...

9. Understanding and Improving Lexical Choice in Non-Autoregressive Translation ...

10. Information Aggregation for Multi-Head Attention with Routing-by-Agreement ...

11. Neuron Interaction Based Representation Composition for Neural Machine Translation ...

12. Multi-Granularity Self-Attention for Neural Machine Translation ...

13. Towards Understanding Neural Machine Translation with Word Importance ...

14. Towards Better Modeling Hierarchical Structure for Self-Attention with Ordered Neurons ...
15. Translating pro-drop languages with reconstruction models

Wang, Longyue (ORCID: 0000-0002-9062-6183), Tu, Zhaopeng, Shi, Shuming, Zhang, Tong, Graham, Yvette and Liu, Qun (ORCID: 0000-0002-7000-1792) (2018) Translating pro-drop languages with reconstruction models. In: Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), 2–7 Feb 2018, New Orleans, LA, USA. ISBN 978-1-57735-800-8.
16. Translating pro-drop languages with reconstruction models

Wang, Longyue (ORCID: 0000-0002-9062-6183), Tu, Zhaopeng, Shi, Shuming, Zhang, Tong, Graham, Yvette and Liu, Qun (ORCID: 0000-0002-7000-1792) (2018) Translating pro-drop languages with reconstruction models. In: 32nd AAAI Conference on Artificial Intelligence (AAAI 2018), 2–7 Feb 2018, New Orleans, LA, USA. ISBN 978-1-57735-800-8.
17. Translating Pro-Drop Languages with Reconstruction Models ...
18. Exploiting Deep Representations for Neural Machine Translation ...
20. Exploiting cross-sentence context for neural machine translation

Wang, Longyue (ORCID: 0000-0002-9062-6183), Tu, Zhaopeng, Way, Andy (ORCID: 0000-0001-5736-5930) and Liu, Qun (ORCID: 0000-0002-7000-1792) (2017) Exploiting cross-sentence context for neural machine translation. In: 2017 Conference on Empirical Methods in Natural Language Processing, 7–8 Sept 2017, Copenhagen, Denmark.