1. Better Neural Machine Translation by Extracting Linguistic Information from BERT ...
2. Multi-class Multilingual Classification of Wikipedia Articles Using Extended Named Entity Tag Set ...
3. Pointer-based Fusion of Bilingual Lexicons into Neural Machine Translation ...
4. Top-down Tree Structured Decoding with Syntactic Connections for Neural Machine Translation and Parsing ...
Abstract:
The addition of syntax-aware decoding to Neural Machine Translation (NMT) systems requires an effective tree-structured neural network, a syntax-aware attention model, and a language generation model that is sensitive to sentence structure. We exploit a top-down tree-structured model called DRNN (Doubly-Recurrent Neural Networks), first proposed by Alvarez-Melis and Jaakkola (2017), to create an NMT model called Seq2DRNN that combines a sequential encoder with tree-structured decoding augmented by a syntax-aware attention model. Unlike previous approaches to syntax-based NMT, which use dependency parsing models, our method uses constituency parsing, which we argue provides useful information for translation. In addition, we use the syntactic structure of the sentence to add new connections to the tree-structured decoder neural network (Seq2DRNN+SynC). We compare our NMT model with sequential and state-of-the-art syntax-based NMT models and show that our model produces more fluent translations with better ...
Comment: Accepted as an EMNLP 2018 Long Paper ... (A minimal code sketch of the doubly-recurrent decoder cell follows this record.)
Keyword:
Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://dx.doi.org/10.48550/arxiv.1809.01854
https://arxiv.org/abs/1809.01854
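As a rough illustration of the decoding mechanism the abstract describes, below is a minimal sketch of a doubly-recurrent decoder cell in PyTorch. It follows the general DRNN idea of Alvarez-Melis and Jaakkola (2017): each tree node receives an ancestral state from its parent and a fraternal state from its previous sibling, and the two are merged to predict the node's label. The class name, the choice of GRU cells, the tensor sizes, and the omission of the paper's syntax-aware attention and SynC connections are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DoublyRecurrentCell(nn.Module):
    """Hypothetical sketch of a doubly-recurrent (DRNN-style) decoder cell.

    Each node in the output tree combines two recurrences:
      - ancestral: state flowing from parent to child
      - fraternal: state flowing from a node to its next sibling
    """

    def __init__(self, emb_size: int, hidden_size: int, vocab_size: int):
        super().__init__()
        self.ancestral = nn.GRUCell(emb_size, hidden_size)  # parent -> child recurrence
        self.fraternal = nn.GRUCell(emb_size, hidden_size)  # sibling -> sibling recurrence
        self.combine = nn.Linear(2 * hidden_size, hidden_size)
        self.label = nn.Linear(hidden_size, vocab_size)     # predicts the node's symbol

    def forward(self, parent_emb, parent_state, sibling_emb, sibling_state):
        h_a = self.ancestral(parent_emb, parent_state)      # updated ancestral state
        h_f = self.fraternal(sibling_emb, sibling_state)    # updated fraternal state
        h = torch.tanh(self.combine(torch.cat([h_a, h_f], dim=-1)))
        return self.label(h), h_a, h_f  # logits for this node, states passed down and across

# Usage: expanding the first child of the root (zero vectors stand in for
# the encoder summary and the "no previous sibling" case).
cell = DoublyRecurrentCell(emb_size=32, hidden_size=64, vocab_size=1000)
parent_emb = torch.zeros(1, 32)
parent_state = torch.zeros(1, 64)
sibling_emb = torch.zeros(1, 32)
sibling_state = torch.zeros(1, 64)
logits, h_a, h_f = cell(parent_emb, parent_state, sibling_emb, sibling_state)
```

In the full model, the sequential encoder's attention context would feed into the combined state at each node, and the SynC connections described in the abstract would add further links along the constituency structure; both are left out here to keep the sketch self-contained.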