
Search in the Catalogues and Directories

Hits 1 – 8 of 8

1
Similarity Analysis of Contextual Word Representation Models ...
BASE
2
On the Linguistic Representational Power of Neural Machine Translation Models
In: Computational Linguistics, Vol 46, Iss 1, Pp 1-52 (2020)
Abstract: Despite the recent success of deep neural networks in natural language processing and other spheres of artificial intelligence, their interpretability remains a challenge. We analyze the representations learned by neural machine translation (NMT) models at various levels of granularity and evaluate their quality through relevant extrinsic properties. In particular, we seek answers to the following questions: (i) How accurately is word structure captured within the learned representations, which is an important aspect in translating morphologically rich languages? (ii) Do the representations capture long-range dependencies, and effectively handle syntactically divergent languages? (iii) Do the representations capture lexical semantics? We conduct a thorough investigation along several parameters: (i) Which layers in the architecture capture each of these linguistic phenomena? (ii) How does the choice of translation unit (word, character, or subword unit) impact the linguistic properties captured by the underlying representations? (iii) Do the encoder and decoder learn differently and independently? (iv) Do the representations learned by multilingual NMT models capture the same amount of linguistic information as their bilingual counterparts? Our data-driven, quantitative evaluation illuminates important aspects of NMT models and their ability to capture various linguistic phenomena. We show that deep NMT models trained in an end-to-end fashion, without being provided any direct supervision during the training process, learn a non-trivial amount of linguistic information. Notable findings include the following observations: (i) Word morphology and part-of-speech information are captured at the lower layers of the model; (ii) In contrast, lexical semantics and non-local syntactic and semantic dependencies are better represented at the higher layers of the model; (iii) Representations learned using characters are more informed about word morphology than those learned using subword units; and (iv) Representations learned by multilingual models are richer than those of bilingual models. (A minimal sketch of this layer-wise probing idea follows the hit list below.)
Keyword: Computational linguistics. Natural language processing; P98-98.5
URL: https://doi.org/10.1162/coli_a_00367
https://doaj.org/article/0f4a3f344db6432ba02ec4d3a127e34d
BASE
3
On the Linguistic Representational Power of Neural Machine Translation Models ...
BASE
4
Improving Neural Language Models by Segmenting, Attending, and Predicting the Future ...
BASE
5
Analysis Methods in Neural Language Processing: A Survey
In: Transactions of the Association for Computational Linguistics, Vol 7, Pp 49-72 (2019)
BASE
6
What Is One Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models ...
BASE
7
Identifying and Controlling Important Neurons in Neural Machine Translation ...
BASE
8
A Character-level Convolutional Neural Network for Distinguishing Similar Languages and Dialects ...
Belinkov, Yonatan; Glass, James. arXiv, 2016
BASE
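The abstract for hit 2 describes a layer-wise probing methodology: freeze a trained NMT model, extract the per-token hidden state of every encoder layer, and train a lightweight classifier to predict a linguistic property (such as part-of-speech tags) from those frozen representations; higher probe accuracy at a layer is read as that layer encoding more of the property. The sketch below illustrates this idea in Python under stated assumptions; it is not the paper's code. Random features stand in for real encoder states, and NUM_LAYERS, HIDDEN_DIM, NUM_TAGS, and the POS task are illustrative choices.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

NUM_LAYERS = 4     # encoder depth (illustrative assumption)
HIDDEN_DIM = 512   # per-token representation size (illustrative assumption)
NUM_TOKENS = 2000  # number of tokens in the probing dataset
NUM_TAGS = 12      # size of the POS tagset (illustrative assumption)

# Stand-in for "run a corpus through a frozen NMT encoder and collect the
# hidden state of every token at every layer". Random features are used
# here so the sketch is self-contained; they yield chance-level accuracy.
layer_reprs = [rng.normal(size=(NUM_TOKENS, HIDDEN_DIM))
               for _ in range(NUM_LAYERS)]
pos_tags = rng.integers(0, NUM_TAGS, size=NUM_TOKENS)  # gold label per token

for layer, X in enumerate(layer_reprs):
    X_train, X_test, y_train, y_test = train_test_split(
        X, pos_tags, test_size=0.2, random_state=0
    )
    probe = LogisticRegression(max_iter=1000)  # simple linear probe
    probe.fit(X_train, y_train)
    print(f"layer {layer}: probe accuracy = {probe.score(X_test, y_test):.3f}")

With real encoder activations in place of the random features, comparing these per-layer accuracies is the kind of evidence behind conclusions such as "morphology is captured at the lower layers".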

All 8 hits are open access documents; there are no hits in catalogues, bibliographies, Linked Open Data catalogues, or online resources.