
Search in the Catalogues and Directories

Hits 1 – 12 of 12

1. AUTOLEX: An Automatic Framework for Linguistic Exploration ... (BASE)
2. Evaluating the Morphosyntactic Well-formedness of Generated Texts ... (BASE)
3. Evaluating the Morphosyntactic Well-formedness of Generated Texts ... (BASE)
4. Do Context-Aware Translation Models Pay the Right Attention? ... (BASE)
5. When is Wall a Pared and when a Muro? -- Extracting Rules Governing Lexical Selection ... (BASE)
6. When is Wall a Pared and when a Muro?: Extracting Rules Governing Lexical Selection ... (BASE)
7. Do Context-Aware Translation Models Pay the Right Attention? ... (BASE)
8. DICT-MLM: Improved Multilingual Pre-Training using Bilingual Dictionaries ... (BASE)
9. SIGTYP 2020 Shared Task: Prediction of Typological Features ... (BASE)
10. Automatic Extraction of Rules Governing Morphological Agreement ... (BASE)
11. A Summary of the First Workshop on Language Technology for Language Documentation and Revitalization ... (BASE)
12. Adapting Word Embeddings to New Languages with Morphological and Phonological Subword Representations ... (BASE)
Abstract: Much work in Natural Language Processing (NLP) has been for resource-rich languages, making generalization to new, less-resourced languages challenging. We present two approaches for improving generalization to low-resourced languages by adapting continuous word representations using linguistically motivated subword units: phonemes, morphemes and graphemes. Our method requires neither parallel corpora nor bilingual dictionaries and provides a significant gain in performance over previous methods relying on these resources. We demonstrate the effectiveness of our approaches on Named Entity Recognition for four languages, namely Uyghur, Turkish, Bengali and Hindi, of which Uyghur and Bengali are low resource languages, and also perform experiments on Machine Translation. Exploiting subwords with transfer learning gives us a boost of +15.2 NER F1 for Uyghur and +9.7 F1 for Bengali. We also show improvements in the monolingual setting where we achieve (avg.) +3 F1 and (avg.) +1.35 BLEU. (Accepted at EMNLP 2018)
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/1808.09500
https://dx.doi.org/10.48550/arxiv.1808.09500
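The abstract of hit 12 describes composing word representations from linguistically motivated subword units (phonemes, morphemes, graphemes) so that words of a new, low-resource language can reuse subword vectors shared with a related, better-resourced language. The following is a minimal, hypothetical sketch of that subword-composition idea, using character n-grams as a stand-in for grapheme units; the function names, embedding dimension, and averaging scheme are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumptions, not the paper's code): build a word vector by
# averaging the vectors of its character n-gram subword units, so unseen
# words of a related low-resource language still get representations.
import numpy as np

EMB_DIM = 64
rng = np.random.default_rng(0)

# Subword embedding table; in a real setup these vectors would be trained on
# the higher-resource language. Here unseen subwords get random vectors.
subword_vectors = {}

def subwords(word, n=3):
    """Split a word into overlapping character n-grams (grapheme units)."""
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def word_vector(word):
    """Average the vectors of a word's subword units."""
    vecs = []
    for sw in subwords(word):
        if sw not in subword_vectors:
            subword_vectors[sw] = rng.normal(scale=0.1, size=EMB_DIM)
        vecs.append(subword_vectors[sw])
    return np.mean(vecs, axis=0)

# A morphological variant unseen during training still lands near its stem,
# because the two forms share most character n-grams -- the property the
# abstract exploits for transfer to low-resource languages.
v_stem = word_vector("duvar")      # Turkish "wall"
v_plural = word_vector("duvarlar") # plural form, sharing most subwords
cos = float(v_stem @ v_plural / (np.linalg.norm(v_stem) * np.linalg.norm(v_plural)))
print(f"cosine similarity between shared-subword forms: {cos:.2f}")
```

In this toy version the shared trigrams dominate both averages, so the two forms receive similar vectors; the paper's approach additionally uses morpheme and phoneme units and transfer learning, which the sketch does not attempt to reproduce.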

Results by source: Catalogues 0; Bibliographies 0; Linked Open Data catalogues 0; Online resources 0; Open access documents 12