
Search in the Catalogues and Directories

Hits 1 – 14 of 14

1. AraBART: a Pretrained Arabic Sequence-to-Sequence Model for Abstractive Summarization ... (BASE)
2. NADI 2021: The Second Nuanced Arabic Dialect Identification Shared Task ... (BASE)
3. The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models ... (BASE)
4. Morphosyntactic Tagging with Pre-trained Language Models for Arabic and its Dialects ... (BASE)
5. NADI 2020: The First Nuanced Arabic Dialect Identification Shared Task ... (BASE)
6. A Panoramic Survey of Natural Language Processing in the Arab World ... (BASE)
7. Adversarial Multitask Learning for Joint Multi-Feature and Multi-Dialect Morphological Modeling ...
Zalmout, Nasser; Habash, Nizar. arXiv, 2019. (BASE)
8. Joint Diacritization, Lemmatization, Normalization, and Fine-Grained Morphological Tagging ...
Zalmout, Nasser; Habash, Nizar. arXiv, 2019. (BASE)
9. MADARi: A Web Interface for Joint Arabic Morphological Annotation and Spelling Correction ... (BASE)
10. Utilizing Character and Word Embeddings for Text Normalization with Sequence-to-Sequence Models ... (BASE)
11. Low Resourced Machine Translation via Morpho-syntactic Modeling: The Case of Dialectal Arabic ... (BASE)
Abstract: We present the second ever evaluated Arabic dialect-to-dialect machine translation effort, and the first to leverage external resources beyond a small parallel corpus. The subject has not previously received serious attention due to lack of naturally occurring parallel data; yet its importance is evidenced by dialectal Arabic's wide usage and breadth of inter-dialect variation, comparable to that of Romance languages. Our results suggest that modeling morphology and syntax significantly improves dialect-to-dialect translation, though optimizing such data-sparse models requires consideration of the linguistic differences between dialects and the nature of available data and resources. On a single-reference blind test set where untranslated input scores 6.5 BLEU and a model trained only on parallel data reaches 14.6, pivot techniques and morphosyntactic modeling significantly improve performance to 17.5. ...
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/1712.06273
DOI: https://dx.doi.org/10.48550/arxiv.1712.06273
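For context on the BLEU figures quoted in this abstract (6.5 untranslated, 14.6 parallel-only, 17.5 with pivoting and morphosyntactic modeling), the following is a minimal sketch of corpus-level BLEU scoring with the sacrebleu Python library; the sentences are hypothetical placeholders, not data from the paper's blind test set.

    # Minimal BLEU scoring sketch (hypothetical data, not the paper's test set).
    import sacrebleu

    # System outputs, e.g. dialect-to-dialect MT hypotheses.
    hypotheses = [
        "the boy went to the market",
        "she is reading a book now",
    ]

    # Single-reference setup, mirroring the single-reference blind test
    # mentioned in the abstract; sacrebleu takes one list per reference set.
    references = [[
        "the boy went to the market",
        "she reads a book now",
    ]]

    # Corpus-level BLEU over all sentence pairs.
    bleu = sacrebleu.corpus_bleu(hypotheses, references)
    print(f"BLEU = {bleu.score:.1f}")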
12. Egyptian Arabic to English Statistical Machine Translation System for NIST OpenMT'2015 ... (BASE)
13. A Large Scale Corpus of Gulf Arabic ... (BASE)
14. LDC Arabic Treebanks and Associated Corpora: Data Divisions Manual ... (BASE)

Sources: Catalogues 0; Bibliographies 0; Linked Open Data catalogues 0; Online resources 0; Open access documents 14