
Search in the Catalogues and Directories

Hits 1 – 13 of 13

1
Between words and characters: A Brief History of Open-Vocabulary Modeling and Tokenization in NLP
In: https://hal.inria.fr/hal-03540069 (2022)
2
SIGMORPHON 2020 Shared Task 0: Typologically Diverse Morphological Inflection
3
SIGTYP 2020 Shared Task: Prediction of Typological Features
4
It’s Easier to Translate out of English than into it: Measuring Neural Translation Difficulty by Cross-Mutual Information
5
Linguistic calibration through metacognition: aligning dialogue agent responses with expected correctness
6
Processing South Asian Languages Written in the Latin Script: the Dakshina Dataset
7
It’s Easier to Translate out of English than into it: Measuring Neural Translation Difficulty by Cross-Mutual Information
In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (2020)
8
UniMorph 3.0: Universal Morphology
In: Proceedings of the 12th Language Resources and Evaluation Conference (2020)
9
UniMorph 3.0: Universal Morphology
10
The SIGMORPHON 2019 Shared Task: Morphological Analysis in Context and Cross-Lingual Transfer for Inflection
11
Spell Once, Summon Anywhere: A Two-Level Open-Vocabulary Language Model
Mielke, Sabrina J.; Eisner, Jason. arXiv, 2018
Abstract: We show how the spellings of known words can help us deal with unknown words in open-vocabulary NLP tasks. The method we propose can be used to extend any closed-vocabulary generative model, but in this paper we specifically consider the case of neural language modeling. Our Bayesian generative story combines a standard RNN language model (generating the word tokens in each sentence) with an RNN-based spelling model (generating the letters in each word type). These two RNNs respectively capture sentence structure and word structure, and are kept separate as in linguistics. By invoking the second RNN to generate spellings for novel words in context, we obtain an open-vocabulary language model. For known words, embeddings are naturally inferred by combining evidence from type spelling and token context. Comparing to baselines (including a novel strong baseline), we beat previous work and establish state-of-the-art results on multiple datasets. (Accepted for publication at AAAI 2019; a minimal illustrative sketch of this two-level setup appears after the results list.)
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/1804.08205
https://dx.doi.org/10.48550/arxiv.1804.08205
12
Are All Languages Equally Hard to Language-Model?
13
Unsupervised Disambiguation of Syncretism in Inflected Lexicons
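
Entry 11's abstract describes its architecture concretely enough to sketch: a word-level RNN generates the tokens of each sentence, a separate character-level RNN generates the letters of each word type, and the two combine into one open-vocabulary likelihood. Below is a minimal illustrative sketch of that two-level idea in PyTorch. It is not the authors' implementation; the class and function names (SpellingModel, WordLevelLM, sentence_logprob) are hypothetical, and the Bayesian type/token coupling and embedding inference the abstract mentions are omitted.

import torch
import torch.nn as nn

class SpellingModel(nn.Module):
    """Character-level RNN: scores the letters of one word type."""
    def __init__(self, n_chars, char_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(n_chars, char_dim)
        self.rnn = nn.GRU(char_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_chars)

    def forward(self, char_ids):                  # (batch, length)
        h, _ = self.rnn(self.embed(char_ids))     # (batch, length, hidden)
        return self.out(h)                        # next-character logits

class WordLevelLM(nn.Module):
    """Word-level RNN over a closed vocabulary plus one UNK slot."""
    def __init__(self, vocab_size, word_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, word_dim)
        self.rnn = nn.GRU(word_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word_ids):                  # (batch, length)
        h, _ = self.rnn(self.embed(word_ids))
        return self.out(h)                        # next-word logits (incl. UNK)

def sentence_logprob(word_lm, speller, word_ids, unk_spellings):
    """Two-level log-likelihood: word-level log-probs for every token,
    plus the speller's log-probs for the letters of each novel (UNK) word."""
    logp = torch.log_softmax(word_lm(word_ids[:, :-1]), dim=-1)
    targets = word_ids[:, 1:]
    total = logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1).sum()
    for chars in unk_spellings:                   # one (1, length) tensor per UNK token
        clogp = torch.log_softmax(speller(chars[:, :-1]), dim=-1)
        total = total + clogp.gather(-1, chars[:, 1:].unsqueeze(-1)).squeeze(-1).sum()
    return total

Keeping the two RNNs separate, as the abstract notes, means sentence structure and word structure are modeled independently; spelling a novel word at the type level is what makes the vocabulary open.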

Hits by source: Open access documents 13; Catalogues, Bibliographies, Linked Open Data catalogues, and Online resources 0.