81. XCOPA: A multilingual dataset for causal commonsense reasoning (BASE)
82. Improving bilingual lexicon induction with unsupervised post-processing of monolingual word vector spaces (BASE)
83. SemEval-2020 Task 2: Predicting multilingual and cross-lingual (graded) lexical entailment (BASE)
84. Modeling Language Variation and Universals: A Survey on Typological Linguistics for Natural Language Processing (BASE)
    In: Computational Linguistics, MIT Press, 2019, 45(3), pp. 559–601. ISSN 0891-2017; EISSN 1530-9312. DOI: 10.1162/coli_a_00357.
    https://hal.archives-ouvertes.fr/hal-02425462 ; https://www.mitpressjournals.org/doi/abs/10.1162/coli_a_00357
86. Show Some Love to Your n-grams: A Bit of Progress and Stronger n-gram Language Modeling Baselines (BASE)
    Abstract: In recent years, neural language models (LMs) have set state-of-the-art performance on several benchmark datasets. While the reasons for their success and their computational demands are well documented, a comparison between neural models and more recent developments in n-gram models has been neglected. In this paper, we examine the recent progress in the n-gram literature, running experiments on 50 languages covering all morphological language families. Experimental results show that a simple extension of Modified Kneser-Ney outperforms an LSTM language model on 42 languages, while a word-level Bayesian n-gram LM outperforms the character-aware neural model on average across all languages, and outperforms its extension, which explicitly injects linguistic knowledge, on 8 languages. Further experiments on larger Europarl datasets for 3 languages indicate that neural architectures are able to outperform the computationally much cheaper n-gram models: n-gram training is up to 15,000 times quicker. Our experiments illustrate that ...
    URL: https://www.repository.cam.ac.uk/handle/1810/292617 https://dx.doi.org/10.17863/cam.39778
87. Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity (BASE)
88. Do We Really Need Fully Unsupervised Cross-Lingual Embeddings? (BASE)
89. A neural classification method for supporting the creation of BioVerbNet (BASE)
91. Investigating cross-lingual alignment methods for contextualized embeddings with token-level evaluation (BASE)
93. Second-order contexts from lexical substitutes for few-shot learning of word representations (BASE)
95. Enhancing biomedical word embeddings by retrofitting to verb clusters (BASE)