7. Towards Zero-shot Language Modeling ...

Abstract:
Can we construct a neural model that is inductively biased towards learning human languages? Motivated by this question, we aim to construct an informative prior over neural weights, in order to adapt quickly to held-out languages in the task of character-level language modeling. We infer this distribution from a sample of typologically diverse training languages via Laplace approximation. The use of such a prior outperforms baseline models with an uninformative prior (so-called "fine-tuning") in both zero-shot and few-shot settings. This shows that the prior is imbued with universal phonological knowledge. Moreover, we harness additional language-specific side information as distant supervision for held-out languages. Specifically, we condition language models on features from typological databases by concatenating them to hidden states or generating weights with hyper-networks. These features appear beneficial in the few-shot setting, but not in the zero-shot setting. Since the paucity of digital texts ...
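The abstract names two concrete mechanisms that a short sketch can make tangible. First, the Laplace-approximated prior over weights: below is a minimal sketch assuming a diagonal Fisher approximation (one common Laplace variant; the paper's exact covariance structure is not specified here). The model, data loader, and function names are hypothetical, not the authors' code.

```python
import torch
import torch.nn.functional as F

def diagonal_fisher(model, data_loader):
    """Estimate a diagonal Fisher information matrix at the current
    (MAP) weights from squared gradients of the log-likelihood."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for chars, targets in data_loader:
        model.zero_grad()
        logits = model(chars)
        loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}

def laplace_prior_penalty(model, map_params, fisher, scale=1.0):
    """Quadratic penalty induced by a Gaussian prior N(w_MAP, F^-1);
    added to the LM loss when adapting to a held-out language."""
    penalty = torch.zeros(())
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - map_params[n]) ** 2).sum()
    return 0.5 * scale * penalty
```

Second, conditioning on typological features, either by concatenating a feature vector to the hidden state or by letting a hyper-network generate weights from it. The sketch below uses illustrative dimensions and, for brevity, generates only a per-language output bias as the hyper-network's target, a simplification of generating full weight matrices.

```python
import torch
import torch.nn as nn

class TypologyConditionedLM(nn.Module):
    def __init__(self, vocab_size, hidden_dim, feat_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.rnn = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        # (a) concatenation: project [hidden; typology] to the vocabulary
        self.out = nn.Linear(hidden_dim + feat_dim, vocab_size)
        # (b) hyper-network: generate a per-language output bias from features
        self.hyper = nn.Linear(feat_dim, vocab_size)

    def forward(self, chars, typology_feats):
        h, _ = self.rnn(self.embed(chars))                 # (B, T, H)
        feats = typology_feats.unsqueeze(1).expand(-1, h.size(1), -1)
        logits = self.out(torch.cat([h, feats], dim=-1))   # concatenation path
        logits = logits + self.hyper(typology_feats).unsqueeze(1)  # hyper path
        return logits
```
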
Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences
URL: https://arxiv.org/abs/2108.03334
DOI: https://dx.doi.org/10.48550/arxiv.2108.03334
Source: BASE

9. Finding Concept-specific Biases in Form–Meaning Associations ...
Source: BASE

10. Searching for Search Errors in Neural Morphological Inflection ...
Source: BASE

11. Applying the Transformer to Character-level Transduction ...
Source: BASE

12. Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models ...
Source: BASE

17. Examining the Inductive Bias of Neural Language Models with Artificial Languages ...
Source: BASE