
Search in the Catalogues and Directories

Hits 1 – 4 of 4

1. Language Model Evaluation Beyond Perplexity ...
BASE
2. Determinantal Beam Search ...
BASE
3. Is Sparse Attention more Interpretable? ...
BASE
4. A Cognitive Regularizer for Language Modeling ...
Read paper: https://www.aclanthology.org/2021.acl-long.404
Abstract: The uniform information density (UID) hypothesis, which posits that speakers behaving optimally tend to distribute information uniformly across a linguistic signal, has gained traction in psycholinguistics as an explanation for certain syntactic, morphological, and prosodic choices. In this work, we explore whether the UID hypothesis can be operationalized as an inductive bias for statistical language modeling. Specifically, we augment the canonical MLE objective for training language models with a regularizer that encodes UID. In experiments on ten languages spanning five language families, we find that using UID regularization consistently improves perplexity in language models, having a larger effect when training data is limited. Moreover, via an analysis of generated sequences, we find that UID-regularized language models have other desirable properties, e.g., they generate text that is more lexically diverse. Our results not only ...
Keyword: Cognitive Linguistics; Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
URL: https://underline.io/lecture/25822-a-cognitive-regularizer-for-language-modeling
https://dx.doi.org/10.48448/y299-yz80
BASE
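The abstract describes augmenting the canonical MLE objective with a regularizer that encodes UID. Below is a minimal sketch of one plausible operationalization: penalizing the variance of per-token surprisals so that information is spread evenly across the sequence. The function name, the weight beta, and the variance-based penalty are illustrative assumptions, not necessarily the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def uid_regularized_loss(logits, targets, beta=0.1, ignore_index=-100):
        """Cross-entropy (MLE) loss plus an illustrative UID regularizer.

        The regularizer penalizes the variance of per-token surprisals,
        encouraging information to be distributed uniformly across the
        sequence. beta trades off MLE fit against uniformity (assumed value).
        """
        # Per-token surprisal -log p(x_t | x_<t); cross_entropy expects
        # (batch, vocab, seq), so transpose the (batch, seq, vocab) logits.
        surprisal = F.cross_entropy(
            logits.transpose(1, 2), targets,
            reduction="none", ignore_index=ignore_index,
        )
        mask = (targets != ignore_index).float()
        n_tokens = mask.sum()
        # Standard LM loss: mean surprisal over non-padding tokens.
        mle = (surprisal * mask).sum() / n_tokens
        # UID penalty: variance of surprisal over the same tokens.
        variance = ((surprisal - mle) ** 2 * mask).sum() / n_tokens
        return mle + beta * variance

    # Example: batch of 2 sequences, length 8, vocabulary of 100 tokens.
    logits = torch.randn(2, 8, 100, requires_grad=True)
    targets = torch.randint(0, 100, (2, 8))
    loss = uid_regularized_loss(logits, targets)
    loss.backward()

In a training loop this would replace the plain cross-entropy call; per the abstract, the regularizer's effect is largest when training data is limited.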

Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 4