
Search in the Catalogues and Directories

Page: 1 2
Hits 1 – 20 of 22

1. Estimating the Entropy of Linguistic Distributions ... (BASE)
2. On Homophony and Rényi Entropy ... (BASE)
3. On Homophony and Rényi Entropy ... (BASE)
4. On Homophony and Rényi Entropy ... (BASE)
5. Searching for Search Errors in Neural Morphological Inflection ... (BASE)
6. Revisiting the Uniform Information Density Hypothesis ... (BASE)
7. Revisiting the Uniform Information Density Hypothesis ... (BASE)
8. Conditional Poisson Stochastic Beams ... (BASE)
9. Language Model Evaluation Beyond Perplexity ... (BASE)
10. A surprisal-duration trade-off across and within the world's languages ... (BASE)
11. Determinantal Beam Search ... (BASE)
12. Is Sparse Attention more Interpretable? ... (BASE)
13. Revisiting the Uniform Information Density Hypothesis ... (BASE)
14. A Plug-and-Play Method for Controlled Text Generation ... (BASE)
Abstract: Large pre-trained language models have repeatedly shown their ability to produce fluent text. Yet even when starting from a prompt, generation can continue in many plausible directions. Current decoding methods aimed at controlling generation, e.g., at ensuring that specific words are included, either require additional models or fine-tuning, or work poorly when the task at hand is semantically unconstrained, e.g., story generation. In this work, we present a plug-and-play decoding method for controlled language generation that is so simple and intuitive it can be described in a single sentence: given a topic or keyword, we add a shift to the probability distribution over our vocabulary towards semantically similar words. We show how annealing this distribution can be used to impose hard constraints on language generation, something no other plug-and-play method is currently able to do with SOTA language generators. Despite the simplicity of this approach, we see that it works incredibly well in practice ...
Published in: Findings of the Association for Computational Linguistics: EMNLP 2021
URL: http://hdl.handle.net/20.500.11850/518988
DOI: https://dx.doi.org/10.3929/ethz-b-000518988
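The abstract above describes the method concretely: add a keyword-similarity shift to the vocabulary distribution, and anneal the shift strength to turn the soft topical preference into a hard constraint. A minimal illustrative sketch of that idea, assuming precomputed cosine similarities between each vocabulary word and the keyword (the function name, toy logits, and similarity values below are hypothetical, not taken from the paper):

```python
import numpy as np

def shifted_distribution(logits, similarity, lam):
    """Shift vocabulary logits toward words similar to a keyword.

    logits     -- unnormalized model scores over the vocabulary
    similarity -- cosine similarity of each vocab word to the keyword
    lam        -- shift strength; annealing lam upward over decoding
                  steps pushes the distribution toward the constraint
    """
    shifted = logits + lam * similarity
    # numerically stable softmax over the shifted scores
    e = np.exp(shifted - shifted.max())
    return e / e.sum()

# Toy vocabulary of 4 words; word 2 is most similar to the keyword.
logits = np.array([2.0, 1.0, 0.5, 0.0])
sim = np.array([0.1, 0.2, 0.9, 0.0])

p_soft = shifted_distribution(logits, sim, lam=1.0)   # gentle topical bias
p_hard = shifted_distribution(logits, sim, lam=50.0)  # annealed: near one-hot
```

With a small `lam` the model's own preferences still dominate; as `lam` grows, probability mass concentrates on the keyword-similar word, which is the annealing-to-hard-constraint behavior the abstract claims.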
15. Language Model Evaluation Beyond Perplexity ... (BASE)
Meister, Clara Isabel; Cotterell, Ryan. ETH Zurich, 2021
16. Determinantal Beam Search ... (BASE)
17. Is Sparse Attention more Interpretable? ... (BASE)
18. A Cognitive Regularizer for Language Modeling ... (BASE)
19. A Cognitive Regularizer for Language Modeling ... (BASE)
20. A Cognitive Regularizer for Language Modeling ... (BASE)


Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 22
© 2013 - 2024 Lin|gu|is|tik