
Search in the Catalogues and Directories

Hits 1 – 4 of 4

1
Differentiable Subset Pruning of Transformer Heads ...
BASE
2
Vision Matters When It Should: Sanity Checking Multimodal Machine Translation Models ...
BASE
3
Differentiable subset pruning of transformer heads ...
Li, Jiaoda; Cotterell, Ryan; Sachan, Mrinmaya. - : ETH Zurich, 2021
Abstract: Multi-head attention, a collection of several attention mechanisms that independently attend to different parts of the input, is the key ingredient in the Transformer. Recent work has shown, however, that a large proportion of the heads in a Transformer's multi-head attention mechanism can be safely pruned away without significantly harming the performance of the model; such pruning leads to models that are noticeably smaller and faster in practice. Our work introduces a new head pruning technique that we term differentiable subset pruning. Intuitively, our method learns per-head importance variables and then enforces a user-specified hard constraint on the number of unpruned heads. The importance variables are learned via stochastic gradient descent. We conduct experiments on natural language inference and machine translation; we show that differentiable subset pruning performs comparably to or better than previous work while offering precise control of the sparsity level. (A brief code sketch of this pruning scheme follows the hits list.)
In: Transactions of the Association for Computational Linguistics, 9 (2021)
URL: http://hdl.handle.net/20.500.11850/528141
https://dx.doi.org/10.3929/ethz-b-000528141
BASE
4
Differentiable subset pruning of transformer heads
In: Transactions of the Association for Computational Linguistics, 9 (2021)
BASE
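The abstract of hit 3 describes the method only in outline: learn a per-head importance variable by stochastic gradient descent while enforcing a hard, user-specified budget of k unpruned heads. Below is a minimal sketch of that idea, assuming PyTorch. It is not the authors' implementation; the paper's actual differentiable relaxation of the top-k constraint differs, and a straight-through top-k gate stands in for it here. The names HeadGate, n_heads, k, and tau are illustrative.

import torch
import torch.nn as nn


class HeadGate(nn.Module):
    """Per-head importance logits under a hard top-k constraint on how many
    heads stay unpruned. Gradients reach the logits through a soft gate."""

    def __init__(self, n_heads: int, k: int, tau: float = 1.0):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_heads))  # learned importance
        self.k = k      # user-specified number of heads to keep
        self.tau = tau  # temperature of the soft gate

    def forward(self) -> torch.Tensor:
        # Hard 0/1 mask that keeps exactly the k highest-scoring heads.
        keep = torch.topk(self.logits, self.k).indices
        hard = torch.zeros_like(self.logits)
        hard[keep] = 1.0
        if not self.training:
            return hard
        # Straight-through estimator (a stand-in for the paper's relaxation):
        # the forward pass uses the hard mask, the backward pass flows
        # through the soft sigmoid gate instead.
        soft = torch.sigmoid(self.logits / self.tau)
        return hard + soft - soft.detach()


# Usage: scale the per-head outputs of a multi-head attention layer.
gate = HeadGate(n_heads=12, k=8)
heads = torch.randn(2, 12, 10, 64)            # (batch, heads, seq, head_dim)
pruned = heads * gate()[None, :, None, None]  # exactly 8 heads contribute

Because the mask is exactly k-sparse in every forward pass, the sparsity level is controlled precisely, matching the hard-constraint behaviour the abstract claims; only the gradient path through the soft gate is an assumption of this sketch.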

Facet counts for these hits: Catalogues 0; Bibliographies 0; Linked Open Data catalogues 0; Online resources 0; Open access documents 4.