1 | Slangvolution: A Causal Analysis of Semantic Change and Frequency Dynamics in Slang

2 | Bird’s Eye: Probing for Linguistic Graph Structures with a Simple Information-Theoretic Approach

8 | How Good Is NLP? A Sober Look at NLP Tasks through the Lens of Social Impact
In: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (2021)

10 | “Let Your Characters Tell Their Story”: A Dataset for Character-Centric Narrative Understanding
In: Findings of the Association for Computational Linguistics: EMNLP 2021 (2021)

12 | Efficient Text-based Reinforcement Learning by Jointly Leveraging State and Commonsense Graph Representations
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (2021)

15 | Causal Direction of Data Collection Matters: Implications of Causal and Anticausal Learning for NLP
In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021)

17 | Differentiable subset pruning of transformer heads
In: Transactions of the Association for Computational Linguistics, 9 (2021)
Abstract: Multi-head attention, a collection of several attention mechanisms that independently attend to different parts of the input, is the key ingredient in the Transformer. Recent work has shown, however, that a large proportion of the heads in a Transformer's multi-head attention mechanism can be safely pruned away without significantly harming the performance of the model; such pruning leads to models that are noticeably smaller and faster in practice. Our work introduces a new head pruning technique that we term differentiable subset pruning. Intuitively, our method learns per-head importance variables and then enforces a user-specified hard constraint on the number of unpruned heads. The importance variables are learned via stochastic gradient descent. We conduct experiments on natural language inference and machine translation; we show that differentiable subset pruning performs comparably to or better than previous work while offering precise control of the sparsity level.
ISSN: 2307-387X
URL: https://doi.org/10.3929/ethz-b-000528141
URL: https://hdl.handle.net/20.500.11850/528141
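
The abstract describes the method only at a high level: learn a per-head importance score, and keep exactly k heads by relaxing the hard top-k constraint so everything can be trained end-to-end with stochastic gradient descent. The sketch below illustrates that idea in PyTorch. It is a hypothetical reconstruction from the abstract alone, not the paper's code: the successive-softmax relaxation in soft_topk_gates and all module and parameter names are assumptions made for illustration.

```python
# Minimal sketch of differentiable subset pruning of attention heads.
# NOTE: illustrative reconstruction based on the abstract only; the
# soft_topk_gates relaxation and all names here are assumptions, not
# the paper's actual implementation.
import torch
import torch.nn as nn


def soft_topk_gates(logits: torch.Tensor, k: int, temperature: float = 0.5) -> torch.Tensor:
    """Differentiable relaxation of 'select the top-k entries'.

    Peels off k soft one-hot selections; the gates lie in [0, 1], sum to
    roughly k, and approach a hard top-k mask as temperature -> 0.
    """
    gates = torch.zeros_like(logits)
    working = logits.clone()
    for _ in range(k):
        probs = torch.softmax(working / temperature, dim=-1)
        gates = gates + probs
        # Down-weight what was just (softly) selected so the next
        # iteration picks a different head.
        working = working + torch.log1p(-probs.clamp(max=1 - 1e-6))
    return gates.clamp(max=1.0)


class GatedSelfAttention(nn.Module):
    """Self-attention whose heads are scaled by learned top-k gates."""

    def __init__(self, embed_dim: int, num_heads: int, k_heads: int):
        super().__init__()
        assert embed_dim % num_heads == 0
        self.h, self.d = num_heads, embed_dim // num_heads
        self.k_heads = k_heads  # hard budget: number of heads to keep
        self.qkv = nn.Linear(embed_dim, 3 * embed_dim)
        self.proj = nn.Linear(embed_dim, embed_dim)
        # Per-head importance variables, trained by SGD alongside the
        # rest of the network (cf. the abstract).
        self.head_logits = nn.Parameter(torch.zeros(num_heads))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, E = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (t.view(B, T, self.h, self.d).transpose(1, 2) for t in (q, k, v))
        att = torch.softmax(q @ k.transpose(-2, -1) / self.d ** 0.5, dim=-1)
        heads = att @ v  # (B, h, T, d): one output slice per head
        gates = soft_topk_gates(self.head_logits, self.k_heads)  # (h,)
        heads = heads * gates.view(1, self.h, 1, 1)  # gate heads before mixing
        return self.proj(heads.transpose(1, 2).reshape(B, T, E))


if __name__ == "__main__":
    layer = GatedSelfAttention(embed_dim=64, num_heads=8, k_heads=3)
    y = layer(torch.randn(2, 10, 64))
    y.sum().backward()  # gradients reach head_logits through the soft gates
    print(y.shape, layer.head_logits.grad)
```

Annealing the temperature toward zero pushes the gates to a hard 0/1 mask over exactly k_heads heads, which is the user-specified hard constraint the abstract mentions; at higher temperatures the gates stay soft, so gradients can still reorder head importances during training.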
18 | Scaling Within Document Coreference to Long Texts
In: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (2021)