
Search in the Catalogues and Directories

Hits 1 – 8 of 8

1. Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Translation ...
Abstract: We study the power of cross-attention in the Transformer architecture within the context of transfer learning for machine translation, and extend the findings of studies into cross-attention when training from scratch. We conduct a series of experiments through fine-tuning a translation model on data where either the source or target language has changed. These experiments reveal that fine-tuning only the cross-attention parameters is nearly as effective as fine-tuning all parameters (i.e., the entire translation model). We provide insights into why this is the case and observe that limiting fine-tuning in this manner yields cross-lingually aligned embeddings. The implications of this finding for researchers and practitioners include a mitigation of catastrophic forgetting, the potential for zero-shot translation, and the ability to extend machine translation models to several new language pairs with reduced parameter storage ...
Anthology paper link: https://aclanthology.org/2021.emnlp-main.132/
Keywords: Computational Linguistics; Machine Learning; Machine Learning and Data Mining; Machine Translation; Natural Language Processing
URL: https://dx.doi.org/10.48448/pgv2-cn55
https://underline.io/lecture/37970-cross-attention-is-all-you-need-adapting-pretrained-transformers-for-machine-translation
BASE
2. RICA: Evaluating Robust Inference Capabilities Based on Commonsense Axioms ...
BASE
3. Learn Continually, Generalize Rapidly: Lifelong Knowledge Accumulation for Few-shot Learning ...
BASE
4. RockNER: A Simple Method to Create Adversarial Examples for Evaluating the Robustness of Named Entity Recognition Models ...
BASE
5. ECONET: Effective Continual Pretraining of Language Models for Event Temporal Reasoning ...
BASE
6. Discretized Integrated Gradients for Explaining Language Models ...
BASE
7. Lawyers are Dishonest? Quantifying Representational Harms in Commonsense Knowledge Resources ...
BASE
8. Extract, Denoise and Enforce: Evaluating and Improving Concept Preservation for Text-to-Text Generation ...
BASE

Hits by source type: Catalogues: 0 | Bibliographies: 0 | Linked Open Data catalogues: 0 | Online resources: 0 | Open access documents: 8
© 2013 – 2024 Lin|gu|is|tik