
Search in the Catalogues and Directories

Hits 1 – 8 of 8

1
Knowledge Distillation for Quality Estimation ...
BASE
2
Knowledge Distillation for Quality Estimation ...
BASE
3
Knowledge distillation for quality estimation
Gajbhiye, Amit; Fomicheva, Marina; Alva-Manchego, Fernando. - Association for Computational Linguistics, 2021
BASE
4
deepQuest-py: large and distilled models for quality estimation
Alva-Manchego, Fernando; Obamuyide, Abiola; Gajbhiye, Amit. - Association for Computational Linguistics, 2021
BASE
5
deepQuest-py: large and distilled models for quality estimation
In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 382–389 (2021)
BASE
6
Knowledge distillation for quality estimation
In: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pp. 5091–5099 (2021)
Rights: © 2021 The Authors. Published by ACL. This is an open access article available under a Creative Commons licence. The published version can be accessed at the following link on the publisher's website: https://aclanthology.org/2021.findings-acl.452
Abstract: Quality Estimation (QE) is the task of automatically predicting Machine Translation quality in the absence of reference translations, making it applicable in real-time settings, such as translating online social media conversations. Recent success in QE stems from the use of multilingual pre-trained representations, where very large models lead to impressive results. However, the inference time, disk and memory requirements of such models do not allow for wide usage in the real world. Models trained on distilled pre-trained representations remain prohibitively large for many usage scenarios. We instead propose to directly transfer knowledge from a strong QE teacher model to a much smaller model with a different, shallower architecture. We show that this approach, in combination with data augmentation, leads to light-weight QE models that perform competitively with distilled pre-trained representations with 8x fewer parameters.
Keyword: knowledge distillation; machine translation; quality estimation
URL: https://doi.org/10.18653/v1/2021.findings-acl.452
http://hdl.handle.net/2436/624102
BASE
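The abstract in record 6 describes pairing a large QE teacher model with a much smaller, shallower student and using data augmentation so the student can learn from the teacher's predicted quality scores. The following is a minimal, illustrative PyTorch sketch of that teacher–student setup under stated assumptions: the feature dimensionality, network sizes, use of MSE on both gold and teacher scores, and all names are hypothetical choices for illustration and do not reproduce the authors' code.

# Illustrative sketch of knowledge distillation for a regression-style QE model.
# The teacher is a stand-in for a large pre-trained QE model; the student is a
# deliberately small feed-forward network. All sizes and names are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

FEAT_DIM = 768      # assumed dimensionality of sentence-pair features
N_LABELLED = 256    # toy number of human-labelled examples
N_AUGMENTED = 1024  # toy number of augmented (unlabelled) examples

class TeacherQE(nn.Module):
    """Stand-in for a large QE teacher mapping features to a quality score."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, 1),
        )
    def forward(self, x):
        return self.net(x).squeeze(-1)

class StudentQE(nn.Module):
    """Much smaller, shallower student with far fewer parameters."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )
    def forward(self, x):
        return self.net(x).squeeze(-1)

teacher, student = TeacherQE().eval(), StudentQE()

# Toy data: labelled features with gold quality scores, plus augmented
# sentence pairs that only the teacher scores (its outputs act as soft labels).
x_gold = torch.randn(N_LABELLED, FEAT_DIM)
y_gold = torch.rand(N_LABELLED)
x_aug = torch.randn(N_AUGMENTED, FEAT_DIM)
with torch.no_grad():
    y_aug = teacher(x_aug)

optimiser = torch.optim.Adam(student.parameters(), lr=1e-3)
mse = nn.MSELoss()

for epoch in range(5):
    optimiser.zero_grad()
    loss_gold = mse(student(x_gold), y_gold)   # fit the human labels
    loss_distil = mse(student(x_aug), y_aug)   # fit the teacher's scores
    loss = loss_gold + loss_distil
    loss.backward()
    optimiser.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")

Because QE is a regression task, the teacher's real-valued scores on augmented sentence pairs can serve directly as soft labels, so no temperature-scaled softmax is needed as in classification-style distillation.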
7
Bilinear Fusion of Commonsense Knowledge with Attention-Based NLI Models ...
BASE
8
Enhancing the Reasoning Capabilities of Natural Language Inference Models with Attention Mechanisms and External Knowledge
Gajbhiye, Amit. - 2020
BASE

Hits by source type:
Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 8