
Search in the Catalogues and Directories

Hits 21 – 29 of 29

21
Taking MT Evaluation Metrics to Extremes: Beyond Correlation with Human Judgments
In: Computational Linguistics, Vol. 45, Iss. 3, pp. 515-558 (2019)
BASE
22
The Role of human reference translation in machine translation evaluation
Fomicheva, Marina. - : Universitat Pompeu Fabra, 2017
In: TDX (Tesis Doctorals en Xarxa) (2017)
BASE
23
Reference Bias in Monolingual Machine Translation Evaluation
Fomicheva, Marina [author]; Specia, Lucia [author]. - Aachen : Universitätsbibliothek der RWTH Aachen, 2016
DNB Subject Category Language
24
CobaltF: A Fluent Metric for MT Evaluation
Fomicheva, Marina [author]; Bel, Núria [author]; Specia, Lucia [author]. - Aachen : Universitätsbibliothek der RWTH Aachen, 2016
DNB Subject Category Language
25
USFD at SemEval-2016 task 1: putting different state-of-the-arts into a box
In: pp. 609-613 (2016)
BASE
26
UPF-Cobalt Submission to WMT15 Metrics Task
Fomicheva, Marina [author]; Bel, Núria [author]; Cunha, Iria da [author]. - Aachen : Universitätsbibliothek der RWTH Aachen, 2015
DNB Subject Category Language
27
Análisis del tratamiento de la terminología en la traducción automática: implicaciones para la evaluación [Analysis of the treatment of terminology in machine translation: implications for evaluation]
In: Debate Terminológico, n. 10 (2013). ISSN: 1813-1867
BASE
28
UPF-Cobalt submission to WMT15 metrics task
Fomicheva, Marina; Bel Rafecas, Núria; da Cunha Fanego, Iria; Malinovskiy, Anton. - ACL (Association for Computational Linguistics)
Abstract: Paper presented at the 10th Workshop on Statistical Machine Translation, held in Lisbon, Portugal, 17-18 September 2015. An important limitation of automatic evaluation metrics is that, when comparing Machine Translation (MT) output to a human reference, they are often unable to discriminate between acceptable variation and differences that are indicative of MT errors. In this paper we present the UPF-Cobalt evaluation system, which addresses this issue by penalizing differences in the syntactic contexts of aligned candidate and reference words. We evaluate our metric on data from the WMT workshops of recent years and show that it performs competitively at both the segment and system levels. This work was supported by IULA (UPF) and the FIDGR grant program of the Generalitat de Catalunya.
Keyword: Evaluation metrics; Statistical Machine Translation
URL: http://hdl.handle.net/10230/36827
BASE
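The abstract above describes the core idea of penalizing differences in the syntactic contexts of aligned candidate and reference words. Below is a minimal illustrative sketch of that idea in Python, not the authors' implementation: word alignment and dependency heads are assumed to be already computed, and the data structure, function name and penalty weight are hypothetical.

# Minimal sketch (assumed, not the published UPF-Cobalt code): score aligned
# candidate/reference word pairs and penalize pairs whose syntactic contexts
# (approximated here by each word's dependency head) differ.
def context_penalized_score(aligned_pairs, penalty=0.5):
    """aligned_pairs: list of dicts with keys 'cand', 'ref' (aligned words)
    and 'cand_head', 'ref_head' (their syntactic heads).
    Returns a score in [0, 1]; 1 means exact matches in matching contexts."""
    if not aligned_pairs:
        return 0.0
    total = 0.0
    for pair in aligned_pairs:
        # Base credit for the lexical match itself.
        match = 1.0 if pair['cand'].lower() == pair['ref'].lower() else 0.5
        # Penalize the pair if the two words appear in different syntactic contexts.
        if pair['cand_head'].lower() != pair['ref_head'].lower():
            match *= (1.0 - penalty)
        total += match
    return total / len(aligned_pairs)

if __name__ == "__main__":
    pairs = [
        {'cand': 'signed', 'ref': 'signed', 'cand_head': 'minister', 'ref_head': 'minister'},
        {'cand': 'treaty', 'ref': 'treaty', 'cand_head': 'rejected', 'ref_head': 'signed'},
    ]
    print(round(context_penalized_score(pairs), 3))  # 0.75: second pair is penalized

In the toy example the second pair is lexically identical but attaches to a different head, so its contribution is halved; this is the kind of acceptable-variation-versus-error distinction the abstract refers to.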
29
CobaltF: a fluent metric for MT evaluation
da Cunha Fanego, Iria; Malinovskiy, Anton; Bel Rafecas, Núria. - ACL (Association for Computational Linguistics)
BASE


Hits by source type:
Catalogues: 3
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 26