1. Ara-Women-Hate: The first Arabic Hate Speech corpus regarding Women ...
2. Towards the Early Detection of Child Predators in Chat Rooms: A BERT-based Approach ...
3. STaCK: Sentence Ordering with Temporal Commonsense Knowledge ...
4. Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution ...
5. Graphine: A Dataset for Graph-aware Terminology Definition Generation ...
6. End-to-end style-conditioned poetry generation: What does it take to learn from examples alone? ...
7. To what extent do human explanations of model behavior align with actual model behavior? ...
8. Time-aware Graph Neural Network for Entity Alignment between Temporal Knowledge Graphs ...
9. What’s Hidden in a One-layer Randomly Weighted Transformer? ...

11. Sometimes We Want Ungrammatical Translations ...
Abstract:
Rapid progress in Neural Machine Translation (NMT) systems over the last few years has focused primarily on improving translation quality, and as a secondary focus, improving robustness to perturbations (e.g. spelling). While performance and robustness are important objectives, by over-focusing on these, we risk overlooking other important properties. In this paper, we draw attention to the fact that for some applications, faithfulness to the original (input) text is important to preserve, even if it means introducing unusual language patterns in the (output) translation. We propose a simple, novel way to quantify whether an NMT system exhibits robustness or faithfulness, by focusing on the case of word-order perturbations. We explore a suite of functions to perturb the word order of source sentences without deleting or injecting tokens, and measure their effects on the target side. Across several experimental conditions, we observe a strong tendency towards robustness rather than faithfulness. These results ...

Keywords:
Computational Linguistics; Machine Learning; Machine Learning and Data Mining; Machine translation; Natural Language Processing; Neural Network

URL:
https://dx.doi.org/10.48448/wwaf-em57
https://underline.io/lecture/39485-sometimes-we-want-ungrammatical-translations
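
The abstract describes perturbing the word order of source sentences without deleting or injecting tokens, then measuring whether the translation changes. As an illustration only, here is a minimal Python sketch of two such word-order perturbations and a naive positional-overlap probe; the function names, the swap rate, the `translate` callable, and the scoring rule are assumptions for this sketch, not the paper's actual suite of perturbation functions or its metric.

```python
import random

def swap_adjacent(tokens, rate=0.2, seed=0):
    """Perturb word order by swapping adjacent token pairs.
    No tokens are deleted or injected, matching the constraint
    described in the abstract (hypothetical perturbation, not the
    paper's exact function suite)."""
    rng = random.Random(seed)
    out = list(tokens)
    i = 0
    while i < len(out) - 1:
        if rng.random() < rate:
            out[i], out[i + 1] = out[i + 1], out[i]
            i += 2  # do not re-swap the pair we just moved
        else:
            i += 1
    return out

def shuffle_all(tokens, seed=0):
    """A stronger perturbation: a full random permutation of the tokens."""
    rng = random.Random(seed)
    out = list(tokens)
    rng.shuffle(out)
    return out

def robustness_probe(translate, source_tokens, perturb, **kwargs):
    """Compare translations of the original and perturbed source.
    A score near 1.0 means the output barely changed (robust);
    lower scores mean the perturbation leaked into the output
    (faithful). `translate` is any callable mapping a token list
    to a token list; the positional-overlap score is an assumption."""
    base = translate(source_tokens)
    pert = translate(perturb(source_tokens, **kwargs))
    same = sum(a == b for a, b in zip(base, pert))
    return same / max(len(base), len(pert), 1)
```

Under this toy probe, a system that translates a sentence and its adjacent-swap variant to the same target scores 1.0 (robust), while a system whose output order tracks the scrambled input scores lower (faithful).
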
12. Pruning Neural Machine Translation for Speed Using Group Lasso ...
13. Elementary-Level Math Word Problem Generation using Pre-Trained Transformers ...
14. Does External Knowledge Help Explainable Natural Language Inference? Automatic Evaluation vs. Human Ratings ...
15. The Low-Resource Double Bind: An Empirical Study of Pruning for Low-Resource Machine Translation ...
16. Knowledge Graph Representation Learning using Ordinary Differential Equations ...
17. What Models Know About Their Attackers: Deriving Attacker Information From Latent Representations ...
18. Mind the Context: The Impact of Contextualization in Neural Module Networks for Grounding Visual Referring Expressions ...
20. ProtoInfoMax: Prototypical Networks with Mutual Information Maximization for Out-of-Domain Detection ...