1. USCORE: An Effective Approach to Fully Unsupervised Evaluation Metrics for Machine Translation
2. Constrained Density Matching and Modeling for Cross-lingual Alignment of Contextualized Representations
3. Towards Explainable Evaluation Metrics for Natural Language Generation
4. End-to-end style-conditioned poetry generation: What does it take to learn from examples alone?
6. Changes in European Solidarity Before and During COVID-19: Evidence from a Large Crowd- and Expert-Annotated Twitter Dataset
7. BERT-Defense: A Probabilistic Model Based on BERT to Combat Cognitively Inspired Orthographic Adversarial Attacks

   Abstract: Adversarial attacks expose important blind spots of deep learning systems. While word- and sentence-level attack scenarios mostly deal with finding semantic paraphrases of the input that fool NLP models, character-level attacks typically insert typos into the input stream. It is commonly thought that these are easier to defend against via spelling-correction modules. In this work, we show that both a standard spellchecker and the approach of Pruthi et al. (2019), which is trained to defend against insertions, deletions, and swaps, perform poorly on the character-level benchmark recently proposed by Eger and Benz (2020), which includes more challenging attacks such as visual and phonetic perturbations and missing word segmentations. In contrast, we show that an untrained iterative approach which combines context-independent character-level information with context-dependent information from BERT's masked language modeling can perform on par with human crowd-workers from Amazon Mechanical Turk (AMT) supervised via 3-shot ...

   Venue: Findings of ACL 2021

   Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences; Machine Learning (cs.LG)

   URL: https://dx.doi.org/10.48550/arxiv.2106.01452 / https://arxiv.org/abs/2106.01452
8. Global Explainability of BERT-Based Evaluation Metrics by Disentangling along Linguistic Factors
10. Inducing Language-Agnostic Multilingual Representations
11. Probing Multilingual BERT for Genetic and Typological Signals
12. On the Limitations of Cross-lingual Encoders as Exposed by Reference-Free Machine Translation Evaluation
13. How to Probe Sentence Embeddings in Low-Resource Languages: On Structural Design Choices for Probing Task Evaluation
14. Vec2Sent: Probing Sentence Embeddings With Natural Language Generation
15. From Hero to Zéroe: A Benchmark of Low-Level Adversarial Attacks
17. On Aligning OpenIE Extractions with Knowledge Bases: A Case Study
18. Semantic Change and Emerging Tropes in a Large Corpus of New High German Poetry
19. Cross-lingual Argumentation Mining: Machine Translation (and a bit of Projection) is All You Need!
20. What is the Essence of a Claim? Cross-Domain Claim Identification