
Search in the Catalogues and Directories

Hits 1 – 20 of 860

1. On Homophony and Rényi Entropy ... (BASE)
2. Identity-Based Patterns in Deep Convolutional Networks: Generative Adversarial Phonology and Reduplication ... (BASE)
3. Signed Coreference Resolution ... (BASE)
4. Backtranslation in Neural Morphological Inflection ... (BASE)
5. Rule-based Morphological Inflection Improves Neural Terminology Translation ... (BASE)
6. Translating Headers of Tabular Data: A Pilot Study of Schema Translation ... (BASE)
7. A Prototype Free/Open-Source Morphological Analyser and Generator for Sakha ... (BASE)
8. Automatic Error Type Annotation for Arabic ... (BASE)
9. Developing Conversational Data and Detection of Conversational Humor in Telugu ... (BASE)
10. An Information-Theoretic Characterization of Morphological Fusion ... (BASE)
11. Cross-document Event Identity via Dense Annotation ... (BASE)
12. Navigating the Kaleidoscope of COVID-19 Misinformation Using Deep Learning ... (BASE)
13. (Mis)alignment Between Stance Expressed in Social Media Data and Public Opinion Surveys ... (BASE)
14. Adversarial Regularization as Stackelberg Game: An Unrolled Optimization Approach (BASE)
Anthology paper link: https://aclanthology.org/2021.emnlp-main.527/
Abstract: Adversarial regularization has been shown to improve the generalization performance of deep learning models in various natural language processing tasks. Existing works usually formulate the method as a zero-sum game, which is solved by alternating gradient descent/ascent algorithms. Such a formulation treats the adversarial and the defending players equally, which is undesirable because only the defending player contributes to the generalization performance. To address this issue, we propose Stackelberg Adversarial Regularization (SALT), which formulates adversarial regularization as a Stackelberg game. This formulation induces a competition between a leader and a follower, where the follower generates perturbations, and the leader trains the model subject to the perturbations. Different from conventional approaches, in SALT, the leader is in an advantageous position. When the leader moves, it recognizes the strategy of the ...
Keywords: Computational Linguistics; Deep Learning; Machine Learning; Machine Learning and Data Mining; Natural Language Processing; Natural Language Understanding
URL: https://dx.doi.org/10.48448/yvpq-xs21
https://underline.io/lecture/37710-adversarial-regularization-as-stackelberg-game-an-unrolled-optimization-approach
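The abstract above contrasts SALT with the conventional zero-sum formulation, which is solved by alternating gradient descent/ascent. That baseline can be sketched on a toy one-parameter problem; this is an illustrative sketch of the alternating-update scheme only, not the SALT algorithm (which unrolls the follower's update), and all names and numbers are invented for the example:

```python
# Toy sketch of zero-sum adversarial regularization solved by
# alternating gradient descent/ascent (the baseline the abstract
# contrasts with SALT). One linear training pair; values illustrative.

x, y = 2.0, 4.0        # single training example
w, delta = 0.0, 0.0    # model weight; adversarial input perturbation
eta, eps = 0.05, 0.5   # step size; perturbation budget

for _ in range(200):
    # adversary: gradient ascent step on delta, projected onto [-eps, eps]
    residual = w * (x + delta) - y
    delta = max(-eps, min(eps, delta + eta * 2 * residual * w))
    # model: gradient descent step on w against the current perturbation
    residual = w * (x + delta) - y
    w -= eta * 2 * residual * (x + delta)

# The model ends up fitting the adversarially shifted input,
# i.e. w approaches y / (x + delta).
print(round(w, 3), round(delta, 3))
```

The two players are treated symmetrically here, each reacting to the other's last move; SALT instead puts the leader (the model) in an advantageous position by letting it anticipate the follower's perturbation strategy.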
15. Rewards with Negative Examples for Reinforced Topic-Focused Abstractive Summarization ... (BASE)
16. Distantly-Supervised Named Entity Recognition with Noise-Robust Learning and Language Model Augmented Self-Training ... (BASE)
17. Low-Resource Dialogue Summarization with Domain-Agnostic Multi-Source Pretraining ... (BASE)
18. HittER: Hierarchical Transformers for Knowledge Graph Embeddings ... (BASE)
19. Ara-Women-Hate: The first Arabic Hate Speech corpus regarding Women ... (BASE)
20. Detecting Gender Bias using Explainability ... (BASE)


Hits by collection: Catalogues 0 · Bibliographies 0 · Linked Open Data catalogues 0 · Online resources 0 · Open access documents 860
© 2013 – 2024 Lin|gu|is|tik | Imprint | Privacy Policy | Change privacy settings