
Search in the Catalogues and Directories

Hits 1 – 20 of 600

1. Priorless Recurrent Networks Learn Curiously ... (BASE)
2. Character Alignment in Morphologically Complex Translation Sets for Related Languages ... (BASE)
3. Composing Byte-Pair Encodings for Morphological Sequence Classification ... (BASE)
4. Variation in Universal Dependencies annotation: A token based typological case study on adpossessive constructions ... (BASE)
5. Corpus evidence for word order freezing in Russian and German ... (BASE)
6. An analysis of language models for metaphor recognition ... (BASE)
7. Noise Isn't Always Negative: Countering Exposure Bias in Sequence-to-Sequence Inflection Models ... (BASE)
8. Exhaustive Entity Recognition for Coptic - Challenges and Solutions ... (BASE)
9. Imagining Grounded Conceptual Representations from Perceptual Information in Situated Guessing Games ... (BASE)
10. Attentively Embracing Noise for Robust Latent Representation in BERT ... (BASE; a code sketch of the all-token approach described in this abstract follows the results list)
Abstract: Modern digital personal assistants interact with users through voice. Therefore, they heavily rely on automatic speech recognition (ASR) in order to convert speech to text and perform further tasks. We introduce EBERT, which stands for EmbraceBERT, with the goal of extracting more robust latent representations for the task of noisy ASR text classification. Conventionally, BERT is fine-tuned for downstream classification tasks using only the [CLS] starter token, with the remaining tokens being discarded. We propose using all encoded transformer tokens and further encode them using a novel attentive embracement layer and multi-head attention layer. This approach uses the otherwise discarded tokens as a source of additional information and the multi-head attention in conjunction with the attentive embracement layer to select important features from clean data during training. This allows for the extraction of a robust latent vector resulting in improved classification performance during testing when presented ...
Keyword: Computer and Information Science; Natural Language Processing; Neural Network
URL: https://underline.io/lecture/6216-attentively-embracing-noise-for-robust-latent-representation-in-bert
https://dx.doi.org/10.48448/z3ex-gx97
11. Catching Attention with Automatic Pull Quote Selection ... (BASE)
12. Opening Ceremony ... (BASE)
13. Classifier Probes May Just Learn from Linear Context Features ... (BASE)
14. Seeing the world through text: Evaluating image descriptions for commonsense reasoning in machine reading comprehension ... (BASE)
15. Part 6 - Cross-linguistic Studies ... (BASE)
16. Manifold Learning-based Word Representation Refinement Incorporating Global and Local Information ... (BASE)
17. HMSid and HMSid2 at PARSEME Shared Task 2020: Computational Corpus Linguistics and unseen-in-training MWEs ... (BASE)
18. Multi-dialect Arabic BERT for Country-level Dialect Identification ... (BASE)
19. Autoencoding Improves Pre-trained Word Embeddings ... (BASE)
20. Exploring End-to-End Differentiable Natural Logic Modeling ... (BASE)
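
To make the all-token approach from entry 10 concrete, here is a minimal sketch, assuming PyTorch, of classifying from every encoded BERT token rather than only the [CLS] token: a multi-head attention layer re-encodes the token states, and a simple attentive pooling layer condenses them into one vector. The class names (AttentivePool, EmbraceClassifier) are hypothetical, and the pooling here is a generic stand-in for the paper's attentive embracement layer, not the authors' implementation.

# Sketch only: a generic all-token classifier head, not the EmbraceBERT code.
import torch
import torch.nn as nn


class AttentivePool(nn.Module):
    """Learned attention weights over token positions -> one pooled vector."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.score = nn.Linear(hidden_size, 1)

    def forward(self, tokens: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, hidden); mask: (batch, seq_len), 1 = real token
        scores = self.score(tokens).squeeze(-1)              # (batch, seq_len)
        scores = scores.masked_fill(mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)              # attention over tokens
        return torch.einsum("bs,bsh->bh", weights, tokens)   # weighted sum


class EmbraceClassifier(nn.Module):
    """Attend over all BERT token states, pool them, then classify."""

    def __init__(self, hidden_size: int = 768, num_heads: int = 8, num_classes: int = 2):
        super().__init__()
        self.mha = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.pool = AttentivePool(hidden_size)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, token_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        # token_states: a BERT encoder's last_hidden_state, (batch, seq, hidden)
        pad_mask = attention_mask == 0                       # True where padded
        attended, _ = self.mha(token_states, token_states, token_states,
                               key_padding_mask=pad_mask)
        pooled = self.pool(attended, attention_mask)         # (batch, hidden)
        return self.classifier(pooled)                       # (batch, num_classes)


if __name__ == "__main__":
    batch, seq, hidden = 2, 16, 768
    states = torch.randn(batch, seq, hidden)                 # stand-in for BERT output
    mask = torch.ones(batch, seq, dtype=torch.long)
    logits = EmbraceClassifier()(states, mask)
    print(logits.shape)  # torch.Size([2, 2])

In use, token_states would come from a fine-tuned BERT encoder; the point of the design, per the abstract, is that the otherwise discarded non-[CLS] tokens supply extra information for a more noise-robust pooled representation.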

