2. Character Alignment in Morphologically Complex Translation Sets for Related Languages
3. Composing Byte-Pair Encodings for Morphological Sequence Classification
4. Variation in Universal Dependencies annotation: A token-based typological case study on adpossessive constructions
5. Corpus evidence for word order freezing in Russian and German
7. Noise Isn't Always Negative: Countering Exposure Bias in Sequence-to-Sequence Inflection Models
8. Exhaustive Entity Recognition for Coptic - Challenges and Solutions
9. Imagining Grounded Conceptual Representations from Perceptual Information in Situated Guessing Games
10. Attentively Embracing Noise for Robust Latent Representation in BERT
Abstract: Modern digital personal assistants interact with users through voice, so they rely heavily on automatic speech recognition (ASR) to convert speech to text and perform further tasks. We introduce EBERT, which stands for EmbraceBERT, with the goal of extracting more robust latent representations for the task of noisy ASR text classification. Conventionally, BERT is fine-tuned for downstream classification tasks using only the [CLS] token, with the remaining token representations discarded. We propose using all encoded transformer tokens and further encoding them with a novel attentive embracement layer and a multi-head attention layer. This approach uses the otherwise discarded tokens as a source of additional information, and the multi-head attention works in conjunction with the attentive embracement layer to select important features from clean data during training. This allows the extraction of a robust latent vector, resulting in improved classification performance at test time when presented ...
Keywords: Computer and Information Science; Natural Language Processing; Neural Network
URL: https://underline.io/lecture/6216-attentively-embracing-noise-for-robust-latent-representation-in-bert
DOI: https://dx.doi.org/10.48448/z3ex-gx97
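
Below is a minimal PyTorch sketch of the idea this abstract describes, not the authors' implementation: classify from all BERT token outputs rather than the [CLS] vector alone, re-encode them with multi-head attention, and fuse them with an embracement-style step that samples, for each feature dimension, which token supplies that dimension, with probabilities given by learned attention scores. The class names (`AttentiveEmbrace`, `EBERTClassifier`) and the exact form of the embracement layer are assumptions; the paper's layer may differ.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class AttentiveEmbrace(nn.Module):
    """Fuse a sequence of token vectors into one vector by sampling, per
    feature dimension, which token supplies that dimension; sampling
    probabilities come from a learned per-token attention score.
    (Simplified assumption, not the paper's exact layer.)"""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.score = nn.Linear(hidden_size, 1)

    def forward(self, tokens: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, hidden); attention_mask: (batch, seq_len), 1 = real token
        scores = self.score(tokens).squeeze(-1)                 # (batch, seq_len)
        scores = scores.masked_fill(attention_mask == 0, float("-inf"))
        probs = torch.softmax(scores, dim=-1)                   # (batch, seq_len)
        hidden = tokens.size(-1)
        # For every feature dimension, sample the index of the token it is taken from.
        idx = torch.multinomial(probs, num_samples=hidden, replacement=True)  # (batch, hidden)
        # out[b, d] = tokens[b, idx[b, d], d]
        return tokens.permute(0, 2, 1).gather(2, idx.unsqueeze(-1)).squeeze(-1)

class EBERTClassifier(nn.Module):
    """Classify from ALL encoder tokens instead of only the [CLS] vector."""
    def __init__(self, num_labels: int, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        self.mha = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        self.embrace = AttentiveEmbrace(hidden)
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        tokens = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        # Re-encode all tokens with multi-head self-attention (padding masked out).
        attended, _ = self.mha(tokens, tokens, tokens,
                               key_padding_mask=(attention_mask == 0))
        fused = self.embrace(attended, attention_mask)          # (batch, hidden)
        return self.classifier(fused)

# Hypothetical usage (label count is illustrative):
# from transformers import BertTokenizer
# tok = BertTokenizer.from_pretrained("bert-base-uncased")
# batch = tok(["play some jazz music"], return_tensors="pt", padding=True)
# logits = EBERTClassifier(num_labels=7)(batch["input_ids"], batch["attention_mask"])
```

One way to read the robustness claim: the per-dimension sampling makes the fused vector stochastic during training, acting as a dropout-like regularizer, which plausibly helps the classifier tolerate ASR transcription noise at test time.
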
13. Classifier Probes May Just Learn from Linear Context Features
14. Seeing the world through text: Evaluating image descriptions for commonsense reasoning in machine reading comprehension
16. Manifold Learning-based Word Representation Refinement Incorporating Global and Local Information
17. HMSid and HMSid2 at PARSEME Shared Task 2020: Computational Corpus Linguistics and unseen-in-training MWEs
18. Multi-dialect Arabic BERT for Country-level Dialect Identification
20. Exploring End-to-End Differentiable Natural Logic Modeling