
Search in the Catalogues and Directories

Hits 1 – 20 of 81

1
Magic dust for cross-lingual adaptation of monolingual wav2vec-2.0
In: ICASSP 2022 ; https://hal.archives-ouvertes.fr/hal-03544515 ; ICASSP 2022, May 2022, Singapore (2022)
2
Simple and Effective Unsupervised Speech Synthesis ...
3
Learning Audio-Video Language Representations
Rouditchenko, Andrew. - : Massachusetts Institute of Technology, 2021
4
Cascaded Multilingual Audio-Visual Learning from Videos ...
5
Magic dust for cross-lingual adaptation of monolingual wav2vec-2.0 ...
6
Text-Free Image-to-Speech Synthesis Using Learned Segmental Units ...
7
Exposure Bias versus Self-Recovery: Are Distortions Really Incremental for Autoregressive Text Generation? ...
8
Mitigating Biases in Toxic Language Detection through Invariant Rationalization ...
9
Mitigating Biases in Toxic Language Detection through Invariant Rationalization ...
10
A Convolutional Deep Markov Model for Unsupervised Speech Representation Learning
In: Interspeech 2020 ; https://hal.archives-ouvertes.fr/hal-02912029 ; Interspeech 2020, Oct 2020, Shanghai, China (2020)
11
Similarity Analysis of Contextual Word Representation Models ...
12
CSTNet: Contrastive Speech Translation Network for Self-Supervised Speech Representation Learning ...
13
A Convolutional Deep Markov Model for Unsupervised Speech Representation Learning ...
14
What Was Written vs. Who Read It: News Media Profiling Using Text Analysis and Social Media Context ...
15
Vector-Quantized Autoregressive Predictive Coding ...
Chung, Yu-An; Tang, Hao; Glass, James. - : arXiv, 2020
Abstract: Autoregressive Predictive Coding (APC), as a self-supervised objective, has enjoyed success in learning representations from large amounts of unlabeled data, and the learned representations are rich for many downstream tasks. However, the connection between low self-supervised loss and strong performance in downstream tasks remains unclear. In this work, we propose Vector-Quantized Autoregressive Predictive Coding (VQ-APC), a novel model that produces quantized representations, allowing us to explicitly control the amount of information encoded in the representations. By studying a sequence of increasingly limited models, we reveal the constituents of the learned representations. In particular, we confirm the presence of information with probing tasks, while showing the absence of information with mutual information, uncovering the model's preference in preserving speech information as its capacity becomes constrained. We find that there exists a point where phonetic and speaker information are amplified to ...
Keyword: Audio and Speech Processing eess.AS; Computation and Language cs.CL; FOS: Computer and information sciences; FOS: Electrical engineering, electronic engineering, information engineering; Machine Learning cs.LG; Sound cs.SD
URL: https://arxiv.org/abs/2005.08392
https://dx.doi.org/10.48550/arxiv.2005.08392
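The quantization step this abstract describes can be illustrated with a short sketch: a nearest-neighbor codebook lookup with a straight-through gradient estimator, the standard trick for making discrete code selection differentiable. This is a minimal illustration assuming PyTorch; the class name, codebook size, and dimensions are illustrative, not taken from the paper.

    import torch
    import torch.nn as nn

    class VectorQuantizer(nn.Module):
        """Minimal vector-quantization layer: map each frame to its nearest
        codeword and pass gradients straight through. Sizes are illustrative;
        see arXiv:2005.08392 for the actual VQ-APC configuration."""

        def __init__(self, num_codes: int = 128, code_dim: int = 512):
            super().__init__()
            self.codebook = nn.Embedding(num_codes, code_dim)

        def forward(self, h: torch.Tensor):
            # h: (batch, time, code_dim) hidden states from an encoder.
            flat = h.reshape(-1, h.size(-1))                  # (batch*time, code_dim)
            # Pairwise L2 distances between frames and codewords.
            dists = torch.cdist(flat, self.codebook.weight)   # (batch*time, num_codes)
            codes = dists.argmin(dim=-1).view(h.shape[:-1])   # (batch, time) indices
            q = self.codebook(codes)                          # quantized vectors
            # Straight-through estimator: forward pass uses q, backward pass
            # copies gradients through h as if quantization were the identity.
            q = h + (q - h).detach()
            return q, codes

Shrinking num_codes limits how much information the quantized representation can carry, which is the control knob the abstract refers to when it speaks of "increasingly limited models".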
16
Non-Autoregressive Predictive Coding for Learning Speech Representations from Local Dependencies ...
17
Improved Speech Representations with Multi-Target Autoregressive Predictive Coding ...
Chung, Yu-An; Glass, James. - : arXiv, 2020
18
Classifying Alzheimer's Disease Using Audio and Text-Based Representations of Speech
In: Frontiers (2020)
19
Identification of digital voice biomarkers for cognitive health
In: Explor Med (2020)
20
On the Linguistic Representational Power of Neural Machine Translation Models
In: Computational Linguistics, Vol 46, Iss 1, Pp 1-52 (2020)


Catalogues: 1, 0, 6, 0, 0, 0, 0
Bibliographies: 12, 0, 0, 0, 0, 0, 0, 0, 1
Linked Open Data catalogues: 0
Online resources: 0, 0, 0, 0
Open access documents: 65, 0, 0, 0, 0