1. Joint Modeling of Code-Switched and Monolingual ASR via Conditional Factorization ... (BASE)
2. Source and Target Bidirectional Knowledge Distillation for End-to-end Speech Translation ...
3. Self-Guided Curriculum Learning for Neural Machine Translation ...
4. Arabic Speech Recognition by End-to-End, Modular Systems and Human ...
5. Leveraging End-to-End ASR for Endangered Language Documentation: An Empirical Study on Yoloxóchitl Mixtec ...
7. Leveraging Pre-trained Language Model for Speech Sentiment Analysis ...
8. End-to-end ASR to jointly predict transcriptions and linguistic annotations ...
9. Differentiable Allophone Graphs for Language-Universal Speech Recognition ...
10. Speech Representation Learning Combining Conformer CPC with Deep Cluster for the ZeroSpeech Challenge 2021 ...
11. CHiME-6 Challenge: Tackling multispeaker speech recognition for unsegmented recordings
    In: CHiME 2020 - 6th International Workshop on Speech Processing in Everyday Environments, May 2020, Barcelona / Virtual, Spain. https://hal.inria.fr/hal-02546993
12. Learning Speaker Embedding from Text-to-Speech ...
    Abstract: Zero-shot multi-speaker Text-to-Speech (TTS) generates target speaker voices given an input text and the corresponding speaker embedding. In this work, we investigate the effectiveness of the TTS reconstruction objective to improve representation learning for speaker verification. We jointly trained end-to-end Tacotron 2 TTS and speaker embedding networks in a self-supervised fashion. We hypothesize that the embeddings will contain minimal phonetic information since the TTS decoder will obtain that information from the textual input. TTS reconstruction can also be combined with speaker classification to enhance these embeddings further. Once trained, the speaker encoder computes representations for the speaker verification task, while the rest of the TTS blocks are discarded. We investigated training TTS from either manual or ASR-generated transcripts. The latter allows us to train embeddings on datasets without manual transcripts. We compared ASR transcripts and Kaldi phone alignments as TTS inputs, showing ...
    Keywords: Audio and Speech Processing (eess.AS); Machine Learning (cs.LG); Sound (cs.SD); FOS: Computer and information sciences; FOS: Electrical engineering, electronic engineering, information engineering
    URL: https://arxiv.org/abs/2010.11221 ; https://dx.doi.org/10.48550/arxiv.2010.11221
14. A Comparative Study on Transformer vs RNN in Speech Applications ...
16. Towards Online End-to-end Transformer Automatic Speech Recognition ...
18. The fifth 'CHiME' Speech Separation and Recognition Challenge: Dataset, task and baselines
    In: Interspeech 2018 - 19th Annual Conference of the International Speech Communication Association, Sep 2018, Hyderabad, India. https://hal.inria.fr/hal-01744021
19. Analysis of Multilingual Sequence-to-Sequence speech recognition systems ...
20. Language model integration based on memory control for sequence to sequence speech recognition ...