1. Deep Neural Convolutive Matrix Factorization for Articulatory Representation Decomposition ...
   Source: BASE
2. Cross-Lingual Text-to-Speech Using Multi-Task Learning and Speaker Classifier Joint Training ...
4. Improving the fusion of acoustic and text representations in RNN-T ...
6. Automatic Depression Detection: An Emotional Audio-Textual Corpus and a GRU/BiLSTM-based Model ...
8. Separate What You Describe: Language-Queried Audio Source Separation ...
9. Chain-based Discriminative Autoencoders for Speech Recognition ...
10. Unsupervised word-level prosody tagging for controllable speech synthesis ...
11. gTLO: A Generalized and Non-linear Multi-Objective Deep Reinforcement Learning Approach ...
12. Cetacean Translation Initiative: a roadmap to deciphering the communication of sperm whales ...
13. Improving End-To-End Modeling for Mispronunciation Detection with Effective Augmentation Mechanisms ...
14. An Improved StarGAN for Emotional Voice Conversion: Enhancing Voice Quality and Data Augmentation ...
Abstract:
Emotional Voice Conversion (EVC) aims to convert the emotional style of a source speech signal to a target style while preserving its content and speaker identity. Previous EVC studies do not disentangle emotional information from the emotion-independent information that should be preserved; they transform the signal monolithically and generate low-quality audio with linguistic distortions. To address this distortion problem, we propose a novel StarGAN framework with a two-stage training process that separates emotional features from emotion-independent ones by using an autoencoder with two encoders as the generator of the Generative Adversarial Network (GAN). The proposed model achieves favourable results in both the objective and the subjective evaluation of distortion, which shows that it can effectively reduce distortion. Furthermore, in data augmentation experiments for end-to-end speech emotion recognition, the ...
Note: Accepted by Interspeech 2021 ...
Keywords:
Artificial Intelligence (cs.AI); Audio and Speech Processing (eess.AS); Sound (cs.SD); FOS: Computer and information sciences; FOS: Electrical engineering, electronic engineering, information engineering

URL: https://dx.doi.org/10.48550/arxiv.2107.08361 ; https://arxiv.org/abs/2107.08361
16. Speech2Slot: An End-to-End Knowledge-based Slot Filling from Speech ...
17. NIST SRE CTS Superset: A large-scale dataset for telephony speaker recognition ...
18. Interpreting intermediate convolutional layers of CNNs trained on raw speech ...
19. A multispeaker dataset of raw and reconstructed speech production real-time MRI video and 3D volumetric images ...
20. A multispeaker dataset of raw and reconstructed speech production real-time MRI video and 3D volumetric images ...