Glow-WaveGAN: Learning Speech Representations from GAN-based Auto-encoder For High Fidelity Flow-based Speech Synthesis

Jian Cong1, Shan Yang2, Lei Xie1, Dan Su2
1 Audio, Speech and Language Processing Group (ASLP@NPU), School of Computer Science, Northwestern Polytechnical University, Xi'an, China
2 Tencent AI Lab, China

Abstract

Current two-stage TTS frameworks typically combine an acoustic model with a vocoder: the acoustic model predicts a low-resolution intermediate representation, such as a Mel-spectrogram, and the vocoder generates the waveform from this representation. Although the intermediate representation serves as a bridge, a critical mismatch remains between the acoustic model and the vocoder, since the two are usually trained separately and operate on different distributions of the representation, leading to artifacts in the synthesized speech. In this work, instead of using a pre-designed intermediate representation as in most previous works, we propose to combine a VAE with a GAN to learn a latent representation directly from speech, and then use a flow-based acoustic model to model the distribution of this latent representation from text. In this way, the mismatch problem is mitigated, as the two stages operate on the same distribution. Results demonstrate that the flow-based acoustic model can accurately model the distribution of our learned speech representation, and that the proposed TTS framework, named Glow-WaveGAN, produces high-fidelity speech, outperforming the state-of-the-art GAN-based model.
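The pipeline above can be sketched as a simple data-flow toy. Everything below is an illustrative assumption, not the paper's actual architecture: the function names, shapes, and transforms are stand-ins. The only point the sketch makes is that the flow-based acoustic model and the waveform decoder share the same latent representation z, so there is no Mel-spectrogram bridge between the two stages.

```python
import numpy as np

# Assumed latent resolution (hypothetical; the real model's shapes differ).
FRAMES, LATENT_DIM = 100, 64

def wave_encoder(wav: np.ndarray) -> np.ndarray:
    """Stand-in for the VAE encoder: waveform -> frame-level latent z."""
    return wav.reshape(FRAMES, LATENT_DIM)

def wave_decoder(z: np.ndarray) -> np.ndarray:
    """Stand-in for the GAN-trained decoder: latent z -> waveform."""
    return z.reshape(-1)

def flow_acoustic_model(text_ids: list) -> np.ndarray:
    """Stand-in for the flow-based acoustic model: text -> latent z."""
    rng = np.random.default_rng(sum(text_ids))
    return rng.standard_normal((FRAMES, LATENT_DIM))

# Stage 1 (auto-encoder training): speech -> z -> speech reconstruction.
wav = np.random.default_rng(1).standard_normal(FRAMES * LATENT_DIM)
z = wave_encoder(wav)
wav_rec = wave_decoder(z)

# Stage 2 (synthesis): text -> z -> speech. The acoustic model's output z_pred
# lives in the same latent space the decoder was trained on, which is what
# removes the distribution mismatch between the two stages.
z_pred = flow_acoustic_model([3, 1, 4])
wav_syn = wave_decoder(z_pred)
```

In this toy version the encoder/decoder pair is an exact inverse, so reconstruction is lossless; in the actual system both mappings are learned networks and the match between the two stages is in distribution, not sample-exact.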

Contents

Annotation: Inner-GAN denotes the configuration in which the decoder of our VAE and the discriminators are used together as a GAN-based vocoder that takes a Mel-spectrogram as input. WaveGAN denotes the full VAE + GAN model, which reconstructs the input speech from the learned latent representation.


1. Single speaker (LJSpeech)

1.1 Reconstruction to waveform from speech representations

[Audio samples: Ground Truth | HiFi-GAN (Mel) | Inner-GAN (Mel) | WaveGAN]

1.2 End-to-end Speech Synthesis

[Audio samples: Ground Truth | Glow-TTS + HiFi-GAN (Mel) | Glow-TTS + Inner-GAN (Mel) | Glow-WaveGAN (Z)]

2. Multi-speaker (VCTK)

2.1 Reconstruction to waveform from speech representations

[Audio samples: Ground Truth | HiFi-GAN (Mel) | Inner-GAN (Mel) | WaveGAN]

2.2 End-to-end Speech Synthesis

[Audio samples: Ground Truth | Glow-TTS + HiFi-GAN (Mel) | Glow-TTS + Inner-GAN (Mel) | Glow-WaveGAN (Z)]

2.3 Synthesis for unseen speakers

[Audio samples: Ground Truth | Glow-TTS + HiFi-GAN (Mel) | Glow-TTS + Inner-GAN (Mel) | Glow-WaveGAN (Z)]

3. Additional comparison

3.1 Comparison with the official HiFi-GAN demos

We compare samples from our proposed Glow-WaveGAN with the demos from the official HiFi-GAN page (https://jik876.github.io/hifi-gan-demo/).

[Audio samples: Ground Truth | Tacotron2 + HiFi-GAN | Tacotron2 + HiFi-GAN (fine-tuned) | Glow-WaveGAN]

3.2 Comparison with the official Glow-TTS demos

We compare samples from our proposed Glow-WaveGAN with the demos from the official Glow-TTS page (https://jaywalnut310.github.io/glow-tts-demo/).

[Audio samples: Ground Truth | Glow-TTS + WaveGlow | Glow-WaveGAN]