chaanks committed on
Commit
40a07f1
1 Parent(s): dae100a

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -20,7 +20,7 @@ datasets:
 This repository provides all the necessary tools for using a [scalable HiFiGAN Unit](https://arxiv.org/abs/2406.10735) vocoder trained with [LibriTTS](https://www.openslr.org/141/).
 
 The pre-trained model takes as input discrete self-supervised representations and produces a waveform as output. This is suitable for a wide range of generative tasks such as speech enhancement, separation, text-to-speech, voice cloning, etc. Please read [DASB - Discrete Audio and Speech Benchmark](https://arxiv.org/abs/2406.14294) for more information.
-To generate the discrete self-supervised representations, we employ a K-means clustering model trained on `microsoft/wavlm-large` hidden layers, with k=1000.
+To generate the discrete self-supervised representations, we employ a K-means clustering model trained using `microsoft/wavlm-large` hidden layers, with k=1000.
 
 ## Install SpeechBrain
 
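For context (this is not part of the commit), a minimal sketch of how such a unit vocoder might be driven through SpeechBrain is shown below. The `UnitHIFIGAN` class, its `decode_unit` method, and the `source` repo id are assumptions made for illustration; the actual model card and SpeechBrain version should be consulted for the supported interface.

```python
# Minimal sketch (assumptions noted above): decode discrete WavLM units into a waveform.
import torch
from speechbrain.inference.vocoders import UnitHIFIGAN  # assumed class/module path

# Hypothetical repo id for the LibriTTS-trained unit vocoder.
hifi_gan = UnitHIFIGAN.from_hparams(
    source="speechbrain/hifigan-wavlm-k1000-LibriTTS",
    savedir="pretrained_models/unit-hifigan",
)

# Dummy batch of discrete units: indices in [0, 999], i.e. cluster ids from the
# k=1000 K-means model fitted on microsoft/wavlm-large hidden layers.
units = torch.randint(0, 1000, (1, 100))

# Decode the unit sequence into a waveform tensor (assumed method name).
waveform = hifi_gan.decode_unit(units)
print(waveform.shape)
```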