Update README.md
README.md
@@ -9,7 +9,7 @@ tags:

# Model Card for Model ID

[HuggingFace 🤗 - Repository](https://huggingface.co/Respair/RiFornet_Vocoder)

**DDP is very unstable; please use the single-GPU training script.** If you still want to use DDP, I suggest uncommenting the gradient-clipping lines; that should help a lot.
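The gradient-clipping lines mentioned above rescale the gradients whenever their global norm exceeds a threshold, which is what tames the diverging updates seen in DDP runs (in PyTorch this is typically done with `torch.nn.utils.clip_grad_norm_`; whether the script uses exactly that call is an assumption). A minimal pure-Python sketch of the underlying math:

```python
import math

def clip_by_global_norm(grads, max_norm):
    """Rescale a flat list of gradient values so that their global L2 norm
    is at most max_norm; returns the clipped values and the pre-clip norm."""
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads = [g * scale for g in grads]
    return grads, total_norm

clipped, norm_before = clip_by_global_norm([3.0, 4.0], max_norm=1.0)
# norm_before is 5.0; the clipped gradients now have global norm 1.0
```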
@@ -25,14 +25,14 @@ Huge Thanks to [Johnathan Duering](https://github.com/duerig) for his help. I mo

**NOTE**:

There are three checkpoints so far in this repository:

- RiFornet 24khz (trained for roughly 117K steps on LibriTTS (360 + 100) plus 40 hours of other English datasets.)
- RiFornet 44.1khz (trained for roughly 280K steps on a large (more than 1,100 hours) private multilingual dataset covering Arabic, Persian, Japanese, English, and Russian, plus singing voice in Chinese and Japanese and Quranic recitations in Arabic.)
- HiFTNet 44.1khz (trained for ~100K steps on a dataset similar to the RiFornet 44.1khz one, but slightly smaller and with no singing voice.)
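Each checkpoint is tied to a fixed sampling rate (24 kHz or 44.1 kHz), so audio at any other rate has to be resampled before it is fed through the matching model. The toy linear-interpolation resampler below only illustrates the length bookkeeping; a real pipeline should use a proper resampler (e.g. librosa or torchaudio):

```python
def resample_linear(samples, sr_in, sr_out):
    """Crude linear-interpolation resampler (illustration only)."""
    if sr_in == sr_out:
        return list(samples)
    n_out = int(len(samples) * sr_out / sr_in)
    out = []
    for i in range(n_out):
        # Map output index i back to a fractional position in the input.
        pos = i * (len(samples) - 1) / max(n_out - 1, 1)
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

x = [0.0, 1.0, 0.0, -1.0] * 100        # 400 samples of a toy 24 kHz signal
y = resample_linear(x, 24000, 44100)   # length scales by 44100 / 24000
```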
1. Python >= 3.10
2. Clone this repository:
```bash
git clone https://github.com/Respaired/RiFornet_Vocoder
cd RiFornet_Vocoder/Ringformer
```
3. Install python requirements:
@@ -46,4 +46,4 @@ CUDA_VISIBLE_DEVICES=0 python train_single_gpu.py --config config_v1.json --[arg

For the F0 model training, please refer to [yl4579/PitchExtractor](https://github.com/yl4579/PitchExtractor). This repo includes a pre-trained F0 model trained on a mixture of multilingual data for the previously mentioned configuration. To quote the HiFTNet author: "Still, you may want to train your own F0 model for the best performance, particularly for noisy or non-speech data, as we found that F0 estimation accuracy is essential for the vocoder performance."
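For a rough picture of what the F0 model estimates: the fundamental frequency is the dominant repetition rate of the waveform. The naive autocorrelation estimator below (an illustration only; a trained model such as PitchExtractor is far more robust on noisy or non-speech audio) recovers the pitch of a pure tone:

```python
import math

def estimate_f0(signal, sr, fmin=50.0, fmax=500.0):
    """Estimate the fundamental frequency of one frame by picking the
    lag with the highest autocorrelation (toy illustration)."""
    lo = int(sr / fmax)   # shortest plausible period, in samples
    hi = int(sr / fmin)   # longest plausible period, in samples
    best_lag, best_corr = lo, float("-inf")
    for lag in range(lo, min(hi, len(signal) - 1)):
        corr = sum(signal[i] * signal[i + lag] for i in range(len(signal) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sr / best_lag

sr = 8000
tone = [math.sin(2 * math.pi * 220.0 * n / sr) for n in range(1024)]
f0 = estimate_f0(tone, sr)  # close to 220 Hz
```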
## Inference

Please refer to the notebook [inference.ipynb](https://github.com/Respaired/RiFornet_Vocoder/blob/main/RingFormer/inference.ipynb) for details.