a.scherbin committed · Commit 1c1ad30 · Parent(s): 0ed3d55 · Fix description

README.md CHANGED
tags:
- SpeechEnhancement
---

# MP-SENet optimization on the VoiceBank+DEMAND dataset with ENOT-AutoDL

This repository contains the optimized version of the [MP-SENet](https://github.com/yxlu-0102/MP-SENet) model.
The number of multiplication and addition operations (MACs) was used to measure computational complexity, and the PESQ score was used as the quality metric.
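
For reference, the sketch below shows one way these two metrics can be measured. It is illustrative only: the `thop` and `pesq` packages, the stand-in module, and the placeholder input are assumptions and are not used by this repository; the numbers in the table below refer to the MP-SENet generator itself, not to this stand-in.

```python
# Illustrative sketch: `thop` (MAC counting) and `pesq` (PESQ scoring) are
# third-party packages, not part of this repository.
import torch
from thop import profile  # pip install thop

# MACs: count multiply-add operations for one forward pass. A tiny stand-in
# module and a placeholder input are used here so the snippet runs on its own;
# measuring the real model means profiling the MP-SENet generator with the
# spectrogram input shape used in the original repository.
stand_in = torch.nn.Conv1d(1, 8, kernel_size=3, padding=1)
dummy_input = torch.randn(1, 1, 16000)  # ~1 s of 16 kHz audio, placeholder
macs, params = profile(stand_in, inputs=(dummy_input,))
print(f"MACs: {macs / 1e9:.4f} B, params: {params / 1e6:.4f} M")

# PESQ: the `pesq` package (pip install pesq) wraps the ITU-T P.862 metric and
# is called on 16 kHz mono waveforms, for example:
#
#     from pesq import pesq
#     score = pesq(16000, clean_waveform, enhanced_waveform, "wb")
```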

## Optimization results

We use MACs as a latency measure because this metric is device-agnostic and implementation-agnostic.
It is also possible to optimize a model for target-device latency using the ENOT neural architecture selection algorithm.
Please keep in mind that acceleration by device latency differs from acceleration by MACs.

| **Model**      | **MACs** | **Acceleration (MACs)** | **PESQ score (higher is better)** |
|----------------|:--------:|:-----------------------:|:---------------------------------:|
| baseline       | 302.39 B |           1.0           |               3.381               |
| ENOT optimized | 120.95 B |           2.5           |               3.386               |

You can use `Baseline_model.pth` and `ENOT_optimized_model.pth` in the original repository by loading a model as the generator in the following way:
```python
import torch
generator = torch.load("ENOT_optimized_model.pth")
```

These two files contain model objects saved with `torch.save`, so they can only be loaded from the original repository root, because unpickling them requires the repository's imports.
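
As a minimal sketch of that constraint (the clone path and the CPU `map_location` are assumptions, not requirements of this repository), the original sources can also be put on the import path explicitly instead of running from the repository root:

```python
# Minimal loading sketch. The checkpoints store pickled model objects, so the
# original MP-SENet code must be importable when they are unpickled: either run
# this from the repository root or add a local clone to sys.path as below.
import sys

sys.path.append("/path/to/MP-SENet")  # placeholder path to your local clone

import torch

# Note: newer PyTorch releases (2.6+) default torch.load to weights_only=True,
# so loading a full pickled model may additionally need weights_only=False.
generator = torch.load("ENOT_optimized_model.pth", map_location="cpu")
generator.eval()  # switch to inference mode before enhancing audio
```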

If you want to book a demo, please contact us: [email protected].