Add the quantitative comparisons of BiRefNet_dynamic and previous BiRefNet models for general use.
README.md CHANGED
@@ -19,6 +19,19 @@ license: mit
 > An arbitrary shape adaptable BiRefNet for general segmentation.
 > This model was trained on arbitrary shapes (256x256 ~ 2304x2304) and shows great robustness on inputs with any shape.
 
+### Performance
+
+> How it looks when compared with BiRefNet-general (fixed 1024x1024 resolution): better than BiRefNet-general and BiRefNet_HR-general on the reserved validation sets (DIS-VD and TE-P3M-500-NP).
+> `dynamic_XXxXX` means this BiRefNet_dynamic model was evaluated at the corresponding input resolution.
+
+For the performance of different epochs, check the [eval_results-xxx folder](https://drive.google.com/drive/u/0/folders/1J79uL4xBaT3uct-tYtWZHKS2SoVE2cqu) on my google drive.
+
 <div align='center'>
 <a href='https://scholar.google.com/citations?user=TZRzWOsAAAAJ' target='_blank'><strong>Peng Zheng</strong></a><sup> 1,4,5,6</sup>,&nbsp;
 <a href='https://scholar.google.com/citations?user=0uPb8MMAAAAJ' target='_blank'><strong>Dehong Gao</strong></a><sup> 2</sup>,&nbsp;
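The added README text says the model was trained on arbitrary input shapes in the 256x256 ~ 2304x2304 range. As a minimal illustrative sketch (the helper below is hypothetical, not part of this repo), one way to scale an arbitrary input size into that training range before inference while preserving aspect ratio:

```python
# Hypothetical helper (not from the repo): rescale an arbitrary (w, h) so that
# both sides fall within the 256 ~ 2304 range the model card says it was trained on.
MIN_SIDE, MAX_SIDE = 256, 2304

def clamp_resolution(w: int, h: int) -> tuple:
    """Return (w, h) scaled so both sides lie in [MIN_SIDE, MAX_SIDE]."""
    scale = 1.0
    if max(w, h) > MAX_SIDE:
        scale = MAX_SIDE / max(w, h)   # shrink oversized inputs
    elif min(w, h) < MIN_SIDE:
        scale = MIN_SIDE / min(w, h)   # enlarge undersized inputs
    return round(w * scale), round(h * scale)

print(clamp_resolution(4000, 3000))  # -> (2304, 1728)
```

The actual evaluation resolutions used for the `dynamic_XXxXX` entries come from the repo's own test setup; this sketch only illustrates staying inside the stated training range.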