|
--- |
|
tags: |
|
- image-classification |
|
- timm |
|
- transformers |
|
- animetimm |
|
- dghs-imgutils |
|
library_name: timm |
|
license: gpl-3.0 |
|
datasets: |
|
- animetimm/danbooru-wdtagger-v4-w640-ws-full |
|
base_model: |
|
- timm/resnet101.tv_in1k |
|
--- |
|
|
|
# Anime Tagger resnet101.dbv4-full |
|
|
|
## Model Details |
|
|
|
- **Model Type:** Multi-label image classification / feature backbone
|
- **Model Stats:** |
|
- Params: 68.1M |
|
- FLOPs / MACs: 46.0G / 22.9G |
|
- Image size: train = 384 x 384, test = 384 x 384 |
|
- **Dataset:** [animetimm/danbooru-wdtagger-v4-w640-ws-full](https://huggingface.co/datasets/animetimm/danbooru-wdtagger-v4-w640-ws-full) |
|
- Tags Count: 12476 |
|
- General (#0) Tags Count: 9225 |
|
- Character (#4) Tags Count: 3247 |
|
- Rating (#9) Tags Count: 4 |
|
|
|
## Results |
|
|
|
| #          | Macro@0.40 (F1/MCC/P/R)       | Micro@0.40 (F1/MCC/P/R)       | Macro@Best (F1/P/R)   |
|:----------:|:-----------------------------:|:-----------------------------:|:---------------------:|
| Validation | 0.436 / 0.448 / 0.535 / 0.395 | 0.622 / 0.622 / 0.672 / 0.578 | ---                   |
| Test       | 0.437 / 0.448 / 0.535 / 0.396 | 0.622 / 0.623 / 0.672 / 0.579 | 0.481 / 0.509 / 0.482 |
|
|
|
* `Macro/Micro@0.40` means the metrics computed at the fixed threshold 0.40.

* `Macro@Best` means the macro metrics computed with per-tag thresholds, each chosen to maximize that tag's F1 score.
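
To make the distinction concrete, here is a minimal sketch of how micro and macro F1 are aggregated at a fixed 0.40 threshold versus per-tag thresholds. It uses synthetic random data and scikit-learn (not part of the install commands below); it is an illustration only and does not reflect this model's actual evaluation code.

```python
# Illustrative only: the aggregation behind Macro/Micro@0.40 and Macro@Best.
# y_true / y_prob are synthetic placeholders, not this model's outputs.
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(100, 12476))   # multi-hot ground-truth tags
y_prob = rng.random(size=(100, 12476))           # predicted probabilities

# Fixed threshold 0.40, as in Macro/Micro@0.40
y_pred = (y_prob >= 0.40).astype(int)
print('micro F1 @0.40:', f1_score(y_true, y_pred, average='micro', zero_division=0))
print('macro F1 @0.40:', f1_score(y_true, y_pred, average='macro', zero_division=0))

# Per-tag thresholds (one per column), as in Macro@Best
best_thr = rng.uniform(0.2, 0.6, size=y_prob.shape[1])
y_pred_best = (y_prob >= best_thr).astype(int)
print('macro F1 @per-tag:', f1_score(y_true, y_pred_best, average='macro', zero_division=0))
```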
|
|
|
## Thresholds |
|
|
|
| Category | Name      | Alpha | Threshold | Micro@Thr (F1/P/R)    | Macro@0.40 (F1/P/R)   | Macro@Best (F1/P/R)   |
|:--------:|:---------:|:-----:|:---------:|:---------------------:|:---------------------:|:---------------------:|
| 0        | general   | 1     | 0.33      | 0.612 / 0.619 / 0.605 | 0.305 / 0.421 / 0.262 | 0.357 / 0.374 / 0.374 |
| 4        | character | 1     | 0.49      | 0.845 / 0.906 / 0.791 | 0.812 / 0.858 / 0.777 | 0.833 / 0.893 / 0.789 |
| 9        | rating    | 1     | 0.4       | 0.800 / 0.755 / 0.851 | 0.805 / 0.778 / 0.837 | 0.806 / 0.771 / 0.848 |
|
|
|
* `Micro@Thr` means the micro metrics computed at the category-level suggested thresholds listed in the table above.

* `Macro@0.40` means the macro metrics computed at the fixed threshold 0.40.

* `Macro@Best` means the macro metrics computed with per-tag thresholds, each chosen to maximize that tag's F1 score.
|
|
|
The tag-level thresholds can be found in [selected_tags.csv](https://huggingface.co/animetimm/resnet101.dbv4-full/resolve/main/selected_tags.csv).
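
As a quick sanity check, you can inspect that file directly. The sketch below relies on the `name` and `best_threshold` columns used in the code sample further down, plus a `category` column (0 = general, 4 = character, 9 = rating) assumed to match the table above; the actual file layout may differ.

```python
# Rough sketch: inspect the per-tag thresholds shipped with the model.
# Assumes 'name', 'best_threshold' and 'category' columns (see note above).
import pandas as pd
from huggingface_hub import hf_hub_download

csv_path = hf_hub_download(
    repo_id='animetimm/resnet101.dbv4-full',
    repo_type='model',
    filename='selected_tags.csv',
)
df_tags = pd.read_csv(csv_path, keep_default_na=False)
print(df_tags[['name', 'best_threshold']].head())
# Distribution of per-tag best thresholds within each category (0/4/9)
print(df_tags.groupby('category')['best_threshold'].describe())
```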
|
|
|
## How to Use |
|
|
|
We provide a sample image for the code samples below; you can find it [here](https://huggingface.co/animetimm/resnet101.dbv4-full/blob/main/sample.webp).
|
|
|
### Use TIMM And Torch |
|
|
|
Install [dghs-imgutils](https://github.com/deepghs/imgutils), [timm](https://github.com/huggingface/pytorch-image-models), and the other necessary requirements with the following command:
|
|
|
```shell |
|
pip install 'dghs-imgutils>=0.17.0' torch huggingface_hub timm pillow pandas |
|
``` |
|
|
|
After that, you can load this model with the timm library and use it for training, validation, and testing with the following code:
|
|
|
```python |
|
import json |
|
|
|
import pandas as pd |
|
import torch |
|
from huggingface_hub import hf_hub_download |
|
from imgutils.data import load_image |
|
from imgutils.preprocess import create_torchvision_transforms |
|
from timm import create_model |
|
|
|
repo_id = 'animetimm/resnet101.dbv4-full' |
|
model = create_model(f'hf-hub:{repo_id}', pretrained=True) |
|
model.eval() |
|
|
|
with open(hf_hub_download(repo_id=repo_id, repo_type='model', filename='preprocess.json'), 'r') as f: |
|
preprocessor = create_torchvision_transforms(json.load(f)['test']) |
|
# Compose( |
|
# PadToSize(size=(512, 512), interpolation=bilinear, background_color=white) |
|
# Resize(size=384, interpolation=bilinear, max_size=None, antialias=True) |
|
# CenterCrop(size=[384, 384]) |
|
# MaybeToTensor() |
|
# Normalize(mean=tensor([0.4850, 0.4560, 0.4060]), std=tensor([0.2290, 0.2240, 0.2250])) |
|
# ) |
|
|
|
image = load_image('https://huggingface.co/animetimm/resnet101.dbv4-full/resolve/main/sample.webp') |
|
input_ = preprocessor(image).unsqueeze(0) |
|
# input_, shape: torch.Size([1, 3, 384, 384]), dtype: torch.float32 |
|
with torch.no_grad(): |
|
output = model(input_) |
|
prediction = torch.sigmoid(output)[0] |
|
# output, shape: torch.Size([1, 12476]), dtype: torch.float32 |
|
# prediction, shape: torch.Size([12476]), dtype: torch.float32 |
|
|
|
df_tags = pd.read_csv( |
|
hf_hub_download(repo_id=repo_id, repo_type='model', filename='selected_tags.csv'), |
|
keep_default_na=False |
|
) |
|
tags = df_tags['name'] |
|
mask = prediction.numpy() >= df_tags['best_threshold'] |
|
print(dict(zip(tags[mask].tolist(), prediction[mask].tolist()))) |
|
# {'general': 0.5100178718566895, |
|
# 'sensitive': 0.5034157037734985, |
|
# '1girl': 0.9962267875671387, |
|
# 'solo': 0.9669082760810852, |
|
# 'looking_at_viewer': 0.8127952814102173, |
|
# 'blush': 0.7912614941596985, |
|
# 'smile': 0.9032713770866394, |
|
# 'short_hair': 0.7837649583816528, |
|
# 'shirt': 0.5146411657333374, |
|
# 'long_sleeves': 0.7224600315093994, |
|
# 'brown_hair': 0.5260339379310608, |
|
# 'holding': 0.5752436518669128, |
|
# 'dress': 0.5642756223678589, |
|
# 'closed_mouth': 0.4826013743877411, |
|
# 'purple_eyes': 0.7590888142585754, |
|
# 'flower': 0.9180877208709717, |
|
# 'braid': 0.9453270435333252, |
|
# 'red_hair': 0.8512048721313477, |
|
# 'blunt_bangs': 0.5289319753646851, |
|
# 'bob_cut': 0.22592417895793915, |
|
# 'plant': 0.5463797450065613, |
|
# 'blue_flower': 0.6992892026901245, |
|
# 'crown_braid': 0.7925195097923279, |
|
# 'potted_plant': 0.5136846899986267, |
|
# 'flower_pot': 0.4357028007507324, |
|
# 'wiping_tears': 0.3059103488922119} |
|
``` |
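
If you want the output grouped by category, similar to what the ONNX helper below returns, a possible continuation of the snippet above is sketched here. It reuses `prediction` and `df_tags` from that snippet and assumes `selected_tags.csv` carries a `category` column with the 0/4/9 codes from the Thresholds table.

```python
# Sketch only: group thresholded tags by category (continues the snippet above).
# Assumes a 'category' column with 0 = general, 4 = character, 9 = rating.
category_names = {0: 'general', 4: 'character', 9: 'rating'}
grouped = {name: {} for name in category_names.values()}
scores = prediction.numpy()
for tag, category, threshold, score in zip(
        df_tags['name'], df_tags['category'], df_tags['best_threshold'], scores):
    if score >= threshold:
        grouped[category_names[int(category)]][tag] = float(score)

print(grouped['rating'])
print(grouped['character'])
print(sorted(grouped['general'].items(), key=lambda kv: -kv[1])[:5])
```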
|
### Use ONNX Model For Inference |
|
|
|
Install [dghs-imgutils](https://github.com/deepghs/imgutils) with the following command:
|
|
|
```shell |
|
pip install 'dghs-imgutils>=0.17.0' |
|
``` |
|
|
|
Use the `multilabel_timm_predict` function with the following code:
|
|
|
```python |
|
from imgutils.generic import multilabel_timm_predict |
|
|
|
general, character, rating = multilabel_timm_predict( |
|
'https://huggingface.co/animetimm/resnet101.dbv4-full/resolve/main/sample.webp', |
|
repo_id='animetimm/resnet101.dbv4-full', |
|
fmt=('general', 'character', 'rating'), |
|
) |
|
|
|
print(general) |
|
# {'1girl': 0.9962266683578491, |
|
# 'solo': 0.96690833568573, |
|
# 'braid': 0.9453268647193909, |
|
# 'flower': 0.9180880784988403, |
|
# 'smile': 0.9032710790634155, |
|
# 'red_hair': 0.8512046337127686, |
|
# 'looking_at_viewer': 0.8127949833869934, |
|
# 'crown_braid': 0.792519211769104, |
|
# 'blush': 0.7912609577178955, |
|
# 'short_hair': 0.7837648391723633, |
|
# 'purple_eyes': 0.7590886354446411, |
|
# 'long_sleeves': 0.7224597930908203, |
|
# 'blue_flower': 0.6992897391319275, |
|
# 'holding': 0.5752434134483337, |
|
# 'dress': 0.5642745494842529, |
|
# 'plant': 0.5463811755180359, |
|
# 'blunt_bangs': 0.5289315581321716, |
|
# 'brown_hair': 0.5260326862335205, |
|
# 'shirt': 0.5146413445472717, |
|
# 'potted_plant': 0.5136858820915222, |
|
# 'closed_mouth': 0.48260119557380676, |
|
# 'flower_pot': 0.4357031583786011, |
|
# 'wiping_tears': 0.30590835213661194, |
|
# 'bob_cut': 0.22592449188232422} |
|
print(character) |
|
# {} |
|
print(rating) |
|
# {'general': 0.5100165009498596, 'sensitive': 0.5034170150756836} |
|
``` |
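
To tag more than one image, you can simply call the function in a loop. The sketch below assumes `multilabel_timm_predict` also accepts local file paths (the usual convention for `imgutils` image inputs), and `images/` is a placeholder folder name.

```python
# Hypothetical batch-tagging loop; 'images/' is a placeholder directory, and
# acceptance of local file paths is an assumption based on imgutils conventions.
from pathlib import Path

from imgutils.generic import multilabel_timm_predict

for image_path in sorted(Path('images').glob('*.webp')):
    general, character, rating = multilabel_timm_predict(
        str(image_path),
        repo_id='animetimm/resnet101.dbv4-full',
        fmt=('general', 'character', 'rating'),
    )
    # Compact per-image summary
    print(image_path.name, rating, f'{len(general)} general tags, {len(character)} characters')
```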
|
|
|
For further information, see the [documentation of the `multilabel_timm_predict` function](https://dghs-imgutils.deepghs.org/main/api_doc/generic/multilabel_timm.html#multilabel-timm-predict).
|
|
|
|