import torch
import torchvision.transforms.v2 as T

image_size = 384
preprocessor = T.Compose(
    [
        # Resize so the longer edge matches image_size (aspect ratio preserved).
        T.Resize(
            size=None,
            max_size=image_size,
            interpolation=T.InterpolationMode.NEAREST,
        ),
        # Over-pad with black so both edges reach at least image_size.
        T.Pad(
            padding=image_size // 2,
            fill=0,  # black
        ),
        # Crop back to a square image_size x image_size canvas.
        T.CenterCrop(
            size=(image_size, image_size),
        ),
        T.ToDtype(dtype=torch.float32, scale=True),  # uint8 0-255 -> float32 0-1
    ]
)
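A minimal usage sketch, not from the original card: the file name is a placeholder, and T.ToImage() is used here because ToDtype scales tensors, not PIL images.

from PIL import Image

image = Image.open("sample.jpg").convert("RGB")  # placeholder path
tensor = T.ToImage()(image)  # PIL -> uint8 CHW tensor
pixel_values = preprocessor(tensor)  # float32, shape (3, 384, 384), values in [0, 1]
batch = pixel_values.unsqueeze(0)  # add a batch dimension before feeding the model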
Base model: timm/vit_base_patch16_siglip_384.v2_webli