import torch
import torchvision.transforms.v2 as T

image_size = 384
preprocessor = T.Compose(
    [
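        # Resize so the longer edge matches image_size, preserving aspect ratio.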
        T.Resize(
            size=None,
            max_size=image_size,
            interpolation=T.InterpolationMode.NEAREST,
        ),
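        # Pad every side with black so both dimensions reach at least image_size.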
        T.Pad(
            padding=image_size // 2,
            fill=0,  # black
        ),
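        # Crop the padded image to a square of image_size x image_size.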
        T.CenterCrop(
            size=(image_size, image_size),
        ),
        T.ToDtype(dtype=torch.float32, scale=True), # 0~255 -> 0~1
    ]
)
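For reference, a minimal usage sketch (the file name is a placeholder, and the image is converted to a uint8 tensor first, since ToDtype(scale=True) only rescales tensors):

from PIL import Image
import torchvision.transforms.v2.functional as TF

# Placeholder path; replace with your own image.
image = Image.open("example.jpg").convert("RGB")

# Convert the PIL image to a uint8 tensor so ToDtype(scale=True) rescales it to [0, 1].
tensor = TF.pil_to_tensor(image)  # shape (3, H, W), dtype uint8

pixel_values = preprocessor(tensor)
print(pixel_values.shape, pixel_values.dtype)  # torch.Size([3, 384, 384]) torch.float32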