LEAF-CLIP/OpenCLIP-ViT-bigG-rho50-k1

Model initialized from laion/CLIP-ViT-bigG-14-laion2B-39B-b160k. The text encoder is fine-tuned with LEAF using $k=1$ and $\rho=50$.

To load this model, use:

```python
from transformers import CLIPProcessor, CLIPModel

model_name = "LEAF-CLIP/OpenCLIP-ViT-bigG-rho50-k1"
processor_name = "laion/CLIP-ViT-bigG-14-laion2B-39B-b160k"

# The processor is unchanged by fine-tuning, so it is loaded
# from the original base checkpoint.
model = CLIPModel.from_pretrained(model_name)
processor = CLIPProcessor.from_pretrained(processor_name)
```
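
Once loaded, the model can be used like any other `CLIPModel` for zero-shot image-text matching. Below is a minimal sketch; the image URL and candidate captions are placeholders for illustration, not part of the original card:

```python
import requests
import torch
from PIL import Image

# Placeholder image and captions.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# Softmax over the candidate captions gives per-caption probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```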
Model size: 2.54B parameters (F32, Safetensors).
