This model was pushed to the Hub using the `PyTorchModelHubMixin` integration.

## How to use

```shell
pip install git+https://github.com/Arabic-Clip/Araclip.git
```
```python
import numpy as np
from PIL import Image

from araclip import AraClip

# load model
model = AraClip.from_pretrained("Arabic-Clip/araclip")

# data: "a sitting cat", "a jumping cat", "a dog", "a horse"
labels = ["ู‚ุทุฉ ุฌุงู„ุณุฉ", "ู‚ุทุฉ ุชู‚ูุฒ", "ูƒู„ุจ", "ุญุตุงู†"]
image = Image.open("cat.png")

# embed the image and each candidate label
image_features = model.embed(image=image)
text_features = np.stack([model.embed(text=label) for label in labels])

# pick the label whose embedding is most similar to the image embedding
similarities = text_features @ image_features
best_match = labels[np.argmax(similarities)]

print(f"The image is most similar to: {best_match}")
# ู‚ุทุฉ ุฌุงู„ุณุฉ ("a sitting cat")
```
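If you want a score per label rather than just the top match, the raw similarities can be normalized and passed through a softmax, as is common for CLIP-style zero-shot classification. A minimal sketch with random stand-in embeddings (the 512-dimensional vectors are an assumption for illustration; in practice use the outputs of `model.embed` shown above):

```python
import numpy as np

# Hypothetical embeddings standing in for model.embed() outputs:
# one image vector and one vector per candidate label.
rng = np.random.default_rng(0)
image_features = rng.normal(size=512)
text_features = rng.normal(size=(4, 512))

# L2-normalize so the dot product equals cosine similarity.
image_features /= np.linalg.norm(image_features)
text_features /= np.linalg.norm(text_features, axis=1, keepdims=True)

similarities = text_features @ image_features  # shape (4,)

# Softmax turns the similarities into a probability per label.
probs = np.exp(similarities) / np.exp(similarities).sum()
print(probs.round(3), probs.argmax())
```

`probs.argmax()` then indexes the most likely label, and the probabilities give a rough sense of how confident the match is.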

**Model size:** 340M parameters (F32, Safetensors)