---
library_name: araclip
tags:
- clip
- model_hub_mixin
- pytorch_model_hub_mixin
---

This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:

- Library: https://github.com/Arabic-Clip/Araclip.git

## How to use

```
pip install git+https://github.com/Arabic-Clip/Araclip.git
```

```python
# load the model
import numpy as np
from PIL import Image
from araclip import AraClip

model = AraClip.from_pretrained("Arabic-Clip/araclip")

# data: Arabic captions ("a sitting cat", "a jumping cat", "a dog", "a horse") and an image
labels = ["قطة جالسة", "قطة تقفز", "كلب", "حصان"]
image = Image.open("cat.png")

# embed the image and each caption
image_features = model.embed(image=image)
text_features = np.stack([model.embed(text=label) for label in labels])

# find the caption most similar to the image
similarities = text_features @ image_features
best_match = labels[np.argmax(similarities)]

print(f"The image is most similar to: {best_match}")
# قطة جالسة ("a sitting cat")
```

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6527e89a8808d80ccff88b7a/d5i4ItET9AZN9xgv8ify5.png)
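
## Image retrieval from a text query

The same embeddings work in the other direction: ranking a set of images against an Arabic text query. The sketch below is a minimal example that reuses only the `AraClip.from_pretrained` and `model.embed` calls shown above; the image file names are placeholders, not files shipped with this repository.

```python
# a minimal sketch: retrieve the image that best matches an Arabic text query.
# Reuses the AraClip API from the example above; the file names below are placeholders.
import numpy as np
from PIL import Image
from araclip import AraClip

model = AraClip.from_pretrained("Arabic-Clip/araclip")

query = "قطة جالسة"  # "a sitting cat"
image_paths = ["cat.png", "dog.png", "horse.png"]  # hypothetical local files

# embed the query once and every candidate image
text_features = model.embed(text=query)
image_features = np.stack([model.embed(image=Image.open(p)) for p in image_paths])

# rank images by similarity to the query and pick the best one
similarities = image_features @ text_features
best_image = image_paths[int(np.argmax(similarities))]

print(f"Best match for the query: {best_image}")
```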