Model Summary

NLLB-CLIP is a model that combines the text encoder from the NLLB model with the image encoder from standard CLIP. This extends the model's capabilities to the 201 languages of the Flores-200 dataset. NLLB-CLIP achieves state-of-the-art results on the Crossmodal-3600 dataset, performing especially well on low-resource languages. You can find more details about the model in the paper.

How to use

The model repo contains the model code files that allow using NLLB-CLIP like any other model from the hub. The interface is also compatible with CLIP models. Example code is below:

from transformers import AutoTokenizer, CLIPProcessor
import requests
from PIL import Image

from modeling_nllb_clip import NLLBCLIPModel # local file from the repo

# image processor from standard CLIP; the tokenizer below comes from NLLB
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32").image_processor
tokenizer = AutoTokenizer.from_pretrained(
    "facebook/nllb-200-distilled-600M"
)
image_path = "https://huggingface.co/spaces/jjourney1125/swin2sr/resolve/main/samples/butterfly.jpg"
image = Image.open(requests.get(image_path, stream=True).raw)
image_inputs = processor(images=image, return_tensors="pt")
text_inputs = tokenizer(
    ["cat", "dog", "butterfly"],
    padding="longest",
    return_tensors="pt",
)

hf_model = NLLBCLIPModel.from_pretrained("visheratin/nllb-clip-base")

outputs = hf_model(
    input_ids=text_inputs.input_ids,
    attention_mask=text_inputs.attention_mask,
    pixel_values=image_inputs.pixel_values,
)
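
Since the interface is CLIP-compatible, the returned outputs should expose CLIP-style fields such as logits_per_image. The sketch below is an assumption based on that compatibility rather than part of the original example: it turns the logits into per-image probabilities and shows how the NLLB tokenizer's src_lang argument can be used to tokenize labels in another language.

# Assumes NLLBCLIPModel returns CLIP-style outputs with logits_per_image
# of shape (num_images, num_texts).
probs = outputs.logits_per_image.softmax(dim=1)
print(dict(zip(["cat", "dog", "butterfly"], probs[0].tolist())))

# The NLLB tokenizer is multilingual; src_lang selects the language code
# (German labels here, purely as an illustration).
tokenizer_de = AutoTokenizer.from_pretrained(
    "facebook/nllb-200-distilled-600M", src_lang="deu_Latn"
)
text_inputs_de = tokenizer_de(
    ["Katze", "Hund", "Schmetterling"],
    padding="longest",
    return_tensors="pt",
)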

Acknowledgements

I thank Lambda Cloud for providing compute resources to train the model.
