CulturalPangea-7B Model Card

Grounding Multilingual Multimodal LLMs With Cultural Knowledge

🌍 🇩🇪 🇫🇷 🇬🇧 🇪🇸 🇮🇹 🇵🇱 🇷🇺 🇨🇿 🇯🇵 🇺🇦 🇧🇷 🇮🇳 🇨🇳 🇳🇴 🇵🇹 🇮🇩 🇮🇱 🇹🇷 🇬🇷 🇷🇴 🇮🇷 🇹🇼 🇲🇽 🇮🇪 🇰🇷 🇧🇬 🇹🇭 🇳🇱 🇪🇬 🇵🇰 🇳🇬 🇮🇩 🇻🇳 🇲🇾 🇸🇦 🇮🇩 🇧🇩 🇸🇬 🇱🇰 🇰🇪 🇲🇳 🇪🇹 🇹🇿 🇷🇼

๐Ÿ  Homepage | ๐Ÿค– CulturalPangea-7B | ๐Ÿ“Š CulturalGround | ๐Ÿ’ป Github | ๐Ÿ“„ Arxiv

[IMAGE]

Model Details

  • Model: CulturalPangea-7B is an open-source Multilingual Multimodal LLM fine-tuned to interpret and reason about long-tail cultural entities and concepts. It is designed to bridge the cultural gap often present in MLLMs.
  • Date: CulturalPangea-7B was trained in 2025.
  • Training Dataset: The model was fine-tuned on the CulturalGround dataset, using 14 million open-ended and 6 million multiple-choice culturally grounded VQA pairs sampled from the full pool of 30M examples (22M open-ended, 8M multiple-choice). This was interleaved with a substantial portion of the original Pangea instruction data to maintain general abilities.
  • Architecture: CulturalPangea-7B is a fine-tuned version of Pangea-7B. It uses a frozen CLIP-ViT vision encoder with a Qwen2-7B-Instruct LLM backbone. During training, only the connector and the language model were fine-tuned, as sketched below.
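
To illustrate the frozen-encoder setup, here is a minimal sketch of how one might freeze the vision tower while leaving the connector and LLM trainable. The parameter-name matching ("vision_tower", "mm_projector") is an assumption based on LLaVA-style conventions, not the exact training script used for this release.

# Hypothetical sketch: freeze the vision encoder, train connector + LLM.
# Parameter names follow LLaVA-style conventions and may differ in practice.
for name, param in model.named_parameters():
    if "vision_tower" in name:
        param.requires_grad = False  # CLIP-ViT encoder stays frozen
    else:
        param.requires_grad = True   # connector (mm_projector) and Qwen2 LLM are fine-tuned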

Uses

CulturalPangea-7B follows the same architecture and usage patterns as LLaVA-NeXT and Pangea-7B.

Direct Use

First, you need to clone and install the LLaVA-NeXT repository.

git clone https://github.com/LLaVA-VL/LLaVA-NeXT
cd LLaVA-NeXT
pip install -e ".[train]"

Then, you can load CulturalPangea-7B using the following code:

from llava.model.builder import load_pretrained_model
model_path = 'neulab/CulturalPangea-7B'
model_name = 'CulturalPangea-7B-qwen'
args = {"multimodal": True}
tokenizer, model, image_processor, context_len = load_pretrained_model(model_path, None, model_name, **args)
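
Note that the helper functions defined next move image tensors and token IDs to the GPU with .cuda() and cast images to half precision, so a CUDA-capable device is assumed.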

Defining helper functions for model inference:

import re
import torch
import transformers
from typing import Dict
from PIL import Image
from llava.constants import IGNORE_INDEX, IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN
from llava.utils import disable_torch_init
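# Build Qwen2 ChatML-style input_ids from a chat-format source, splicing IMAGE_TOKEN_INDEX wherever <image> appears.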
def preprocess_qwen(sources, tokenizer: transformers.PreTrainedTokenizer, has_image: bool = False, max_len=2048, system_message: str = "You are a helpful assistant.") -> Dict:
    roles = {"human": "<|im_start|>user", "gpt": "<|im_start|>assistant"}
    im_start, im_end = tokenizer.additional_special_tokens_ids
    nl_tokens = tokenizer("\n").input_ids
    _system = tokenizer("system").input_ids + nl_tokens
    _user = tokenizer("user").input_ids + nl_tokens
    _assistant = tokenizer("assistant").input_ids + nl_tokens
    input_ids = []
    source = sources
    if roles[source[0]["from"]] != roles["human"]: source = source[1:]
    input_id, target = [], []
    system = [im_start] + _system + tokenizer(system_message).input_ids + [im_end] + nl_tokens
    input_id += system
    target += [im_start] + [IGNORE_INDEX] * (len(system) - 3) + [im_end] + nl_tokens
    assert len(input_id) == len(target)
    for j, sentence in enumerate(source):
        role = roles[sentence["from"]]
        if has_image and sentence["value"] is not None and "<image>" in sentence["value"]:
            num_image = len(re.findall(DEFAULT_IMAGE_TOKEN, sentence["value"]))
            texts = sentence["value"].split('<image>')
            _input_id = tokenizer(role).input_ids + nl_tokens 
            for i,text in enumerate(texts):
                _input_id += tokenizer(text).input_ids 
                if i<len(texts)-1: _input_id += [IMAGE_TOKEN_INDEX] + nl_tokens
            _input_id += [im_end] + nl_tokens
            assert sum([i==IMAGE_TOKEN_INDEX for i in _input_id])==num_image
        else:
            if sentence["value"] is None: _input_id = tokenizer(role).input_ids + nl_tokens
            else: _input_id = tokenizer(role).input_ids + nl_tokens + tokenizer(sentence["value"]).input_ids + [im_end] + nl_tokens
        input_id += _input_id
    input_ids.append(input_id)
    return torch.tensor(input_ids, dtype=torch.long)
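# Generate a response for a text prompt and a single image using the loaded model.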
def generate_output(prompt, image=None, do_sample=False, temperature=0, top_p=0.5, num_beams=1, max_new_tokens=1024):
    image_tensors = []
    prompt = "<image>\n" + prompt
    # image can be a path to a local file or a PIL image
    if isinstance(image, str):
        image = Image.open(image)
    image_tensor = image_processor.preprocess(image, return_tensors='pt')['pixel_values']
    image_tensors.append(image_tensor.half().cuda())
    input_ids = preprocess_qwen([{'from': 'human', 'value': prompt},{'from': 'gpt','value': None}], tokenizer, has_image=True).cuda()
    with torch.inference_mode():
        output_ids = model.generate(
            input_ids,
            images=image_tensors,
            do_sample=do_sample,
            temperature=temperature,
            top_p=top_p,
            num_beams=num_beams,
            max_new_tokens=max_new_tokens,
            use_cache=True
        )
    outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
    outputs = outputs.strip()
    return outputs
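
Conceptually, the single-turn prompt that preprocess_qwen tokenizes (with <image> replaced by IMAGE_TOKEN_INDEX) corresponds to:

<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
<image>
{prompt}<|im_end|>
<|im_start|>assistant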

An example of multimodal inference:

prompt = "What cultural significance does the landmark in the image hold?"
image = "image.png"
print(generate_output(prompt, image=image))
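
Since generate_output also accepts an in-memory PIL image instead of a file path, you can pass one directly:

from PIL import Image
image = Image.open("image.png").convert("RGB")
print(generate_output("What cultural significance does the landmark in the image hold?", image=image))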

Citing the Model

If you use CulturalPangea or the CulturalGround dataset, please cite our work:

@preprint{nyandwi2025grounding,
  title={Grounding Multilingual Multimodal LLMs With Cultural Knowledge},
  author={Nyandwi, Jean de Dieu and Song, Yueqi and Khanuja, Simran and Neubig, Graham},
  year={2025}
}