MLCD-ViT-bigG Model Card

MLCD-ViT-bigG is a large vision transformer (~1.84B parameters) enhanced with 2D Rotary Position Embedding (RoPE2D). Developed by DeepGlint AI, it delivers strong results on document understanding and visual question answering benchmarks when used as the vision tower of a vision-language model.
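
RoPE2D encodes each patch's row and column coordinates by rotating feature pairs inside the attention computation instead of relying only on a 1D token index. The snippet below is a simplified, illustrative sketch of that idea, not the repository's implementation; the function name `rope_2d`, the feature layout, and the frequency schedule are assumptions chosen for clarity.

```python
import torch

def rope_2d(x: torch.Tensor, grid_h: int, grid_w: int, base: float = 10000.0) -> torch.Tensor:
    """Illustrative 2D rotary embedding for a (num_patches, dim) tensor,
    where num_patches == grid_h * grid_w and dim is divisible by 4."""
    n, dim = x.shape
    assert n == grid_h * grid_w and dim % 4 == 0
    half = dim // 2  # first half rotated by row index, second half by column index

    # Rotation frequencies shared by both axes (dim // 4 distinct rates).
    freqs = 1.0 / (base ** (torch.arange(0, half, 2, dtype=torch.float32) / half))

    rows = torch.arange(grid_h).repeat_interleave(grid_w).float()  # row index of each patch
    cols = torch.arange(grid_w).repeat(grid_h).float()             # column index of each patch

    def rotate(v: torch.Tensor, pos: torch.Tensor) -> torch.Tensor:
        # Rotate consecutive feature pairs (v[2i], v[2i+1]) by angle pos * freqs[i].
        angles = pos[:, None] * freqs[None, :]      # (n, half // 2)
        cos, sin = angles.cos(), angles.sin()
        v1, v2 = v[:, 0::2], v[:, 1::2]
        out = torch.empty_like(v)
        out[:, 0::2] = v1 * cos - v2 * sin
        out[:, 1::2] = v1 * sin + v2 * cos
        return out

    return torch.cat([rotate(x[:, :half], rows), rotate(x[:, half:], cols)], dim=-1)

# Example: rotate a 24x24 grid of 64-dim query/key vectors.
rotated = rope_2d(torch.randn(24 * 24, 64), grid_h=24, grid_w=24)
print(rotated.shape)  # torch.Size([576, 64])
```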

To evaluate the foundational vision models, we used the official LLaVA-NeXT framework together with its official training dataset, LLaVA-NeXT-Data.

| Vision Tower | RoPE2D | ChartQA | DocVQA | InfoVQA | OCRBench | MMMU |
| --- | :---: | :---: | :---: | :---: | :---: | :---: |
| CLIP (ViT-L-14-336px) | × | 66.52 | 75.21 | 38.88 | 525.00 | 44.20 |
| SigLIP (ViT-SO400M-384px) | × | 69.28 | 76.71 | 41.38 | 554.00 | 46.78 |
| DFN5B (ViT-H-14-378px) | × | 64.36 | 70.87 | 38.59 | 473.00 | 48.00 |
| MLCD (ViT-L-14-336px) | × | 67.84 | 76.46 | 43.48 | 531.00 | 44.30 |
| MLCD (ViT-bigG-14-336px) | √ | 71.07 | 79.63 | 44.38 | 572.00 | 46.78 |

Installation

```bash
pip install torch transformers
git clone https://github.com/deepglint/unicom
cd unicom/mlcd
```

Usage

```python
from vit_rope2d_hf import MLCDVisionModel
from transformers import CLIPImageProcessor
from PIL import Image
import requests
import torch

# Load model and processor
model = MLCDVisionModel.from_pretrained("DeepGlint-AI/mlcd-vit-bigG-patch14-336")
processor = CLIPImageProcessor.from_pretrained("DeepGlint-AI/mlcd-vit-bigG-patch14-336")

# Process single image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

# Get visual features
with torch.no_grad():
    outputs = model(**inputs)
features = outputs.last_hidden_state

print(f"Extracted features shape: {features.shape}")
# Extracted features shape: torch.Size([1, 577, 1664])
```
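
The 577 output tokens correspond to one class token plus a 24×24 grid of patch tokens for a 336px input with patch size 14. If a single image-level embedding is needed, a common follow-up (sketched below under the assumption that index 0 is the class token, as in CLIP-style encoders; verify against the checkpoint before relying on it) is to take the class token or mean-pool the patch tokens:

```python
# Assumption: token 0 is the class token and tokens 1..576 form the 24x24 patch grid.
cls_embedding = features[:, 0]                        # (1, 1664) class-token embedding
patch_grid = features[:, 1:].reshape(1, 24, 24, -1)   # spatial feature map
mean_embedding = features[:, 1:].mean(dim=1)          # (1, 1664) mean-pooled alternative

print(cls_embedding.shape, patch_grid.shape, mean_embedding.shape)
```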