Yang (Kaichengalex)
AI & ML interests
None yet
Recent Activity
- Upvoted a paper (1 day ago): ProCLIP: Progressive Vision-Language Alignment via LLM-based Embedder
- Commented on a paper (1 day ago): ProCLIP: Progressive Vision-Language Alignment via LLM-based Embedder
- Liked a dataset (2 days ago): OpenGVLab/AS-V2
Organizations
UniME
Collections
UniME-V2
- UniME-V2: MLLM-as-a-Judge for Universal Multimodal Embedding Learning
  Paper • 2510.13515 • Published • 11
- TianchengGu/UniME-V2-LLaVA-OneVision-8B
  Image-Text-to-Text • 8B • Updated • 13 • 2
- TianchengGu/UniME-V2-Qwen2VL-7B
  Image-Text-to-Text • 8B • Updated • 22 • 2
- TianchengGu/UniME-V2-Qwen2VL-2B
  Image-Text-to-Text • 2B • Updated • 44 • 2
RWKV-CLIP
UniME
- DeepGlint-AI/UniME-Phi3.5-V-4.2B
  Image-Text-to-Text • Updated • 122 • 7
- DeepGlint-AI/UniME-LLaVA-OneVision-7B
  Image-Text-to-Text • 8B • Updated • 206 • 3
- DeepGlint-AI/UniME-LLaVA-1.6-7B
  Image-Text-to-Text • 8B • Updated • 233 • 5
- Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs
  Paper • 2504.17432 • Published • 39
Web-Person Dataset
RealSyn Dataset
- Kaichengalex/RealSyn100M
  Viewer • Updated • 89.6M • 796 • 15
- Kaichengalex/RealSyn15M
  Viewer • Updated • 13.5M • 102 • 3
- Kaichengalex/RealSyn30M
  Viewer • Updated • 27M • 67 • 4
- RealSyn: An Effective and Scalable Multimodal Interleaved Document Transformation Paradigm
  Paper • 2502.12513 • Published • 16
Vision-Language Dataset
SFT Dataset
MLLM4Embedding
- GME: Improving Universal Multimodal Retrieval by Multimodal LLMs
  Paper • 2412.16855 • Published • 5
- VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks
  Paper • 2410.05160 • Published • 4
- VLM2Vec-V2: Advancing Multimodal Embedding for Videos, Images, and Visual Documents
  Paper • 2507.04590 • Published • 16