- mvp-lab/LLaVA-OneVision-1.5-Instruct-Data
  Dataset • Viewer • Updated • 21.5M rows • 273k downloads • 54 likes
- mvp-lab/LLaVA-OneVision-1.5-Mid-Training-85M
  Dataset • Viewer • Updated • 86.8M rows • 278k downloads • 46 likes
- lmms-lab/LLaVA-OneVision-1.5-8B-Instruct
  Model • Image-Text-to-Text • 9B params • Updated • 7.1k downloads • 48 likes
- lmms-lab/LLaVA-OneVision-1.5-4B-Instruct
  Model • Image-Text-to-Text • 5B params • Updated • 4.16k downloads • 11 likes
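The datasets and checkpoints above are hosted as standard Hub repositories, so they can be pulled with the usual Hub tooling. A minimal sketch, assuming the `datasets` and `huggingface_hub` libraries are installed and that the instruct data exposes a default `train` split (both assumptions, not confirmed by the listing):

```python
from datasets import load_dataset
from huggingface_hub import snapshot_download

# Stream the instruct-tuning data instead of downloading all ~21.5M rows.
# The "train" split name is an assumption; check the dataset card for the
# actual configs and splits.
instruct_data = load_dataset(
    "mvp-lab/LLaVA-OneVision-1.5-Instruct-Data",
    split="train",
    streaming=True,
)
print(next(iter(instruct_data)))

# Download the 8B instruct checkpoint locally for use with your own
# inference stack.
model_dir = snapshot_download("lmms-lab/LLaVA-OneVision-1.5-8B-Instruct")
print(model_dir)
```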
AI & ML interests
multi-modal foundation models
Models: none public yet