MLLM-Safety-Study
Collection
This is the collection for the CVPR 2025 paper "Do We Really Need Curated Malicious Data for Safety Alignment in Multi-modal Large Language Models?"
3 items
Base model: liuhaotian/llava-v1.5-7b
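
For context, here is a minimal sketch of running inference with the base model. It assumes the transformers-compatible conversion llava-hf/llava-1.5-7b-hf (a community port of liuhaotian/llava-v1.5-7b, which ships in the original LLaVA repository format) and uses a placeholder image URL; neither is part of this collection.

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

# Assumption: transformers-compatible port of the base model.
model_id = "llava-hf/llava-1.5-7b-hf"

processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a single GPU
    device_map="auto",          # requires the `accelerate` package
)

# Example image from the COCO validation set (placeholder input).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# LLaVA-1.5 chat format: the <image> token marks where visual features go.
prompt = "USER: <image>\nWhat is shown in this image?\nASSISTANT:"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(
    model.device, torch.float16
)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```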