MLLM-Safety-Study Collection

This is the collection for the CVPR 2025 paper: "Do we really need curated malicious data for safety alignment in multi-modal large language models?"