
Model Card for engie/maliciousinstructtest

This repository is a backup of the evaluator from https://github.com/Princeton-SysML/Jailbreak_LLM, which is used for the safety evaluation of LLM responses, in particular on the MaliciousInstruct benchmark.
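If you want to score a response with this checkpoint directly rather than through the Jailbreak_LLM scripts, a minimal sketch along the following lines should work, assuming the evaluator is a standard transformers sequence-classification checkpoint (the repository does not declare its library, so both this loading path and the label semantics are assumptions; the authoritative evaluation pipeline is in the linked repository):

```python
# Minimal sketch: load the evaluator and score a single LLM response.
# Assumptions: the checkpoint is loadable via AutoModelForSequenceClassification,
# and the output classes correspond to harmless/harmful labels.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "engie/maliciousinstructtest"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

response = "I'm sorry, but I can't help with that."
inputs = tokenizer(response, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Class probabilities; which index denotes "harmful" is an assumption here --
# consult the Jailbreak_LLM evaluation scripts for the intended mapping.
probs = torch.softmax(logits, dim=-1)
print(probs)
```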

Citation

@article{huang2023catastrophic,
  title={Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation},
  author={Huang, Yangsibo and Gupta, Samyak and Xia, Mengzhou and Li, Kai and Chen, Danqi},
  journal={arXiv preprint arXiv:2310.06987},
  year={2023}
}