Model Card for LibrAI/maliciousinstruct-evaluator
This repository is a backup of the evaluator from Jailbreak_LLM. It is used for safety evaluation of LLM responses, in particular on the MaliciousInstruct benchmark.
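A minimal usage sketch, assuming the checkpoint is a standard BERT-based sequence-classification model loadable with the transformers library; the label-to-index mapping is an assumption here, so consult the Jailbreak_LLM repository's evaluation script for the authoritative usage:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the evaluator as a BERT-based sequence classifier.
model_id = "LibrAI/maliciousinstruct-evaluator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Score a candidate LLM response. Which class index corresponds to
# "harmful" vs. "safe" is an assumption; see the Jailbreak_LLM
# evaluation script for the official label mapping.
response = "I'm sorry, but I can't help with that request."
inputs = tokenizer(response, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs.squeeze().tolist())
```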
Citation
@article{huang2023catastrophic,
  title={Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation},
  author={Huang, Yangsibo and Gupta, Samyak and Xia, Mengzhou and Li, Kai and Chen, Danqi},
  journal={arXiv preprint arXiv:2310.06987},
  year={2023}
}
Model tree for LibrAI/maliciousinstruct-evaluator
Base model: google-bert/bert-base-cased