Malicious Code Test Model

⚠️ Security Warning

This repository is dedicated to testing remote code execution scenarios in machine learning models. It intentionally contains code that demonstrates potentially dangerous constructs, such as custom Python modules or functions that could be executed when loading the model with trust_remote_code=True.

Do NOT use this model in production or on machines with sensitive data. This repository is strictly for research and testing purposes.

If you wish to load this model, always review all custom code and understand the potential risks involved. Proceed only if you fully trust the code and the environment.
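
One way to review the custom code before enabling remote code execution is to download the repository files and read them without running anything. The sketch below is a minimal example that assumes you have the huggingface_hub package installed (it is not part of this repository); it fetches the repo and prints the bundled Python files for manual inspection.

from huggingface_hub import snapshot_download
from pathlib import Path

# Download the repository contents without executing any of its code,
# then print each custom Python file so it can be reviewed by hand.
local_dir = snapshot_download("ryomo/malicious-code-test")
for py_file in Path(local_dir).glob("*.py"):
    print(f"--- {py_file.name} ---")
    print(py_file.read_text())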

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
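# NOTE: trust_remote_code=True executes the custom modeling code shipped in this repository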
model = AutoModelForCausalLM.from_pretrained("ryomo/malicious-code-test", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ryomo/malicious-code-test")

# Generate text
prompt = "This is a test of the malicious code model."
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=20, do_sample=True, temperature=0.7)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
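
If you do load the model with trust_remote_code=True, you can pin the exact commit you reviewed using the revision parameter of from_pretrained, so that a later push to the repository cannot swap in different code. The commit hash below is a placeholder, not a real revision of this repository.

# Pin the commit you audited; "<commit-hash-you-reviewed>" is a placeholder.
model = AutoModelForCausalLM.from_pretrained(
    "ryomo/malicious-code-test",
    trust_remote_code=True,
    revision="<commit-hash-you-reviewed>",
)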

License

This project is open source and available under the MIT License.

Model details

Safetensors, 9.38M parameters, F32 tensors