---
license: mit
datasets:
- trendmicro-ailab/Primus-Reasoning
- trendmicro-ailab/Primus-Seed
- trendmicro-ailab/Primus-FineWeb
- trendmicro-ailab/Primus-Instruct
language:
- en
base_model:
- trendmicro-ailab/Llama-Primus-Merged
pipeline_tag: text-generation
library_name: transformers
tags:
- cybersecurity
- pretraining
extra_gated_fields:
  Affiliation: text
  Country: country
  I want to use this model for:
    type: select
    options:
      - Research
      - Commercial
      - label: Other
        value: other
  Job title:
    type: select
    options:
      - Student
      - Research graduate
      - AI researcher
      - AI developer/engineer
      - Cybersecurity researcher
      - Reporter
      - Other
  geo: ip_location
---

# Primus: A Pioneering Collection of Open-Source Datasets for Cybersecurity LLM Training

**First cybersecurity reasoning model!**

**TL;DR**: _Llama-Primus-Reasoning_ is a reasoning model based on _Llama-Primus-Merged_ and distilled from reasoning-with-reflection traces generated by o1-preview and DeepSeek-R1 on cybersecurity tasks (_Primus-Reasoning_). It demonstrates a ↑15.8% improvement on the CISSP security-certification benchmark.

🔥 For more details, please refer to the paper: [📄Paper].

📢 **News (2025/06/02)**: We have expanded the _Primus-Reasoning_ dataset with additional samples from DeepSeek-R1 and, accordingly, replaced _Llama-Primus-Reasoning_ with a new version distilled jointly from DeepSeek-R1 and o1-preview. This version achieves the best CISSP performance, a ↑15.8% improvement.
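Concretely, distillation here means supervised fine-tuning of the student on teacher-generated reasoning traces. The sketch below illustrates one plausible way to convert such traces into chat-format SFT examples; the field names (`question`, `reasoning`, `answer`) are assumed for illustration and are not the documented schema of _Primus-Reasoning_.

```python
# Illustrative sketch of building chat-format SFT examples from teacher traces.
# The field names below are assumptions, NOT the documented Primus-Reasoning schema.
from datasets import load_dataset

ds = load_dataset("trendmicro-ailab/Primus-Reasoning", split="train")

def to_chat_example(sample):
    # Target = the teacher's reasoning trace followed by its final answer,
    # so the student learns to emit a chain of thought before answering.
    return {
        "messages": [
            {"role": "user", "content": sample["question"]},
            {
                "role": "assistant",
                "content": f"{sample['reasoning']}\n\nFinal answer: {sample['answer']}",
            },
        ]
    }

sft_ds = ds.map(to_chat_example, remove_columns=ds.column_names)
```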
## Introduction
Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, with promising applications in specialized domains such as finance, law, and biomedicine. In cybersecurity, however, we noticed a lack of open-source datasets specifically designed for LLM pre-training, even though much research has shown that LLMs acquire their knowledge during pre-training. To fill this gap, we present a collection of datasets covering multiple stages of cybersecurity LLM training, including pre-training (_Primus-Seed_ and _Primus-FineWeb_), instruction fine-tuning (_Primus-Instruct_), and reasoning data for distillation (_Primus-Reasoning_). Based on these datasets and Llama-3.1-8B-Instruct, we developed _Llama-Primus-Base_, _Llama-Primus-Merged_, and _Llama-Primus-Reasoning_. This model card is for **Llama-Primus-Reasoning**.
**Note:** No Trend Micro customer information is included.
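The model follows the standard `transformers` chat interface inherited from Llama-3.1-8B-Instruct, so it can be loaded and queried as sketched below. This is a minimal sketch, not an official quickstart: the repo id and generation parameters are assumptions, so adjust them to your setup.

```python
# Minimal inference sketch. The repo id and sampling settings are assumptions;
# adjust them to the actual Hugging Face repository and your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "trendmicro-ailab/Llama-Primus-Reasoning"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user",
     "content": "A company wants to enforce least privilege. "
                "Which access control model fits best? Explain step by step."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models tend to produce long chains of thought, so leave headroom.
output = model.generate(input_ids, max_new_tokens=1024, do_sample=True, temperature=0.6)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```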
## Cybersecurity Benchmark Results
| Model | CISSP | Avg. Tokens |
|---|---|---|
| **w/o CoT, 5-shot** | | |
| Llama-3.1-8B-Instruct | 0.7073 | 1 |
| Llama-Primus-Merged | 0.7191 ↑1.67% | 1 |
| **w/ CoT, 0-shot** | | |
| Llama-3.1-8B-Instruct | 0.7288 ↑3.03% | 279.69 |
| └─ + Distilled from o1-preview | 0.7583 ↑7.21% | 646.94 |
| └─ + Distilled from DeepSeek-R1 | 0.7859 ↑11.1% | 1667.56 |
| └─ + Distilled from (o1 + R1) | 0.7780 ↑10.0% | 1615.54 |
| Llama-Primus-Merged | 0.7603 ↑7.49% | 241.92 |
| └─ + Distilled from o1-preview | 0.7780 ↑10.0% | 726.96 |
| └─ + Distilled from DeepSeek-R1 | 0.8075 ↑14.2% | 1483.94 |
| └─ + Distilled from (o1 + R1) | 0.8193 **↑15.8%** | 1467.40 |
| **Raw Models for Comparison** | | |
| o1-preview | 0.8035 | 1054.91 |
| DeepSeek-R1 | 0.8212 | 1229.32 |
| DeepSeek-R1-Distill-Llama-8B | 0.7399 ↑4.61% | 1542.10 |
_Effect of Primus-Reasoning fine-tuning, evaluated on CISSP. ↑ indicates the percentage improvement over Llama-3.1-8B-Instruct without CoT in the 5-shot setting. The best improvement is highlighted in **bold**._
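To reproduce a comparable number, one would score CISSP-style multiple-choice questions by generating a chain of thought, extracting the final answer letter, and computing accuracy. The helper below is an illustrative sketch of that loop; the prompt format, regex-based answer extraction, and function names are assumptions rather than the paper's exact evaluation harness.

```python
import re

def extract_choice(generation: str) -> str | None:
    """Heuristically pull the final A-D answer letter from a CoT generation."""
    letters = re.findall(r"\b([A-D])\b", generation)
    return letters[-1] if letters else None

def cissp_accuracy(examples, generate_fn):
    """examples: dicts with 'question', 'choices' (4 strings), 'answer' (a letter).
    generate_fn: maps a prompt string to the model's generated text."""
    correct = 0
    for ex in examples:
        options = "\n".join(f"{l}. {c}" for l, c in zip("ABCD", ex["choices"]))
        prompt = (
            f"{ex['question']}\n{options}\n"
            "Think step by step, then state the final answer letter."
        )
        correct += int(extract_choice(generate_fn(prompt)) == ex["answer"])
    return correct / len(examples)
```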
## About Primus
Primus is Trend Micro's pioneering family of lightweight, state-of-the-art open cybersecurity language models and datasets. Developed through our cutting-edge research initiatives and advanced technology, these resources share the innovative foundation that powers our enterprise-class Trend Cybertron solution. As an industry leader in cybersecurity, Trend Micro is proud to contribute these powerful, efficiency-optimized models and datasets to the community, while maintaining the excellence and reliability that define our global security standards.
## License
This model is released under the MIT license; however, because it is derived from Llama 3.1, you must also comply with the Llama 3.1 Community License Agreement.