Upload README.md with huggingface_hub
README.md

tags:
- cybersecurity
- llama-cpp
- gguf-my-repo
---

14/05/2025: Updated English dataset
# 🤖 StrikeGPT-R1-Zero: Cybersecurity Penetration Testing Reasoning Model
## 📖 Model Introduction

**StrikeGPT-R1-Zero** is an expert model built on **Qwen3** and distilled through black-box methods, with **DeepSeek-R1** as its teacher model (a sketch of this setup follows the coverage list below). Coverage includes:

🔒 AI Security | 🛡️ API Security | 📱 APP Security | 🕵️ APT | 🚩 CTF

🏭 ICS Security | 💻 Full Penetration Testing | ☁️ Cloud Security | 🔍 Code Auditing

🦠 Antivirus Evasion | 🔒 Internal Network Security | 💾 Digital Forensics | ₿ Blockchain Security | 🕳️ Traceback & Countermeasures | 🌐 IoT Security

🚨 Emergency Response | 🚗 Vehicle Security | 👥 Social Engineering | 💼 Penetration Testing Interviews
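
For context, "black-box" distillation means the teacher model is only ever queried through its API, never through its weights. A minimal sketch of what the data-collection side of such a pipeline can look like, assuming a DeepSeek-compatible endpoint and a hypothetical seed-prompt list (illustrative only, not the authors' actual pipeline):

```python
import json
from openai import OpenAI

# Hypothetical sketch: the prompt list and output path are illustrative
# assumptions, not the authors' actual pipeline.
client = OpenAI(base_url="https://api.deepseek.com", api_key="sk-...")

seed_prompts = [
    "Walk through exploiting a boolean-based blind SQL injection, step by step.",
]

with open("distill_data.jsonl", "w") as f:
    for prompt in seed_prompts:
        # Black-box distillation: only the teacher's API responses are collected.
        resp = client.chat.completions.create(
            model="deepseek-reasoner",
            messages=[{"role": "user", "content": prompt}],
        )
        record = {"instruction": prompt, "output": resp.choices[0].message.content}
        f.write(json.dumps(record) + "\n")
# The collected pairs are then used to fine-tune the student model (Qwen3).
```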

- 💪 The base model is Qwen3, making it better suited to Chinese users than Distill-Llama
- ⚠️ **No ethical restrictions**: shows unique performance in specific academic research areas (use it in compliance with local laws)
- ✨ Outperforms local RAG solutions in scenarios such as offline cybersecurity competitions, with stronger logical reasoning and complex-task handling

## Usage

Run the quantized GGUF directly with Ollama:

`ollama run hf.co/Bouquets/StrikeGPT-R1-Zero-8B-Q4_K_M-GGUF:Q4_K_M`
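
To call the Ollama-served model programmatically, a minimal sketch against Ollama's local REST API (assuming the default `localhost:11434` endpoint and that the model tag above has already been pulled):

```python
import requests

# Assumes a local Ollama server on its default port (11434) and that the
# model tag below has already been pulled with `ollama run`.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "hf.co/Bouquets/StrikeGPT-R1-Zero-8B-Q4_K_M-GGUF:Q4_K_M",
        "prompt": "Explain the difference between stored and reflected XSS.",
        "stream": False,  # set to True to stream tokens as JSON lines
    },
    timeout=600,
)
print(resp.json()["response"])
```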

**Or call the original (unquantized) model directly:**

```python
from unsloth import FastLanguageModel
from transformers import TextStreamer
import torch

max_seq_length = 2048  # Choose any! We auto support RoPE Scaling internally!
dtype = None  # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True  # Use 4bit quantization to reduce memory usage. Can be False.

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Bouquets/StrikeGPT-R1-Zero-8B",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    # token = "hf_...",
)

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

FastLanguageModel.for_inference(model)  # Enable native 2x faster inference
inputs = tokenizer(
    [
        alpaca_prompt.format(
            "",  # instruction
            "Hello, are you developed by OpenAI?",  # input
            "",  # output - leave this blank for generation!
        )
    ],
    return_tensors = "pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(
    input_ids = inputs.input_ids,
    attention_mask = inputs.attention_mask,
    streamer = text_streamer,
    max_new_tokens = 4096,
    pad_token_id = tokenizer.eos_token_id,
)
```

*Self-awareness issues may occur after quantization; please disregard them.*

📊 **Datasets** (Partial Non-Reasoning Data) 📊

🤗 **HuggingFace**:

🔹 Cybersecurity LLM-CVE Dataset:
🔗 [https://huggingface.co/datasets/Bouquets/Cybersecurity-LLM-CVE](https://huggingface.co/datasets/Bouquets/Cybersecurity-LLM-CVE)

🔹 Red Team LLM English Dataset:
🔗 [https://huggingface.co/datasets/Bouquets/Cybersecurity-Red_team-LLM-en](https://huggingface.co/datasets/Bouquets/Cybersecurity-Red_team-LLM-en)
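
To take a quick look at either dataset before downloading it in full, a minimal sketch with the `datasets` library (the column layout is not documented here, so the sketch just prints a few rows):

```python
from datasets import load_dataset

# Stream a few rows without downloading the full dataset; swap in
# "Bouquets/Cybersecurity-Red_team-LLM-en" to browse the other one.
ds = load_dataset("Bouquets/Cybersecurity-LLM-CVE", split="train", streaming=True)
for i, row in enumerate(ds):
    print(row)  # column names vary by dataset, so print a row to discover them
    if i == 2:
        break
```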

Questions span:
- Technical Depth (e.g., payload construction)
- Attack Methodology (e.g., step-by-step exploitation)
- Mitigation Strategies (e.g., parameterized queries)

**GPT-4 Evaluation Protocol**
- Responses from both models are anonymized and evaluated by GPT-4 against the following criteria:
  - Technical Accuracy (0-5): alignment with known penetration-testing principles (e.g., OWASP guidelines).
  - Logical Coherence (0-5): consistency of reasoning (e.g., cause-effect relationships in attack chains).
  - Practical Feasibility (0-5): real-world applicability (e.g., compatibility with tools like Burp Suite).
- GPT-4 provides detailed justifications for its scores.

Scored against these criteria, the evaluation results are presented in Figure 13.
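
As an illustration of the protocol above, a minimal sketch of an anonymized pairwise judging script (the rubric wording, JSON schema, and `judge` helper are assumptions for illustration, not the authors' actual evaluation code):

```python
import json
import random
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = (
    "Score each answer from 0-5 for technical_accuracy, logical_coherence, "
    "and practical_feasibility, then justify the scores. Respond in JSON: "
    '{"A": {...}, "B": {...}, "justification": "..."}'
)

def judge(question: str, answer_model_1: str, answer_model_2: str) -> dict:
    # Anonymize: shuffle so the judge cannot tell which model wrote which answer.
    pair = [("model_1", answer_model_1), ("model_2", answer_model_2)]
    random.shuffle(pair)
    prompt = (
        f"Question: {question}\n\n"
        f"Answer A:\n{pair[0][1]}\n\n"
        f"Answer B:\n{pair[1][1]}\n\n"
        f"{RUBRIC}"
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    scores = json.loads(resp.choices[0].message.content)
    scores["label_map"] = {"A": pair[0][0], "B": pair[1][0]}  # de-anonymize later
    return scores
```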

Minor gradient explosions were observed during training, but it remained stable overall.

# Bouquets/StrikeGPT-R1-Zero-8B-Q4_K_M-GGUF

This model was converted to GGUF format from [`Bouquets/StrikeGPT-R1-Zero-8B`](https://huggingface.co/Bouquets/StrikeGPT-R1-Zero-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/Bouquets/StrikeGPT-R1-Zero-8B) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo Bouquets/StrikeGPT-R1-Zero-8B-Q4_K_M-GGUF --hf-file strikegpt-r1-zero-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo Bouquets/StrikeGPT-R1-Zero-8B-Q4_K_M-GGUF --hf-file strikegpt-r1-zero-8b-q4_k_m.gguf -c 2048
```
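
Once the server is running, you can also query its OpenAI-compatible HTTP endpoint; a minimal sketch, assuming the default host and port:

```python
import requests

# llama-server exposes an OpenAI-compatible endpoint on port 8080 by default;
# adjust host/port if you started it with different flags.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Summarize the OWASP Top 10 in one line each."}
        ],
        "max_tokens": 256,
    },
    timeout=300,
)
print(resp.json()["choices"][0]["message"]["content"])
```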

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo Bouquets/StrikeGPT-R1-Zero-8B-Q4_K_M-GGUF --hf-file strikegpt-r1-zero-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo Bouquets/StrikeGPT-R1-Zero-8B-Q4_K_M-GGUF --hf-file strikegpt-r1-zero-8b-q4_k_m.gguf -c 2048
```
|