Create README.md
README.md
ADDED
---
datasets:
- allenai/tulu-3-sft-personas-instruction-following
- simplescaling/s1K-1.1
- simplescaling/s1K-claude-3-7-sonnet
- FreedomIntelligence/Medical-R1-Distill-Data
- OpenCoder-LLM/opc-sft-stage1
- cognitivecomputations/SystemChat-2.0
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- allura-org/scienceqa_sharegpt
- KodCode/KodCode-V1-SFT-R1
license: apache-2.0
language:
- en
- zh
base_model:
- mistralai/Mistral-Small-24B-Base-2501
library_name: transformers
tags:
- instruct
- conversational
---

# Sertraline 24b

## About

An *actually decent* instruct SFT tune of Mistral Small 3.

## System Prompts

I tested with the following Claude-like system prompts; however, they were not trained in, so any similar prompt should work:

### Non-Reasoning
```
You are Claude, a helpful and harmless AI assistant created by Anthropic.
```

### Reasoning
```
You are Claude, a helpful and harmless AI assistant created by Anthropic. Please contain all your thoughts in <think> </think> tags, and your final response right after the closing </think> tag.
```

For reasoning, it's recommended to force the thinking by prefilling `<think>\n` at the start of the newest assistant response, and to leave previous thought blocks out of new requests.
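
As a rough illustration of that prefill flow, here is a minimal `transformers` sketch (untested; the repo id, user message, and generation settings are placeholders, and the same idea applies to any OpenAI-compatible endpoint that supports assistant prefills):

```python
import re

from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute the actual model repository.
MODEL_ID = "your-org/sertraline-24b"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

REASONING_SYSTEM_PROMPT = (
    "You are Claude, a helpful and harmless AI assistant created by Anthropic. "
    "Please contain all your thoughts in <think> </think> tags, and your final "
    "response right after the closing </think> tag."
)

def strip_thoughts(text: str) -> str:
    """Remove <think>...</think> blocks from earlier turns before resending them."""
    return re.sub(r"<think>.*?</think>\s*", "", text, flags=re.DOTALL)

messages = [
    {"role": "system", "content": REASONING_SYSTEM_PROMPT},
    {"role": "user", "content": "How many prime numbers are there below 30?"},
]

# Render the chat template, then prefill "<think>\n" so the model starts
# its response inside a thought block.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
prompt += "<think>\n"

inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
output = model.generate(**inputs, max_new_tokens=1024)
reply = tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(reply)

# The assistant's full turn includes the prefilled "<think>\n"; strip the thought
# block before appending it to the history for the next request.
full_reply = "<think>\n" + reply
messages.append({"role": "assistant", "content": strip_thoughts(full_reply)})
```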

## Instruct Template

v7-Tekken, same as the original instruct model.

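For reference, the rendered prompt looks roughly like the sketch below (from memory of the Mistral v7-Tekken layout; prefer the tokenizer's `apply_chat_template` over hand-building strings, and treat the exact whitespace and token placement as approximate):

```
<s>[SYSTEM_PROMPT]{system prompt}[/SYSTEM_PROMPT][INST]{user message}[/INST]{assistant response}</s>[INST]{next user message}[/INST]
```
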
## Dataset

This model was trained on [allura-org/inkstructmix-v0.2.1](https://hf.co/datasets/allura-org/inkstructmix-v0.2.1).