Novaciano committed · Commit 865aad9 · verified · 1 Parent(s): 953f442

Update README.md

Files changed (1)
  1. README.md +46 -38
README.md CHANGED
@@ -1,5 +1,8 @@
 ---
- base_model: Novaciano/Kraken-3.2-1B
 datasets:
 - Magpie-Align/Magpie-Reasoning-V2-250K-CoT-QwQ
 - ngxson/MiniThinky-dataset
@@ -27,12 +30,7 @@ datasets:
 - cognitivecomputations/samantha-data
 - m-a-p/CodeFeedback-Filtered-Instruction
 - m-a-p/Code-Feedback
- language:
- - es
- - en
 library_name: transformers
- license: apache-2.0
- pipeline_tag: text-generation
 tags:
 - mergekit
 - merge
@@ -47,50 +45,60 @@ tags:
 - nsfw
 - uncensored
 - not-for-all-audiences
- - llama-cpp
- - gguf-my-repo
 ---

- # Novaciano/Kraken-3.2-1B-Q4_0-GGUF
- This model was converted to GGUF format from [`Novaciano/Kraken-3.2-1B`](https://huggingface.co/Novaciano/Kraken-3.2-1B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
- Refer to the [original model card](https://huggingface.co/Novaciano/Kraken-3.2-1B) for more details on the model.

- ## Use with llama.cpp
- Install llama.cpp through brew (works on Mac and Linux)

- ```bash
- brew install llama.cpp
- ```
- Invoke the llama.cpp server or the CLI.

- ### CLI:
- ```bash
- llama-cli --hf-repo Novaciano/Kraken-3.2-1B-Q4_0-GGUF --hf-file kraken-3.2-1b-q4_0.gguf -p "The meaning to life and the universe is"
- ```

- ### Server:
- ```bash
- llama-server --hf-repo Novaciano/Kraken-3.2-1B-Q4_0-GGUF --hf-file kraken-3.2-1b-q4_0.gguf -c 2048
- ```

- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

- Step 1: Clone llama.cpp from GitHub.
- ```
- git clone https://github.com/ggerganov/llama.cpp
- ```

- Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
- ```
- cd llama.cpp && LLAMA_CURL=1 make
- ```

- Step 3: Run inference through the main binary.
 ```
- ./llama-cli --hf-repo Novaciano/Kraken-3.2-1B-Q4_0-GGUF --hf-file kraken-3.2-1b-q4_0.gguf -p "The meaning to life and the universe is"
 ```
- or
 ```
- ./llama-server --hf-repo Novaciano/Kraken-3.2-1B-Q4_0-GGUF --hf-file kraken-3.2-1b-q4_0.gguf -c 2048
 ```
 
 
 ---
+ base_model:
+ - Novaciano/La_Mejor_Mezcla-3.2-1B
+ - Novaciano/Jormungandr-3.2-1B
+ - cognitivecomputations/Dolphin3.0-Llama3.2-1B
 datasets:
 - Magpie-Align/Magpie-Reasoning-V2-250K-CoT-QwQ
 - ngxson/MiniThinky-dataset
 …
 - cognitivecomputations/samantha-data
 - m-a-p/CodeFeedback-Filtered-Instruction
 - m-a-p/Code-Feedback
 library_name: transformers
 tags:
 - mergekit
 - merge
 …
 - nsfw
 - uncensored
 - not-for-all-audiences
+ license: apache-2.0
+ language:
+ - es
+ - en
+ pipeline_tag: text-generation
 ---
+ # Kraken 3.2 1B GGUF 🐬
+ This model was converted to GGUF from Novaciano/Kraken-3.2-1B.
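
+ The files here were produced via ggml.ai's GGUF-my-repo space, which wraps llama.cpp's own conversion tools. A roughly equivalent manual conversion is sketched below; the paths, output names, and quant type are illustrative, not the exact commands used.
+
+ ```bash
+ # Sketch: convert the HF checkpoint to GGUF and quantize it with llama.cpp.
+ # Paths and the Q4_0 target are assumptions for illustration.
+ git clone https://github.com/ggerganov/llama.cpp
+ cd llama.cpp && pip install -r requirements.txt && make
+ python convert_hf_to_gguf.py /path/to/Kraken-3.2-1B --outfile kraken-3.2-1b-f16.gguf --outtype f16
+ ./llama-quantize kraken-3.2-1b-f16.gguf kraken-3.2-1b-q4_0.gguf Q4_0
+ ```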
 
+ The base model is Dolphin3.0-Llama3.2-1B.

+ Dolphin3.0-Llama3.2-1B has been curated and trained by [Eric Hartford](https://huggingface.co/ehartford), [Ben Gitter](https://huggingface.co/bigstorm), [BlouseJury](https://huggingface.co/BlouseJury) and [Cognitive Computations](https://huggingface.co/cognitivecomputations).

+ <center>
+ <img src="https://image.cdn2.seaart.me/2025-03-30/cvkd0rte878c739aikng/42c0e2df1e43585004b40cf43b794e68_high.webp">
+ </center>

+ ## GGUF Model Details
+ This GGUF conversion of Kraken-3.2-1B is intended as an unofficial upgrade to Dolphin3.0-Llama3.2-1B.
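
+ To try the quantized model locally, the llama.cpp invocation below (using the kraken-3.2-1b-q4_0.gguf file name published in this repo) fetches the GGUF straight from Hugging Face; treat it as a minimal sketch.
+
+ ```bash
+ # One-off completion with the llama.cpp CLI
+ llama-cli --hf-repo Novaciano/Kraken-3.2-1B-Q4_0-GGUF --hf-file kraken-3.2-1b-q4_0.gguf \
+   -p "The meaning to life and the universe is"
+ ```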

+ Kraken 3.2 represents the cutting edge of instruct-tuned models in my dark collection of Llama 3.2 models. It is designed to be the ultimate general-purpose local model, facilitating coding, mathematics, agentic tasks, function calling, and general use cases.

+ Kraken seeks to be a versatile model, similar to ChatGPT, Claude, and Gemini. However, those models present challenges for businesses looking to integrate AI into their products.

+ They maintain control over the system prompt, making changes that can disrupt software functionality. They manage model versions, sometimes altering them without notice or discontinuing older models that businesses depend on. They impose a uniform alignment, not tailored to specific applications. They can access your queries and potentially use the data in unintended ways.

+ In contrast, Kraken is customizable and gives control to the system owner. You define the system prompt, decide on the alignment, and control your data. Kraken does not impose its ethics or guidelines; you determine the guidelines. Kraken belongs to YOU, it is your tool, an extension of your will. You are responsible for your actions with Kraken, just as you are with any other tool.

+ ---
77
+ ## Chat Template
78
+
79
+ We use ChatML for the chat template.
80
 
 
81
  ```
+ <|im_start|>system
+ You are Kraken, a helpful AI assistant.<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
 ```
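
+ When running the GGUF with llama.cpp, the same template can be requested on the command line; a sketch, assuming a llama.cpp build recent enough to ship conversation mode and the built-in "chatml" template:
+
+ ```bash
+ # Interactive chat that formats turns with the built-in ChatML template
+ llama-cli --hf-repo Novaciano/Kraken-3.2-1B-Q4_0-GGUF --hf-file kraken-3.2-1b-q4_0.gguf \
+   -cnv --chat-template chatml
+ ```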
+
+ ## System Prompt
+
+ In Kraken, the system prompt is what you use to set the tone and alignment of the responses. You can set a character, a mood, rules for its behavior, and it will try its best to follow them.
+
+ Make sure to set the system prompt in order to set the tone and guidelines for the responses; otherwise, the model will act in a default way that might not be what you want.
+
+ Example use of the system prompt:
+
 ```
+ <|im_start|>system
+ You are Kraken, a golang coding assistant. You only code in golang. If the user requests any other programming language, return the solution in golang instead.<|im_end|>
+ <|im_start|>user
+ Please implement A* using python<|im_end|>
+ <|im_start|>assistant
 ```
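
+ The same system prompt can also be supplied per request through llama-server, which exposes an OpenAI-compatible API; a sketch, assuming the server's default host and port:
+
+ ```bash
+ # Start the server, then pass the system prompt with each chat request
+ llama-server --hf-repo Novaciano/Kraken-3.2-1B-Q4_0-GGUF --hf-file kraken-3.2-1b-q4_0.gguf -c 2048 &
+
+ curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
+   "messages": [
+     {"role": "system", "content": "You are Kraken, a golang coding assistant. You only code in golang."},
+     {"role": "user", "content": "Please implement A* using python"}
+   ]
+ }'
+ ```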
+ ---