Triangle104 committed · verified
Commit 07d6746 · 1 Parent(s): 613cb1c

Update README.md

Files changed (1)
  1. README.md +12 -0
README.md CHANGED
@@ -12,6 +12,18 @@ tags:
  This model was converted to GGUF format from [`Ba2han/Qwen-3-14B-Gemini-v0.1`](https://huggingface.co/Ba2han/Qwen-3-14B-Gemini-v0.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/Ba2han/Qwen-3-14B-Gemini-v0.1) for more details on the model.

+ ---
+ Use the system message "You are an assistant with reasoning capabilities." to trigger Gemini-style thinking.
+
+ Training Dataset
+ -
+ The fine-tuning dataset consists of ~300 diverse examples, 160 of which are directly from Gemini 2.5 Pro.
+
+ Model
+ -
+ Trained on the unsloth version of Qwen3-14B (instruct). Keep in mind that it's slightly overfit since the training dataset was quite small.
+
+ ---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
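For reference, a minimal sketch of the workflow the added lines describe: installing llama.cpp via Homebrew and starting a chat with the system message that triggers Gemini-style thinking. The repo and GGUF file names below are illustrative placeholders (this commit does not name a specific quant file), and the `-cnv`/`-p` usage assumes a llama.cpp build where conversation mode treats `-p` as the system prompt.

```bash
# Install llama.cpp (macOS/Linux via Homebrew)
brew install llama.cpp

# Chat with the model, using the system message from the model card
# to trigger Gemini-style thinking.
# NOTE: the repo and file names below are placeholders -- replace them
# with the actual GGUF quant published in this repository.
llama-cli \
  --hf-repo Triangle104/Qwen-3-14B-Gemini-v0.1-GGUF \
  --hf-file qwen-3-14b-gemini-v0.1-q4_k_m.gguf \
  -cnv \
  -p "You are an assistant with reasoning capabilities."
```

The same `--hf-repo`/`--hf-file` flags also work with `llama-server`, which exposes an OpenAI-compatible HTTP endpoint; in that case the system message is sent with each chat request rather than via `-p`.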