Triangle104 committed
Commit c1b019d (verified) · 1 parent: 50fec78

Update README.md

Files changed (1):
  1. README.md (+9 -0)
README.md CHANGED
@@ -71,6 +71,15 @@ license: apache-2.0
  This model was converted to GGUF format from [`ValiantLabs/Qwen3-1.7B-ShiningValiant3`](https://huggingface.co/ValiantLabs/Qwen3-1.7B-ShiningValiant3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/ValiantLabs/Qwen3-1.7B-ShiningValiant3) for more details on the model.

+ ---
+ Shining Valiant 3 is a science, AI design, and general reasoning specialist built on Qwen 3.
+
+ - Finetuned on our newest science reasoning data, generated with DeepSeek R1 0528!
+ - AI to build AI: our high-difficulty AI reasoning data makes Shining Valiant 3 your friend for building with current AI tech and for discovering new innovations and improvements!
+ - Improved general and creative reasoning to supplement problem-solving and general chat performance.
+ - Small model sizes allow running on local desktop and mobile devices, plus super-fast server inference!
+
+ ---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
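The README section this hunk sits in goes on to describe running the converted GGUF with llama.cpp. As a minimal sketch of that workflow, assuming the Homebrew llama.cpp formula and its --hf-repo/--hf-file flags; the repo name and quant filename below are illustrative placeholders, not taken from this commit:

# Install llama.cpp (Homebrew formula works on macOS and Linux)
brew install llama.cpp

# CLI: fetch a GGUF quant straight from the Hub and run a prompt
# (repo and file names here are hypothetical examples)
llama-cli --hf-repo Triangle104/Qwen3-1.7B-ShiningValiant3-GGUF \
  --hf-file qwen3-1.7b-shiningvaliant3-q4_k_m.gguf \
  -p "Summarize the difference between nuclear fission and fusion."

# Server: serve the same model over an HTTP endpoint with a 2048-token context
llama-server --hf-repo Triangle104/Qwen3-1.7B-ShiningValiant3-GGUF \
  --hf-file qwen3-1.7b-shiningvaliant3-q4_k_m.gguf \
  -c 2048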