Text Generation
Transformers
GGUF
English
shining-valiant
shining-valiant-3
valiant
valiant-labs
qwen
qwen-3
qwen-3-1.7b
1.7b
reasoning
code
code-reasoning
science
science-reasoning
physics
biology
chemistry
earth-science
astronomy
machine-learning
artificial-intelligence
compsci
computer-science
information-theory
ML-Ops
math
cuda
deep-learning
agentic
LLM
neuromorphic
self-improvement
complex-systems
cognition
linguistics
philosophy
logic
epistemology
simulation
game-theory
knowledge-management
creativity
problem-solving
architect
engineer
developer
creative
analytical
expert
rationality
conversational
chat
instruct
llama-cpp
gguf-my-repo
This model was converted to GGUF format from [`ValiantLabs/Qwen3-1.7B-ShiningValiant3`](https://huggingface.co/ValiantLabs/Qwen3-1.7B-ShiningValiant3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/ValiantLabs/Qwen3-1.7B-ShiningValiant3) for more details on the model.

---

Shining Valiant 3 is a science, AI design, and general reasoning specialist built on Qwen 3.

- Finetuned on our newest science reasoning data, generated with DeepSeek R1 0528!
- AI to build AI: our high-difficulty AI reasoning data makes Shining Valiant 3 your friend for building with current AI tech and discovering new innovations and improvements!
- Improved general and creative reasoning to supplement problem-solving and general chat performance.
- Small model sizes allow running on local desktop and mobile, plus super-fast server inference!

---

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):
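The section above stops after the brew note, so here is a minimal sketch of the usual next steps for a GGUF-my-repo conversion. The repo and file names below are placeholders, not the actual quant filenames published in this repository; substitute the GGUF repo id and file shown on the model page.

```shell
# Install llama.cpp via Homebrew (macOS and Linux)
brew install llama.cpp

# Run the CLI, pulling a GGUF file directly from the Hugging Face Hub.
# <your-gguf-repo> and <quant-file>.gguf are placeholders for this
# repository's actual id and quantized filename.
llama-cli --hf-repo <your-gguf-repo> --hf-file <quant-file>.gguf -p "Hello"

# Or serve an OpenAI-compatible endpoint instead of a one-shot prompt:
llama-server --hf-repo <your-gguf-repo> --hf-file <quant-file>.gguf -c 2048
```

`llama-cli` is handy for quick local checks, while `llama-server` exposes the model over HTTP for longer sessions; both accept the same `--hf-repo`/`--hf-file` pair for fetching GGUF weights from the Hub.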