# OlympicCoder 7B Q6
An optimized, Q6_K-quantized GGUF build of OlympicCoder 7B for algorithmic reasoning, competitive programming, and symbolic inference.
## Model Details
- Model Name: OlympicCoder 7B Q6
- Quantization: Q6_K
- Format: GGUF
- Size: 6.25 GB
- Architecture: Qwen2.5-based 7B decoder-only transformer
- Base Model: open-r1/OlympicCoder-7B (quantized weights from bartowski/open-r1_OlympicCoder-7B-GGUF)
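
To check these details against the downloaded file, here is a minimal sketch using the `gguf` Python package maintained alongside llama.cpp (assumptions: the package is installed via `pip install gguf`, and the file name matches the run command below):

```python
# Minimal sketch: print GGUF metadata keys and the tensor count so the
# architecture and quantization of the downloaded file can be confirmed.
# Assumes `pip install gguf` and the Q6_K file in the working directory.
from gguf import GGUFReader

reader = GGUFReader("open-r1_OlympicCoder-7B-Q6_K.gguf")
for key in reader.fields:      # metadata keys, e.g. general.architecture
    print(key)
print(f"{len(reader.tensors)} tensors")
```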
## Use Cases
- Competitive programming and Codeforces-style tasks
- Symbolic reasoning and algorithmic inference
- Code generation and technical prompts
## How to Run (with llama.cpp)
```bash
./main -m open-r1_OlympicCoder-7B-Q6_K.gguf -p "Write a function that checks if a number is prime."
```
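
For scripted use, a minimal Python sketch with the llama-cpp-python bindings (assumptions: `pip install llama-cpp-python`, and the GGUF file sits in the working directory):

```python
# Minimal sketch: load the Q6_K GGUF with llama-cpp-python and run the
# same prompt as the CLI example above.
from llama_cpp import Llama

llm = Llama(
    model_path="open-r1_OlympicCoder-7B-Q6_K.gguf",  # same file as the CLI example
    n_ctx=4096,        # context window; adjust to available RAM
    n_gpu_layers=-1,   # offload all layers if built with GPU support, else set 0
)

out = llm(
    "Write a function that checks if a number is prime.",
    max_tokens=512,
    temperature=0.2,   # low temperature tends to suit code generation
)
print(out["choices"][0]["text"])
```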
Other tools:
- LM Studio: import the `.gguf` file and chat directly
- KoboldCpp / text-generation-webui: load as a GGUF model
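
The model can also be served over HTTP with llama.cpp's built-in server (the binary is `./server` or `./llama-server` depending on the build) and queried through its OpenAI-compatible endpoint. A minimal sketch, assuming the server was started with `-m open-r1_OlympicCoder-7B-Q6_K.gguf --port 8080`:

```python
# Minimal sketch: query a locally running llama.cpp server through its
# OpenAI-compatible /v1/chat/completions endpoint (assumes port 8080).
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "open-r1_OlympicCoder-7B-Q6_K",  # informational for a single-model server
        "messages": [
            {"role": "user", "content": "Write a function that checks if a number is prime."}
        ],
        "max_tokens": 512,
        "temperature": 0.2,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```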
## License
Apache 2.0, free for commercial and research use.
## Model Tree

Lineage for sychonix/OlympicCoder-7B-Sychonix:
- Base model: Qwen/Qwen2.5-7B
- Finetuned: Qwen/Qwen2.5-Coder-7B
- Finetuned: Qwen/Qwen2.5-Coder-7B-Instruct
- Finetuned: open-r1/OlympicCoder-7B
- Quantized: bartowski/open-r1_OlympicCoder-7B-GGUF