🧐 OlympicCoder 7B Q6

A quantized (Q6_K) GGUF build of OlympicCoder 7B for algorithmic reasoning, competitive programming, and symbolic inference.


📊 Model Details

  • Model Name: OlympicCoder 7B Q6
  • Quantization: Q6_K
  • Format: GGUF
  • Size: 6.25 GB
  • Architecture: Qwen2-style 7B (7.62B parameters)
  • Base Model: open-r1_OlympicCoder-7B-GGUF

πŸ› οΈ Use Cases

  • βš–οΈ Competitive programming and Codeforces-style tasks
  • πŸ“ˆ Symbolic reasoning and algorithmic inference
  • πŸ’» Code generation and technical prompts

🚀 How to Run (with llama.cpp)

./main -m open-r1_OlympicCoder-7B-Q6_K.gguf -p "Write a function that checks if a number is prime."

(Recent llama.cpp builds ship this binary as llama-cli; the -m and -p flags are unchanged.)
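The model can also be loaded from Python through the llama-cpp-python bindings. The snippet below is a minimal sketch, assuming the GGUF file sits in the working directory; the context size and sampling settings are illustrative, not tuned recommendations.

from llama_cpp import Llama

# Load the Q6_K GGUF; n_gpu_layers=-1 offloads all layers when a GPU build is available (use 0 for CPU-only).
llm = Llama(
    model_path="open-r1_OlympicCoder-7B-Q6_K.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,
)

# Chat-style request mirroring the llama.cpp prompt above.
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a function that checks if a number is prime."}],
    max_tokens=512,
    temperature=0.2,
)
print(result["choices"][0]["message"]["content"])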

Other tools:

  • LM Studio: import the .gguf file and chat directly
  • KoboldCpp / text-generation-webui: load it as a GGUF model
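Another option is to serve the model over HTTP with llama.cpp's built-in server and query its OpenAI-compatible endpoint. This is a sketch only: it assumes a local llama-server instance on port 8080, and the prompt and sampling values are illustrative.

# Assumes a local server started with something like:
#   ./llama-server -m open-r1_OlympicCoder-7B-Q6_K.gguf --port 8080
import requests

response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user",
             "content": "Given an array of integers, return the length of its longest strictly increasing subsequence."}
        ],
        "max_tokens": 512,
        "temperature": 0.2,
    },
    timeout=300,
)
print(response.json()["choices"][0]["message"]["content"])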

📄 License

Apache 2.0: free for commercial and research use.

