
luckeciano/Qwen-2.5-7B-RL-LACPO-BaselineNoKLNoEntropyNoSmoothSoftLabelNormAdv - GGUF
This repo contains GGUF format model files for luckeciano/Qwen-2.5-7B-RL-LACPO-BaselineNoKLNoEntropyNoSmoothSoftLabelNormAdv.
The files were quantized using machines provided by TensorBlock, and they are compatible with llama.cpp as of commit b5753.
Our projects
| Forge |
| --- |
| An OpenAI-compatible multi-provider routing layer. |
| Try it now! |

| Awesome MCP Servers | TensorBlock Studio |
| --- | --- |
| A comprehensive collection of Model Context Protocol (MCP) servers. | A lightweight, open, and extensible multi-LLM interaction studio. |
| See what we built | See what we built |
Prompt template
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
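The GGUF files follow the ChatML-style template shown above. As a minimal sketch of local inference (not part of the original card; the local path, context size, and prompts below are assumptions), llama-cpp-python can format messages with this template through its chat-completion API:

```python
# Hypothetical example: run the downloaded Q2_K file with llama-cpp-python.
# The model path and generation settings are placeholders, not repo defaults.
from llama_cpp import Llama

llm = Llama(
    model_path="MY_LOCAL_DIR/Qwen-2.5-7B-RL-LACPO-BaselineNoKLNoEntropyNoSmoothSoftLabelNormAdv-Q2_K.gguf",
    n_ctx=4096,  # context window; adjust to your hardware
)

# create_chat_completion applies the chat template carried in the GGUF
# metadata, which for Qwen models should match the format shown above.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is 12 * 13?"},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```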
Model file specification
Downloading instructions
Command line
First, install the Hugging Face Hub CLI:
pip install -U "huggingface_hub[cli]"
Then, download an individual model file to a local directory:
huggingface-cli download tensorblock/luckeciano_Qwen-2.5-7B-RL-LACPO-BaselineNoKLNoEntropyNoSmoothSoftLabelNormAdv-GGUF --include "Qwen-2.5-7B-RL-LACPO-BaselineNoKLNoEntropyNoSmoothSoftLabelNormAdv-Q2_K.gguf" --local-dir MY_LOCAL_DIR
If you want to download multiple model files matching a pattern (e.g., *Q4_K*gguf), you can try:
huggingface-cli download tensorblock/luckeciano_Qwen-2.5-7B-RL-LACPO-BaselineNoKLNoEntropyNoSmoothSoftLabelNormAdv-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
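The same downloads can also be scripted from Python with huggingface_hub. This is a sketch using its hf_hub_download and snapshot_download functions; the target directory is a placeholder:

```python
# Sketch: download from Python instead of the CLI. Paths are placeholders.
from huggingface_hub import hf_hub_download, snapshot_download

# Single file (same repo and filename as the CLI example above).
hf_hub_download(
    repo_id="tensorblock/luckeciano_Qwen-2.5-7B-RL-LACPO-BaselineNoKLNoEntropyNoSmoothSoftLabelNormAdv-GGUF",
    filename="Qwen-2.5-7B-RL-LACPO-BaselineNoKLNoEntropyNoSmoothSoftLabelNormAdv-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",
)

# Multiple files matching a pattern (mirrors the --include flag above).
snapshot_download(
    repo_id="tensorblock/luckeciano_Qwen-2.5-7B-RL-LACPO-BaselineNoKLNoEntropyNoSmoothSoftLabelNormAdv-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir="MY_LOCAL_DIR",
)
```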
Hardware compatibility
Quantized variants are provided at 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit precision.
Model tree for tensorblock/luckeciano_Qwen-2.5-7B-RL-LACPO-BaselineNoKLNoEntropyNoSmoothSoftLabelNormAdv-GGUF
- Base model: Qwen/Qwen2.5-7B
- Finetuned: Qwen/Qwen2.5-Math-7B