---
base_model: bunnycore/Qwen-2.5-7B-1M-RRP-v1-lora
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- llama-cpp
- gguf-my-lora
license: apache-2.0
language:
- en
datasets:
- AMead10/Sky-T1_data_17k_sharegpt
- Chaser-cz/sonnet35-charcard-roleplay-sharegpt
---

# wqerrewetw/Qwen-2.5-7B-1M-RRP-v1-lora-F16-GGUF

This LoRA adapter was converted to GGUF format from [`bunnycore/Qwen-2.5-7B-1M-RRP-v1-lora`](https://huggingface.co/bunnycore/Qwen-2.5-7B-1M-RRP-v1-lora) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.

Refer to the [original adapter repository](https://huggingface.co/bunnycore/Qwen-2.5-7B-1M-RRP-v1-lora) for more details.

## Use with llama.cpp

```bash
# with cli
llama-cli -m base_model.gguf --lora Qwen-2.5-7B-1M-RRP-v1-lora-f16.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora Qwen-2.5-7B-1M-RRP-v1-lora-f16.gguf (...other args)
```

To learn more about using LoRA adapters with the llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
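As a fuller end-to-end sketch, the commands below start the server with the adapter applied and send it a test request. This assumes you have already obtained or converted the base model to GGUF as `base_model.gguf`; the filenames, port, and prompt are placeholders, not part of this repository:

```bash
# Start llama-server with the adapter applied.
# (Use --lora-scaled <file> <scale> instead of --lora to blend the adapter at a custom weight.)
llama-server -m base_model.gguf \
  --lora Qwen-2.5-7B-1M-RRP-v1-lora-f16.gguf \
  --port 8080

# In another shell, query the running server via its /completion endpoint.
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello,", "n_predict": 64}'
```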