Qwen2.5-14B-Instruct-1M GGUF Models

Choosing the Right Model Format

Selecting the correct model format depends on your hardware capabilities and memory constraints.

BF16 (Brain Float 16) – Use if BF16 acceleration is available

  • A 16-bit floating-point format designed for faster computation while retaining good precision.
  • Provides a dynamic range similar to FP32 but with lower memory usage.
  • Recommended if your hardware supports BF16 acceleration (check your device’s specs, or see the quick check below).
  • Ideal for high-performance inference with a reduced memory footprint compared to FP32.

πŸ“Œ Use BF16 if:
βœ” Your hardware has native BF16 support (e.g., newer GPUs, TPUs).
βœ” You want higher precision while saving memory.
βœ” You plan to requantize the model into another format.

πŸ“Œ Avoid BF16 if:
❌ Your hardware does not support BF16 (it may fall back to FP32 and run slower).
❌ You need compatibility with older devices that lack BF16 optimization.
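
As a quick way to run the "native BF16 support" check mentioned above, here is a minimal sketch assuming PyTorch with CUDA is installed; CPUs and other back-ends need their own vendor-specific checks.

import torch

# If the GPU reports native BF16 support, the BF16 GGUF is a reasonable choice;
# otherwise fall back to the F16 or quantized files described below.
if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
    print("Native BF16 support detected - the BF16 file should run efficiently.")
else:
    print("No native BF16 support - consider the F16 or quantized files instead.")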


F16 (Float 16) – More widely supported than BF16

  • A 16-bit floating-point format with high precision but a smaller range of values than BF16.
  • Works on most devices with FP16 acceleration support (including many GPUs and some CPUs).
  • Slightly lower numerical precision than BF16 but generally sufficient for inference.

πŸ“Œ Use F16 if:
βœ” Your hardware supports FP16 but not BF16.
βœ” You need a balance between speed, memory usage, and accuracy.
βœ” You are running on a GPU or another device optimized for FP16 computations.

πŸ“Œ Avoid F16 if:
❌ Your device lacks native FP16 support (it may run slower than expected).
❌ You have memory limitations.


Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference

Quantization reduces model size and memory usage while maintaining as much accuracy as possible; a rough size estimate is sketched at the end of this subsection.

  • Lower-bit models (Q4_K) β†’ Best for minimal memory usage, may have lower precision.
  • Higher-bit models (Q6_K, Q8_0) β†’ Better accuracy, requires more memory.

πŸ“Œ Use Quantized Models if:
βœ” You are running inference on a CPU and need an optimized model.
βœ” Your device has low VRAM and cannot load full-precision models.
βœ” You want to reduce memory footprint while keeping reasonable accuracy.

πŸ“Œ Avoid Quantized Models if:
❌ You need maximum accuracy (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).


Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)

These models are optimized for extreme memory efficiency, making them ideal for low-power devices or large-scale deployments where memory is a critical constraint.

  • IQ3_XS: Ultra-low-bit quantization (3-bit) with extreme memory efficiency.

    • Use case: Best for ultra-low-memory devices where even Q4_K is too large.
    • Trade-off: Lower accuracy compared to higher-bit quantizations.
  • IQ3_S: Small block size for maximum memory efficiency.

    • Use case: Best for low-memory devices where IQ3_XS is too aggressive.
  • IQ3_M: Medium block size for better accuracy than IQ3_S.

    • Use case: Suitable for low-memory devices where IQ3_S is too limiting.
  • Q4_K: 4-bit quantization with block-wise optimization for better accuracy.

    • Use case: Best for low-memory devices where Q6_K is too large.
  • Q4_0: Pure 4-bit quantization, optimized for ARM devices.

    • Use case: Best for ARM-based devices or low-memory environments.

Summary Table: Model Format Selection

| Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
|---|---|---|---|---|
| BF16 | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
| F16 | High | High | FP16-supported devices | GPU inference when BF16 isn’t available |
| Q4_K | Medium-Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments |
| Q6_K | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized |
| Q8_0 | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
| IQ3_XS | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy |
| Q4_0 | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices |

Included Files & Details

Qwen2.5-14B-Instruct-1M-bf16.gguf

  • Model weights preserved in BF16.
  • Use this if you want to requantize the model into a different format.
  • Best if your device supports BF16 acceleration.

Qwen2.5-14B-Instruct-1M-f16.gguf

  • Model weights stored in F16.
  • Use if your device supports FP16, especially if BF16 is not available.

Qwen2.5-14B-Instruct-1M-bf16-q8_0.gguf

  • Output & embeddings remain in BF16.
  • All other layers quantized to Q8_0.
  • Use if your device supports BF16 and you want a quantized version.

Qwen2.5-14B-Instruct-1M-f16-q8_0.gguf

  • Output & embeddings remain in F16.
  • All other layers quantized to Q8_0.

Qwen2.5-14B-Instruct-1M-q4_k.gguf

  • Output & embeddings quantized to Q8_0.
  • All other layers quantized to Q4_K.
  • Good for CPU inference with limited memory.

Qwen2.5-14B-Instruct-1M-q4_k_s.gguf

  • Smallest Q4_K variant, using less memory at the cost of accuracy.
  • Best for very low-memory setups.

Qwen2.5-14B-Instruct-1M-q6_k.gguf

  • Output & embeddings quantized to Q8_0.
  • All other layers quantized to Q6_K.

Qwen2.5-14B-Instruct-1M-q8_0.gguf

  • Fully Q8 quantized model for better accuracy.
  • Requires more memory but offers higher precision.

Qwen2.5-14B-Instruct-1M-iq3_xs.gguf

  • IQ3_XS quantization, optimized for extreme memory efficiency.
  • Best for ultra-low-memory devices.

Qwen2.5-14B-Instruct-1M-iq3_m.gguf

  • IQ3_M quantization, offering a medium block size for better accuracy.
  • Suitable for low-memory devices.

Qwen2.5-14B-Instruct-1M-q4_0.gguf

  • Pure Q4_0 quantization, optimized for ARM devices.
  • Best for low-memory environments.
  • Prefer IQ4_NL for better accuracy.
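
As a minimal sketch of downloading and running one of the files above, assuming the huggingface_hub and llama-cpp-python packages are installed; the file name is one of those listed above, and n_ctx / n_threads should be adjusted to your machine.

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the GGUF files listed above from this repository.
model_path = hf_hub_download(
    repo_id="Mungert/Qwen2.5-14B-Instruct-1M-GGUF",
    filename="Qwen2.5-14B-Instruct-1M-q4_k.gguf",
)

# Load the model for CPU inference; keep the context small to limit memory use.
llm = Llama(model_path=model_path, n_ctx=4096, n_threads=8)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])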

πŸš€ If you find these models useful

Please click like ❀ . Also I’d really appreciate it if you could test my Network Monitor Assistant at πŸ‘‰ Network Monitor Assitant.

πŸ’¬ Click the chat icon (bottom right of the main and dashboard pages) . Choose a LLM; toggle between the LLM Types TurboLLM -> FreeLLM -> TestLLM.

What I'm Testing

I'm experimenting with function calling against my network monitoring service, using small open-source models. I'm focused on the question: how small can a model be and still function?

🟡 TestLLM – Runs the current testing model using llama.cpp on 6 threads of a CPU VM (it should take about 15 s to load; inference is quite slow and it only processes one user prompt at a time, as I'm still working on scaling). If you're curious, I'd be happy to share how it works!

The Other Available AI Assistants

🟢 TurboLLM – Uses gpt-4o-mini. Fast! Note: tokens are limited since OpenAI models are pricey, but you can log in or download the Free Network Monitor agent to get more tokens. Alternatively, use the TestLLM.

πŸ”΅ HugLLM – Runs open-source Hugging Face models Fast, Runs small models (β‰ˆ8B) hence lower quality, Get 2x more tokens (subject to Hugging Face API availability)

Qwen2.5-14B-Instruct-1M

Chat

Introduction

Qwen2.5-1M is the long-context version of the Qwen2.5 series models, supporting a context length of up to 1M tokens. Compared to the Qwen2.5 128K version, Qwen2.5-1M demonstrates significantly improved performance in handling long-context tasks while maintaining its capability in short tasks.

The model has the following features:

  • Type: Causal Language Models
  • Training Stage: Pretraining & Post-training
  • Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
  • Number of Parameters: 14.7B
  • Number of Parameters (Non-Embedding): 13.1B
  • Number of Layers: 48
  • Number of Attention Heads (GQA): 40 for Q and 8 for KV
  • Context Length: Full 1,010,000 tokens, with generation of up to 8,192 tokens
    • We recommend deploying with our custom vLLM, which introduces sparse attention and length extrapolation methods to ensure efficiency and accuracy for long-context tasks. For specific guidance, refer to this section.
    • You can also use the previous framework that supports Qwen2.5 for inference, but accuracy degradation may occur for sequences exceeding 262,144 tokens.

For more details, please refer to our blog, GitHub, Technical Report, and Documentation.

Requirements

The code of Qwen2.5 has been included in the latest Hugging Face transformers, and we advise you to use the latest version of transformers.

With transformers<4.37.0, you will encounter the following error:

KeyError: 'qwen2'
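
A quick way to confirm the installed version before loading the model (a minimal sketch assuming a pip-managed environment; upgrade with pip install --upgrade transformers if needed):

import transformers

# Qwen2.5 requires transformers >= 4.37.0; older versions raise KeyError: 'qwen2'.
print(transformers.__version__)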

Quickstart

Here is a code snippet with apply_chat_template that shows how to load the tokenizer and model and how to generate content.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-14B-Instruct-1M"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

Processing Ultra Long Texts

To enhance processing accuracy and efficiency for long sequences, we have developed an advanced inference framework based on vLLM, incorporating sparse attention and length extrapolation. This approach significantly improves model generation performance for sequences exceeding 256K tokens and achieves a 3 to 7 times speedup for sequences up to 1M tokens.

Here we provide step-by-step instructions for deploying the Qwen2.5-1M models with our framework.

1. System Preparation

To achieve the best performance, we recommend using GPUs with Ampere or Hopper architecture, which support optimized kernels.

Ensure your system meets the following requirements:

  • CUDA Version: 12.1 or 12.3
  • Python Version: >=3.9 and <=3.12

VRAM Requirements:

  • For processing 1 million-token sequences:
    • Qwen2.5-7B-Instruct-1M: At least 120GB VRAM (total across GPUs).
    • Qwen2.5-14B-Instruct-1M: At least 320GB VRAM (total across GPUs).

If your GPUs do not have sufficient VRAM, you can still use Qwen2.5-1M for shorter tasks.
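
To see roughly where these figures come from, here is a back-of-the-envelope sketch of the full-length KV cache for the 14B model, using the layer and KV-head counts listed above; the 128 head dimension and FP16 cache dtype are assumptions, and model weights, activations, and framework overhead come on top of this.

# Approximate KV-cache size for a ~1M-token context (a rough sketch, not an exact figure).
layers, kv_heads, head_dim = 48, 8, 128   # head_dim = 128 is an assumption
bytes_per_elem = 2                        # assuming an FP16/BF16 KV cache
seq_len = 1_010_000

kv_bytes = 2 * layers * kv_heads * head_dim * bytes_per_elem * seq_len  # 2 = keys and values
print(f"KV cache alone: ~{kv_bytes / 1e9:.0f} GB")  # roughly 200 GB before weights and activations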

2. Install Dependencies

For now, you need to clone the vLLM repository from our custom branch and install it manually. We are working on getting our branch merged into the main vLLM project.

git clone -b dev/dual-chunk-attn git@github.com:QwenLM/vllm.git
cd vllm
pip install -e . -v

3. Launch vLLM

vLLM supports offline inference or launching an OpenAI-compatible server.

Example of Offline Inference

from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

# Initialize the tokenizer
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-14B-Instruct-1M")

# Pass the default decoding hyperparameters of Qwen2.5-14B-Instruct
# max_tokens is for the maximum length for generation.
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=512)

# Input the model name or path. See below for parameter explanation (after the example of openai-like server).
llm = LLM(model="Qwen/Qwen2.5-14B-Instruct-1M",
    tensor_parallel_size=4,
    max_model_len=1010000,
    enable_chunked_prefill=True,
    max_num_batched_tokens=131072,
    enforce_eager=True,
    # quantization="fp8", # Enabling FP8 quantization for model weights can reduce memory usage.
)

# Prepare your prompts
prompt = "Tell me something about large language models."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# generate outputs
outputs = llm.generate([text], sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

Example of OpenAI-like Server

vllm serve Qwen/Qwen2.5-14B-Instruct-1M \
  --tensor-parallel-size 4 \
  --max-model-len 1010000 \
  --enable-chunked-prefill --max-num-batched-tokens 131072 \
  --enforce-eager \
  --max-num-seqs 1

# --quantization fp8  # Enabling FP8 quantization for model weights can reduce memory usage.

Then you can use curl or python to interact with the deployed model.
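
For example, here is a minimal Python client sketch using the openai package; the base URL assumes the server above is running locally on vLLM's default port 8000, and the api_key value is a placeholder since the server as launched does not check keys.

from openai import OpenAI

# Point the client at the locally running OpenAI-compatible vLLM server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="Qwen/Qwen2.5-14B-Instruct-1M",
    messages=[
        {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
        {"role": "user", "content": "Tell me something about large language models."},
    ],
    max_tokens=512,
)
print(completion.choices[0].message.content)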

Parameter Explanations:

  • --tensor-parallel-size

    • Set to the number of GPUs you are using. Max 4 GPUs for the 7B model, and 8 GPUs for the 14B model.
  • --max-model-len

    • Defines the maximum input sequence length. Reduce this value if you encounter Out of Memory issues.
  • --max-num-batched-tokens

    • Sets the chunk size in Chunked Prefill. A smaller value reduces activation memory usage but may slow down inference.
    • Recommend 131072 for optimal performance.
  • --max-num-seqs

    • Limits concurrent sequences processed.

You can also refer to our Documentation for usage of vLLM.

Troubleshooting:

  1. Encountering the error: "The model's max sequence length (xxxxx) is larger than the maximum number of tokens that can be stored in the KV cache."

    The VRAM reserved for the KV cache is insufficient. Consider reducing the max_model_len or increasing the tensor_parallel_size. Alternatively, you can reduce max_num_batched_tokens, although this may significantly slow down inference.

  2. Encountering the error: "torch.OutOfMemoryError: CUDA out of memory."

    The VRAM reserved for activation weights is insufficient. You can try setting gpu_memory_utilization to 0.85 or lower (see the sketch after this list), but be aware that this might reduce the VRAM available for the KV cache.

  3. Encountering the error: "Input prompt (xxxxx tokens) + lookahead slots (0) is too long and exceeds the capacity of the block manager."

    The input is too lengthy. Consider using a shorter sequence or increasing the max_model_len.
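
For item 2 above, gpu_memory_utilization is passed when constructing the LLM; a minimal sketch mirroring the offline-inference example (0.85 is the starting point suggested above, not a tuned value):

from vllm import LLM

llm = LLM(
    model="Qwen/Qwen2.5-14B-Instruct-1M",
    tensor_parallel_size=4,
    max_model_len=1010000,
    enable_chunked_prefill=True,
    max_num_batched_tokens=131072,
    enforce_eager=True,
    gpu_memory_utilization=0.85,  # lower this further if activations still run out of memory
)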

Evaluation & Performance

Detailed evaluation results are reported in this πŸ“‘ blog and our technical report.

Citation

If you find our work helpful, feel free to give us a cite.

@misc{qwen2.5-1m,
    title = {Qwen2.5-1M: Deploy Your Own Qwen with Context Length up to 1M Tokens},
    url = {https://qwenlm.github.io/blog/qwen2.5-1m/},
    author = {Qwen Team},
    month = {January},
    year = {2025}
}

@article{qwen2.5,
      title={Qwen2.5-1M Technical Report}, 
      author={An Yang and Bowen Yu and Chengyuan Li and Dayiheng Liu and Fei Huang and Haoyan Huang and Jiandong Jiang and Jianhong Tu and Jianwei Zhang and Jingren Zhou and Junyang Lin and Kai Dang and Kexin Yang and Le Yu and Mei Li and Minmin Sun and Qin Zhu and Rui Men and Tao He and Weijia Xu and Wenbiao Yin and Wenyuan Yu and Xiafei Qiu and Xingzhang Ren and Xinlong Yang and Yong Li and Zhiying Xu and Zipeng Zhang},
      journal={arXiv preprint arXiv:2501.15383},
      year={2025}
}