granite-3.3-8b-instruct GGUF Models

Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)

Our latest quantization method introduces precision-adaptive quantization for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on Llama-3-8B. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency.

Benchmark Context

All tests conducted on Llama-3-8B-Instruct using:

  • Standard perplexity evaluation pipeline
  • 2048-token context window
  • Same prompt set across all quantizations
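
For reference, perplexity here is the exponential of the mean token-level negative log-likelihood over the evaluation set. A minimal sketch of that computation with transformers (illustrative only; not the exact benchmark harness used for these tables):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model, tokenizer, text, max_length=2048):
    # Tokenize to at most the 2048-token context used in the benchmarks.
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=max_length)
    with torch.no_grad():
        # Supplying labels makes the model return the mean cross-entropy
        # loss over predicted tokens; PPL is its exponential.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()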

Method

  • Dynamic Precision Allocation (see the sketch after this list):
    • First/last 25% of layers β†’ IQ4_XS (selected layers)
    • Middle 50% β†’ IQ2_XXS/IQ3_S (for efficiency)
  • Critical Component Protection:
    • Embeddings/output layers use Q5_K
    • Reduces error propagation by 38% vs. standard 1-2 bit quantization
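
The allocation rule above can be summarized in a few lines. This is an illustrative sketch only; the real assignment happens inside the quantization tooling, and quant_type_for_layer is a hypothetical helper, not a llama.cpp API:

def quant_type_for_layer(layer_idx: int, n_layers: int) -> str:
    # Position of this layer in the stack, as a fraction in [0, 1).
    pos = layer_idx / n_layers
    if pos < 0.25 or pos >= 0.75:
        return "IQ4_XS"    # first/last 25%: higher-precision quant
    return "IQ2_XXS"       # middle 50%: maximum compression
# Embeddings and the output layer stay at Q5_K to limit error propagation.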

Quantization Performance Comparison (Llama-3-8B)

| Quantization | Standard PPL | DynamicGate PPL | Ξ” PPL | Std Size | DG Size | Ξ” Size | Std Speed | DG Speed |
|---|---|---|---|---|---|---|---|---|
| IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s |
| IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s |
| IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s |
| IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s |
| IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s |

Key:

  • PPL = Perplexity (lower is better)
  • Ξ” PPL = Percentage change from standard to DynamicGate
  • Speed = Inference time (CPU AVX2, 2048-token context)
  • Size differences reflect mixed quantization overhead

Key Improvements:

  • πŸ”₯ IQ1_M shows massive 43.9% perplexity reduction (27.46 β†’ 15.41)
  • πŸš€ IQ2_S cuts perplexity by 36.9% while adding only 0.2GB
  • ⚑ IQ1_S achieves 39.7% lower perplexity despite 1-bit quantization

Tradeoffs:

  • All variants have modest size increases (0.1-0.3GB)
  • Inference speeds remain comparable (<5% difference)

When to Use These Models

βœ” Fitting models into GPU VRAM

βœ” Memory-constrained deployments

βœ” CPU and edge devices where 1-2 bit errors can be tolerated

βœ” Research into ultra-low-bit quantization

Choosing the Right Model Format

Selecting the correct model format depends on your hardware capabilities and memory constraints.

BF16 (Brain Float 16) – Use if BF16 acceleration is available

  • A 16-bit floating-point format designed for faster computation while retaining good precision.
  • Provides similar dynamic range as FP32 but with lower memory usage.
  • Recommended if your hardware supports BF16 acceleration (check your device's specs).
  • Ideal for high-performance inference with reduced memory footprint compared to FP32.

πŸ“Œ Use BF16 if:
βœ” Your hardware has native BF16 support (e.g., newer GPUs, TPUs).
βœ” You want higher precision while saving memory.
βœ” You plan to requantize the model into another format.

πŸ“Œ Avoid BF16 if:
❌ Your hardware does not support BF16 (it may fall back to FP32 and run slower).
❌ You need compatibility with older devices that lack BF16 optimization.
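
A quick way to check for native BF16 support, assuming a CUDA build of PyTorch is available:

import torch

# Probe for native BF16 support on the current CUDA device.
if torch.cuda.is_available():
    print("BF16 supported:", torch.cuda.is_bf16_supported())
else:
    print("No CUDA device found; BF16 acceleration is unlikely here.")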


F16 (Float 16) – More widely supported than BF16

  • A 16-bit floating-point format with high precision but a narrower range of values than BF16.
  • Works on most devices with FP16 acceleration support (including many GPUs and some CPUs).
  • Slightly lower numerical precision than BF16 but generally sufficient for inference.

πŸ“Œ Use F16 if:
βœ” Your hardware supports FP16 but not BF16.
βœ” You need a balance between speed, memory usage, and accuracy.
βœ” You are running on a GPU or another device optimized for FP16 computations.

πŸ“Œ Avoid F16 if:
❌ Your device lacks native FP16 support (it may run slower than expected).
❌ You have memory limitations.


Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference

Quantization reduces model size and memory usage while maintaining as much accuracy as possible.

  • Lower-bit models (Q4_K) β†’ Best for minimal memory usage, may have lower precision.
  • Higher-bit models (Q6_K, Q8_0) β†’ Better accuracy, requires more memory.

πŸ“Œ Use Quantized Models if:
βœ” You are running inference on a CPU and need an optimized model.
βœ” Your device has low VRAM and cannot load full-precision models.
βœ” You want to reduce memory footprint while keeping reasonable accuracy.

πŸ“Œ Avoid Quantized Models if:
❌ You need maximum accuracy (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).
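
As a usage sketch, a quantized GGUF file from this repo can be run on CPU through the llama-cpp-python bindings (assumed installed via pip install llama-cpp-python; llama.cpp's own CLI works just as well):

from llama_cpp import Llama

# Load a 4-bit file listed later in this card; n_ctx matches the
# 2048-token context used in the benchmarks above.
llm = Llama(model_path="granite-3.3-8b-instruct-q4_k.gguf", n_ctx=2048)
out = llm("Summarize the benefits of 4-bit quantization.", max_tokens=128)
print(out["choices"][0]["text"])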


Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)

These models are optimized for extreme memory efficiency, making them ideal for low-power devices or large-scale deployments where memory is a critical constraint.

  • IQ3_XS: Ultra-low-bit quantization (3-bit) with extreme memory efficiency.

    • Use case: Best for ultra-low-memory devices where even Q4_K is too large.
    • Trade-off: Lower accuracy compared to higher-bit quantizations.
  • IQ3_S: Small block size for maximum memory efficiency.

    • Use case: Best for low-memory devices where IQ3_XS is too aggressive.
  • IQ3_M: Medium block size for better accuracy than IQ3_S.

    • Use case: Suitable for low-memory devices where IQ3_S is too limiting.
  • Q4_K: 4-bit quantization with block-wise optimization for better accuracy.

    • Use case: Best for low-memory devices where Q6_K is too large.
  • Q4_0: Pure 4-bit quantization, optimized for ARM devices.

    • Use case: Best for ARM-based devices or low-memory environments.

Summary Table: Model Format Selection

| Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
|---|---|---|---|---|
| BF16 | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
| F16 | High | High | FP16-supported devices | GPU inference when BF16 isn't available |
| Q4_K | Medium-Low | Low | CPU or low-VRAM devices | Best for memory-constrained environments |
| Q6_K | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized |
| Q8_0 | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
| IQ3_XS | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency, lower accuracy |
| Q4_0 | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices |
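
The table can also be read as a simple decision rule. The helper below is illustrative glue code only, with the branches taken from the table's labels rather than measured sizes:

def pick_format(has_bf16: bool, has_fp16: bool,
                low_memory: bool, ultra_low_memory: bool) -> str:
    # Follows the summary table top-down, most constrained case first.
    if ultra_low_memory:
        return "IQ3_XS"
    if low_memory:
        return "Q4_K"
    if has_bf16:
        return "BF16"
    if has_fp16:
        return "F16"
    return "Q8_0"  # best-accuracy quantized fallback for CPU/GPU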

Included Files & Details

granite-3.3-8b-instruct-bf16.gguf

  • Model weights preserved in BF16.
  • Use this if you want to requantize the model into a different format.
  • Best if your device supports BF16 acceleration.

granite-3.3-8b-instruct-f16.gguf

  • Model weights stored in F16.
  • Use if your device supports FP16, especially if BF16 is not available.

granite-3.3-8b-instruct-bf16-q8_0.gguf

  • Output & embeddings remain in BF16.
  • All other layers quantized to Q8_0.
  • Use if your device supports BF16 and you want a quantized version.

granite-3.3-8b-instruct-f16-q8_0.gguf

  • Output & embeddings remain in F16.
  • All other layers quantized to Q8_0.

granite-3.3-8b-instruct-q4_k.gguf

  • Output & embeddings quantized to Q8_0.
  • All other layers quantized to Q4_K.
  • Good for CPU inference with limited memory.

granite-3.3-8b-instruct-q4_k_s.gguf

  • Smallest Q4_K variant, using less memory at the cost of accuracy.
  • Best for very low-memory setups.

granite-3.3-8b-instruct-q6_k.gguf

  • Output & embeddings quantized to Q8_0.
  • All other layers quantized to Q6_K.

granite-3.3-8b-instruct-q8_0.gguf

  • Fully Q8 quantized model for better accuracy.
  • Requires more memory but offers higher precision.

granite-3.3-8b-instruct-iq3_xs.gguf

  • IQ3_XS quantization, optimized for extreme memory efficiency.
  • Best for ultra-low-memory devices.

granite-3.3-8b-instruct-iq3_m.gguf

  • IQ3_M quantization, offering a medium block size for better accuracy.
  • Suitable for low-memory devices.

granite-3.3-8b-instruct-q4_0.gguf

  • Pure Q4_0 quantization, optimized for ARM devices.
  • Best for low-memory environments.
  • Prefer IQ4_NL for better accuracy.
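
Any single file above can be fetched without cloning the whole repo; a sketch using the huggingface_hub client (assuming the repo id shown in this card's model tree):

from huggingface_hub import hf_hub_download

# Download one quantized file rather than the full repository.
path = hf_hub_download(
    repo_id="Mungert/granite-3.3-8b-instruct-GGUF",
    filename="granite-3.3-8b-instruct-q4_k.gguf",
)
print(path)  # local cache path of the downloaded GGUF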

πŸš€ If you find these models useful

❀ Please click "Like" if you find this useful!
Help me test my AI-Powered Network Monitor Assistant with quantum-ready security checks:
πŸ‘‰ Free Network Monitor

πŸ’¬ How to test:

  1. Click the chat icon (bottom right on any page)
  2. Choose an AI assistant type:
    • TurboLLM (GPT-4-mini)
    • HugLLM (Open-source)
    • TestLLM (Experimental CPU-only)

What I’m Testing

I’m pushing the limits of small open-source models for AI network monitoring, specifically:

  • Function calling against live network services
  • How small can a model go while still handling:
    • Automated Nmap scans
    • Quantum-readiness checks
    • Metasploit integration

🟑 TestLLM – Current experimental model (llama.cpp on 6 CPU threads):

  • βœ… Zero-configuration setup
  • ⏳ 30s load time (slow inference but no API costs)
  • πŸ”§ Help wanted! If you’re into edge-device AI, let’s collaborate!

Other Assistants

🟒 TurboLLM – Uses gpt-4-mini.

πŸ”΅ HugLLM – Open-source models (β‰ˆ8B params):

  • 2x more tokens than TurboLLM
  • AI-powered log analysis
  • 🌐 Runs on Hugging Face Inference API

πŸ’‘ Example AI Commands to Test:

  1. "Give me info on my websites SSL certificate"
  2. "Check if my server is using quantum safe encyption for communication"
  3. "Run a quick Nmap vulnerability test"

Granite-3.3-8B-Instruct

Model Summary: Granite-3.3-8B-Instruct is an 8-billion-parameter, 128K-context-length language model fine-tuned for improved reasoning and instruction-following capabilities. Built on top of Granite-3.3-8B-Base, the model delivers significant gains on benchmarks measuring generic performance, including AlpacaEval-2.0 and Arena-Hard, and improvements in mathematics, coding, and instruction following. It supports structured reasoning through <think></think> and <response></response> tags, providing clear separation between internal thoughts and final outputs. The model has been trained on a carefully balanced combination of permissively licensed data and curated synthetic tasks.
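
Because the thinking trace and the final answer are delimited by those tags, downstream code can separate them with a simple parse. A minimal regex-based sketch, assuming well-formed tags:

import re

def split_output(text: str):
    # Extract the <think> trace and the <response> body, if present.
    think = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    resp = re.search(r"<response>(.*?)</response>", text, re.DOTALL)
    return (think.group(1).strip() if think else None,
            resp.group(1).strip() if resp else text.strip())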

Supported Languages: English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may fine-tune this Granite model for languages beyond these 12.

Intended Use: This model is designed to handle general instruction-following tasks and can be integrated into AI assistants across various domains, including business applications.

Capabilities

  • Thinking
  • Summarization
  • Text classification
  • Text extraction
  • Question-answering
  • Retrieval Augmented Generation (RAG)
  • Code related tasks
  • Function-calling tasks
  • Multilingual dialog use cases
  • Fill-in-the-middle
  • Long-context tasks including long document/meeting summarization, long document QA, etc.

Generation: This is a simple example of how to use the Granite-3.3-8B-Instruct model.

Install the following libraries:

pip install torch torchvision torchaudio
pip install accelerate
pip install transformers

Then, copy the snippet from the section that is relevant for your use case.

from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
import torch

model_path = "ibm-granite/granite-3.3-8b-instruct"
device = "cuda"

# Load the model in BF16 and place it on the GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map=device,
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

conv = [{"role": "user", "content": "Redesign a common household item to make it more sustainable and user-friendly. Explain the changes and their benefits."}]

# thinking=True enables the <think>/<response> structured-reasoning template.
input_ids = tokenizer.apply_chat_template(
    conv,
    return_tensors="pt",
    thinking=True,
    return_dict=True,
    add_generation_prompt=True,
).to(device)

set_seed(42)
output = model.generate(
    **input_ids,
    max_new_tokens=8192,
)

# Decode only the newly generated tokens, skipping the prompt.
prediction = tokenizer.decode(output[0, input_ids["input_ids"].shape[1]:], skip_special_tokens=True)
print(prediction)

Example Outputs

  • thinking=True
<think>To address this prompt, I'll focus on redesigning a common household item: the plastic water bottle. The goal is to make it more sustainable and user-friendly. Here are the proposed changes and their benefits:

1. **Material**: Replace plastic with biodegradable or recyclable materials like glass or plant-based plastics (PLA).
2. **Design**: Introduce a modular design for easy refilling and cleaning.
3. **Functionality**: Incorporate a built-in water filter for purification.
4. **Durability**: Enhance the bottle's lifespan through robust construction.
5. **User-friendliness**: Simplify the cap mechanism for easy opening and closing.

These changes would significantly reduce plastic waste, promote recycling, ensure access to clean drinking water, and enhance the overall user experience.

Now, I will detail each change and its benefits in a structured format.</think>

<response>### Redesign of the Plastic Water Bottle for Sustainability and User-Friendliness

#### 1. Material: Biodegradable or Recyclable Alternatives

**Change**: Transition from traditional PET plastic to glass or plant-based plastics (Polylactic Acid - PLA).

**Benefits**:
- **Sustainability**: Glass and PLA are biodegradable or highly recyclable, drastically reducing plastic pollution.
- **Health**: Unlike PET, these materials do not leach chemicals into the water, ensuring safer consumption.

#### 2. Modular Design for Easy Refilling and Cleaning

**Change**: Implement a detachable, interchangeable inner liner within a reusable outer casing.

**Benefits**:
- **User-Friendliness**: Users can easily remove the inner liner for thorough cleaning, reducing the risk of bacterial growth.
- **Sustainability**: The outer casing remains intact, encouraging reuse, while the inner liner can be replaced or recycled when worn out.

#### 3. Built-in Water Filter

**Change**: Integrate a removable, replaceable water filter into the bottle cap.

**Benefits**:
- **Accessibility**: Provides clean drinking water directly from any source (tap, river, etc.), eliminating the need for additional filtration devices.
- **Health**: Removes contaminants, bacteria, and impurities, ensuring safe hydration.
- **Sustainability**: Reduces the use of single-use plastic water bottles by offering a purification solution within a reusable container.

#### 4. Enhanced Durability

**Change**: Use high-quality, durable materials and robust construction techniques for both the outer casing and inner liner.

**Benefits**:
- **Longevity**: A more robust design increases the bottle's lifespan, reducing the frequency of replacements.
- **Cost-effectiveness**: Although the initial cost might be higher, the long-term savings from reduced replacements make it economical.
- **Sustainability**: Less frequent disposal means lower environmental impact.

#### 5. Simplified Cap Mechanism

**Change**: Adopt a one-touch, easy-to-open and close cap design, possibly inspired by sports bottle caps.

**Benefits**:
- **User-Friendliness**: Especially beneficial for children, elderly, or those with limited hand dexterity.
- **Convenience**: Quick access to water encourages more frequent hydration, promoting healthier habits.

### Conclusion

This redesigned water bottle addresses key sustainability concerns by minimizing plastic use and waste, while also enhancing user-friendliness through improved functionality, durability, and accessibility. By integrating biodegradable materials, a modular design, a built-in filter, robust construction, and a simple cap mechanism, the bottle not only reduces environmental footprint but also offers a superior user experience.</response>
  • thinking=False
Item for Redesign: Plastic Bottle (e.g., water or beverage bottle)

Redesign Concept: Eco-Friendly, Modular, and Multi-Functional Reusable Bottle

Changes:

1. Material: Replace the conventional plastic with food-grade, BPA-free, and recyclable materials such as stainless steel or glass. These materials are durable, non-toxic, and have a longer lifespan, reducing waste.

2. Modular Design: Introduce a modular design that allows users to customize their bottle according to their needs. This could include interchangeable caps, mouthpiece sizes, and carrying loops. Users can purchase additional modules as needed, promoting a more sustainable approach by reducing the need to buy an entirely new bottle for different purposes.

3. Integrated Filter: Incorporate a built-in, washable, and reusable filter that can remove impurities and improve the taste of water. This eliminates the need for single-use disposable filters or bottled water, further reducing plastic waste.

4. Smart Cap: Develop a smart cap with a built-in digital display and temperature sensor. This feature allows users to track their daily water intake, set hydration goals, and monitor the temperature of their beverage. The smart cap can be synced with a mobile app for additional functionality, such as reminders and progress tracking.

5. Easy-to-Clean Design: Ensure the bottle has a wide mouth and smooth interior surfaces for easy cleaning. Include a brush for hard-to-reach areas, making maintenance simple and encouraging regular use.

6. Collapsible Structure: Implement a collapsible design that reduces the bottle's volume when not in use, making it more portable and convenient for storage.

Benefits:

1. Sustainability: By using recyclable materials and reducing plastic waste, this redesigned bottle significantly contributes to a more sustainable lifestyle. The modular design and reusable filter also minimize single-use plastic consumption.

2. User-Friendly: The smart cap, easy-to-clean design, and collapsible structure make the bottle convenient and user-friendly. Users can customize their bottle to suit their needs, ensuring a better overall experience.

3. Healthier Option: Using food-grade, BPA-free materials and an integrated filter ensures that the beverages consumed are free from harmful chemicals and impurities, promoting a healthier lifestyle.

4. Cost-Effective: Although the initial investment might be higher, the long-term savings from reduced purchases of single-use plastic bottles and disposable filters make this reusable bottle a cost-effective choice.

5. Encourages Hydration: The smart cap's features, such as hydration tracking and temperature monitoring, can motivate users to stay hydrated and develop healthier habits.

By redesigning a common household item like the plastic bottle, we can create a more sustainable, user-friendly, and health-conscious alternative that benefits both individuals and the environment.

Evaluation Results:

Comparison with different models over various benchmarks[1]. Scores of AlpacaEval-2.0 and Arena-Hard are calculated with thinking=True.

| Models | Arena-Hard | AlpacaEval-2.0 | MMLU | PopQA | TruthfulQA | BigBenchHard[2] | DROP[3] | GSM8K | HumanEval | HumanEval+ | IFEval | AttaQ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Granite-3.1-2B-Instruct | 23.3 | 27.17 | 57.11 | 20.55 | 59.79 | 61.82 | 20.99 | 67.55 | 79.45 | 75.26 | 63.59 | 84.7 |
| Granite-3.2-2B-Instruct | 24.86 | 34.51 | 57.18 | 20.56 | 59.8 | 61.39 | 23.84 | 67.02 | 80.13 | 73.39 | 61.55 | 83.23 |
| Granite-3.3-2B-Instruct | 28.86 | 43.45 | 55.88 | 18.4 | 58.97 | 63.91 | 44.33 | 72.48 | 80.51 | 75.68 | 65.8 | 87.47 |
| Llama-3.1-8B-Instruct | 36.43 | 27.22 | 69.15 | 28.79 | 52.79 | 73.43 | 71.23 | 83.24 | 85.32 | 80.15 | 79.10 | 83.43 |
| DeepSeek-R1-Distill-Llama-8B | 17.17 | 21.85 | 45.80 | 13.25 | 47.43 | 67.39 | 49.73 | 72.18 | 67.54 | 62.91 | 66.50 | 42.87 |
| Qwen-2.5-7B-Instruct | 25.44 | 30.34 | 74.30 | 18.12 | 63.06 | 69.19 | 64.06 | 84.46 | 93.35 | 89.91 | 74.90 | 81.90 |
| DeepSeek-R1-Distill-Qwen-7B | 10.36 | 15.35 | 50.72 | 9.94 | 47.14 | 67.38 | 51.78 | 78.47 | 79.89 | 78.43 | 59.10 | 42.45 |
| Granite-3.1-8B-Instruct | 37.58 | 30.34 | 66.77 | 28.7 | 65.84 | 69.87 | 58.57 | 79.15 | 89.63 | 85.79 | 73.20 | 85.73 |
| Granite-3.2-8B-Instruct | 55.25 | 61.19 | 66.79 | 28.04 | 66.92 | 71.86 | 58.29 | 81.65 | 89.35 | 85.72 | 74.31 | 84.7 |
| Granite-3.3-8B-Instruct | 57.56 | 62.68 | 65.54 | 26.17 | 66.86 | 69.13 | 59.36 | 80.89 | 89.73 | 86.09 | 74.82 | 88.5 |
Math Benchmarks

| Models | AIME24 | MATH-500 |
|---|---|---|
| Granite-3.1-2B-Instruct | 0.89 | 35.07 |
| Granite-3.2-2B-Instruct | 0.89 | 35.54 |
| Granite-3.3-2B-Instruct | 3.28 | 58.09 |
| Granite-3.1-8B-Instruct | 1.97 | 48.73 |
| Granite-3.2-8B-Instruct | 2.43 | 52.8 |
| Granite-3.3-8B-Instruct | 8.12 | 69.02 |

Training Data: Overall, our training data is largely comprised of two key sources: (1) publicly available datasets with permissive licenses, and (2) internal synthetically generated data targeted to enhance reasoning capabilities.

Infrastructure: We train Granite-3.3-8B-Instruct using IBM's supercomputing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.

Ethical Considerations and Limitations: Granite-3.3-8B-Instruct builds upon Granite-3.3-8B-Base, leveraging both permissively licensed open-source and select proprietary data for enhanced performance. Since it inherits its foundation from the previous model, all ethical considerations and limitations applicable to Granite-3.3-8B-Base remain relevant.

Resources

[1] Evaluated using OLMES (except AttaQ and Arena-Hard scores)

[2] Added regex for more efficient answer extraction.

[3] Modified the implementation to handle some of the issues mentioned here.
