Table of Contents

  1. TL;DR
  2. Model Details
  3. Training Details
  4. Usage
  5. Evaluation
  6. Useful Links
  7. Citation

TL;DR

Model Details

Model Description

  • Developed by: https://www.tii.ae
  • Model type: Causal decoder-only
  • Architecture: Hybrid Transformers + Mamba architecture
  • Language(s) (NLP): English, Multilingual
  • License: Falcon-LLM License

Training Details

For more details about the training protocol of this model, please refer to the Falcon-H1 technical blogpost.

Usage

Currently, you can use this model with Hugging Face transformers, vLLM, or our custom fork of the llama.cpp library.

Inference

Make sure to install the latest version of transformers or vLLM; if needed, install these packages from source:

pip install git+https://github.com/huggingface/transformers.git

Refer to the official vLLM documentation for more details on building vLLM from source.

🤗 transformers

Refer to the snippet below to run H1 models using 🤗 transformers:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/Falcon-H1-1B-Base"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Perform text generation
inputs = tokenizer("An increasing sequence: one, two,", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
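
Since this repository hosts an Instruct variant, you will usually want to format prompts with the model's chat template rather than raw text. The snippet below is a minimal sketch, assuming the tiiuae/Falcon-H1-1.5B-Instruct checkpoint ships a chat template (standard for -Instruct releases):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the instruct checkpoint provides a chat template.
model_id = "tiiuae/Falcon-H1-1.5B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Format the conversation with the chat template and append the
# assistant turn marker so the model starts its reply.
messages = [{"role": "user", "content": "Summarize what a hybrid Transformer + Mamba model is."}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))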

vLLM

For vLLM, simply start a server by executing the command below:

# pip install vllm
vllm serve tiiuae/Falcon-H1-1B-Instruct --tensor-parallel-size 2 --data-parallel-size 1
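
Once the server is running, you can query it through vLLM's OpenAI-compatible API. Below is a minimal sketch, assuming the default endpoint at http://localhost:8000/v1 and the openai Python client:

from openai import OpenAI

# vLLM's OpenAI-compatible server does not check the API key by default,
# but the client requires a non-empty value.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="tiiuae/Falcon-H1-1B-Instruct",
    messages=[{"role": "user", "content": "What is a state-space model?"}],
    max_tokens=128,
)
print(response.choices[0].message.content)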

🦙 llama.cpp

While we work on integrating the architecture directly into the llama.cpp library, you can install our fork and use it directly: https://github.com/tiiuae/llama.cpp-Falcon-H1. Follow the same installation guidelines as for upstream llama.cpp.
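
Once the fork is built, a typical workflow is to download one of the quantized weight files from this repository and run it with the llama-cli binary. The sketch below is illustrative: the exact .gguf filename is an assumption, so check the repository's file listing, and note that older llama.cpp builds name the binary main instead of llama-cli.

# Download a quantized weights file (filename is illustrative; check the repo for actual names)
huggingface-cli download tiiuae/Falcon-H1-1.5B-Instruct-GGUF \
    Falcon-H1-1.5B-Instruct-Q4_K_M.gguf --local-dir .

# Run interactive generation with the built llama-cli binary
./llama-cli -m Falcon-H1-1.5B-Instruct-Q4_K_M.gguf -p "Hello, my name is" -n 128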

Evaluation

The Falcon-H1 series performs very well on a variety of tasks, including reasoning tasks.

| Tasks | Falcon-H1-1.5B | Qwen3-1.7B | Qwen2.5-1.5B | Gemma3-1B | Llama3.2-1B | Falcon3-1B |
|---|---|---|---|---|---|---|
| General | | | | | | |
| BBH | 46.47 | 35.18 | 42.41 | 35.86 | 33.21 | 34.47 |
| ARC-C | 42.06 | 34.81 | 40.53 | 34.13 | 34.64 | 43.09 |
| TruthfulQA | 45.98 | 49.39 | 47.05 | 42.17 | 42.08 | 42.31 |
| HellaSwag | 63.33 | 49.27 | 62.23 | 42.24 | 55.30 | 58.53 |
| MMLU | 62.03 | 57.04 | 59.76 | 40.87 | 45.93 | 46.10 |
| Math | | | | | | |
| GSM8k | 74.98 | 69.83 | 57.47 | 42.38 | 44.28 | 44.05 |
| MATH-500 | 74.00 | 73.00 | 48.40 | 45.40 | 13.20 | 19.80 |
| AMC-23 | 43.59 | 46.09 | 24.06 | 19.22 | 7.19 | 6.87 |
| AIME-24 | 11.25 | 12.50 | 2.29 | 0.42 | 1.46 | 0.41 |
| AIME-25 | 9.58 | 8.12 | 1.25 | 1.25 | 0.00 | 0.21 |
| Science | | | | | | |
| GPQA | 26.34 | 27.68 | 26.26 | 28.19 | 26.59 | 26.76 |
| GPQA_Diamond | 35.19 | 33.33 | 25.59 | 21.55 | 25.08 | 31.31 |
| MMLU-Pro | 37.80 | 23.54 | 28.35 | 14.46 | 16.20 | 18.49 |
| MMLU-stem | 64.13 | 54.30 | 54.04 | 35.39 | 39.16 | 39.64 |
| Code | | | | | | |
| HumanEval | 68.29 | 67.68 | 56.10 | 40.85 | 34.15 | 22.56 |
| HumanEval+ | 61.59 | 60.96 | 50.61 | 37.20 | 29.88 | 20.73 |
| MBPP | 64.81 | 58.73 | 64.81 | 57.67 | 33.60 | 20.63 |
| MBPP+ | 56.35 | 49.74 | 56.08 | 50.00 | 29.37 | 17.20 |
| LiveCodeBench | 17.61 | 14.87 | 12.52 | 5.09 | 2.35 | 0.78 |
| CRUXEval | 39.57 | 18.88 | 34.76 | 12.70 | 0.06 | 15.58 |
| Instruction Following | | | | | | |
| IFEval | 80.66 | 70.77 | 45.33 | 61.48 | 55.34 | 54.26 |
| Alpaca-Eval | 28.18 | 21.89 | 9.54 | 17.87 | 9.38 | 6.98 |
| MTBench | 8.46 | 7.61 | 7.10 | 7.03 | 6.37 | 6.03 |
| LiveBench | 34.13 | 40.73 | 21.65 | 18.79 | 14.97 | 14.10 |

You can find more detailed benchmarks in our release blogpost.

Useful Links

  • Falcon-H1 release blogpost: https://falcon-lm.github.io/blog/falcon-h1
  • Technology Innovation Institute: https://www.tii.ae

Citation

If the Falcon-H1 family of models was helpful to your work, feel free to cite us:

@misc{tiifalconh1,
    title = {Falcon-H1},
    author = {Falcon-LLM Team},
    month = {May},
    year = {2025},
    url = {https://falcon-lm.github.io/blog/falcon-h1}
}