
Table of Contents

  1. TL;DR
  2. Model Details
  3. Training Details
  4. Usage
  5. Evaluation
  6. Citation

TL;DR

Falcon-H1-1.5B-Deep-Instruct is an instruction-tuned, causal decoder-only model built on a hybrid Transformer + Mamba architecture, developed by the Technology Innovation Institute (TII). It supports English and multilingual use and is released under the Falcon-LLM License.

Model Details

Model Description

  • Developed by: Technology Innovation Institute (https://www.tii.ae)
  • Model type: Causal decoder-only
  • Architecture: Hybrid Transformers + Mamba architecture
  • Language(s) (NLP): English, Multilingual
  • License: Falcon-LLM License

Training Details

For more details about the training protocol of this model, please refer to the Falcon-H1 technical blog post.

Usage

Currently, to use this model you can rely on Hugging Face transformers, vLLM, or our custom fork of the llama.cpp library.

Inference

Make sure to install the latest version of transformers or vLLM; if needed, install these packages from source:

pip install git+https://github.com/huggingface/transformers.git

Refer to the official vLLM documentation for more details on building vLLM from source.

πŸ€— transformers

Refer to the snippet below to run Falcon-H1 models using πŸ€— transformers:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/Falcon-H1-1.5B-Deep-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Perform text generation
messages = [{"role": "user", "content": "What is the capital of France?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
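
Alternatively, the high-level pipeline API works as well. The sketch below assumes a recent transformers release in which text-generation pipelines accept chat-style message lists; the prompt itself is purely illustrative:

import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="tiiuae/Falcon-H1-1.5B-Deep-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The pipeline applies the model's chat template to message lists automatically
messages = [{"role": "user", "content": "Explain state-space models in one sentence."}]
result = pipe(messages, max_new_tokens=128)
# The returned conversation includes the generated assistant turn last
print(result[0]["generated_text"][-1]["content"])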

vLLM

For vLLM, simply start a server by executing the command below:

# pip install vllm
vllm serve tiiuae/Falcon-H1-1.5B-Deep-Instruct --tensor-parallel-size 2 --data-parallel-size 1
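
Once the server is up, you can query it through the OpenAI-compatible API that vLLM exposes. Below is a minimal sketch, assuming the server runs on the default http://localhost:8000 and the openai Python package is installed; the prompt is illustrative:

from openai import OpenAI

# vLLM serves an OpenAI-compatible endpoint; no real API key is required
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="tiiuae/Falcon-H1-1.5B-Deep-Instruct",
    messages=[{"role": "user", "content": "Give a one-line summary of the Mamba architecture."}],
    max_tokens=128,
)
print(response.choices[0].message.content)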

llama.cpp

While we are working on integrating our architecture directly into the llama.cpp library, you can install our fork and use it directly: https://github.com/tiiuae/llama.cpp-Falcon-H1. Follow the same installation guidelines as for llama.cpp.

Evaluation

The Falcon-H1 series performs very well on a variety of tasks, including reasoning tasks.

| Tasks | Falcon-H1-1.5B-deep | Qwen3-1.7B | Qwen2.5-1.5B | Gemma3-1B | Llama3.2-1B | Falcon3-1B |
| --- | --- | --- | --- | --- | --- | --- |
| **General** | | | | | | |
| BBH | 54.43 | 35.18 | 42.41 | 35.86 | 33.21 | 34.47 |
| ARC-C | 43.86 | 34.81 | 40.53 | 34.13 | 34.64 | 43.09 |
| TruthfulQA | 50.48 | 49.39 | 47.05 | 42.17 | 42.08 | 42.31 |
| HellaSwag | 65.54 | 49.27 | 62.23 | 42.24 | 55.3 | 58.53 |
| MMLU | 66.11 | 57.04 | 59.76 | 40.87 | 45.93 | 46.1 |
| **Math** | | | | | | |
| GSM8k | 82.34 | 69.83 | 57.47 | 42.38 | 44.28 | 44.05 |
| MATH-500 | 77.8 | 73.0 | 48.4 | 45.4 | 13.2 | 19.8 |
| AMC-23 | 56.56 | 46.09 | 24.06 | 19.22 | 7.19 | 6.87 |
| AIME-24 | 14.37 | 12.5 | 2.29 | 0.42 | 1.46 | 0.41 |
| AIME-25 | 11.04 | 8.12 | 1.25 | 1.25 | 0.0 | 0.21 |
| **Science** | | | | | | |
| GPQA | 33.22 | 27.68 | 26.26 | 28.19 | 26.59 | 26.76 |
| GPQA_Diamond | 40.57 | 33.33 | 25.59 | 21.55 | 25.08 | 31.31 |
| MMLU-Pro | 41.89 | 23.54 | 28.35 | 14.46 | 16.2 | 18.49 |
| MMLU-stem | 67.3 | 54.3 | 54.04 | 35.39 | 39.16 | 39.64 |
| **Code** | | | | | | |
| HumanEval | 73.78 | 67.68 | 56.1 | 40.85 | 34.15 | 22.56 |
| HumanEval+ | 68.9 | 60.96 | 50.61 | 37.2 | 29.88 | 20.73 |
| MBPP | 68.25 | 58.73 | 64.81 | 57.67 | 33.6 | 20.63 |
| MBPP+ | 56.61 | 49.74 | 56.08 | 50.0 | 29.37 | 17.2 |
| LiveCodeBench | 23.87 | 14.87 | 12.52 | 5.09 | 2.35 | 0.78 |
| CRUXEval | 52.32 | 18.88 | 34.76 | 12.7 | 0.06 | 15.58 |
| **Instruction Following** | | | | | | |
| IFEval | 83.5 | 70.77 | 45.33 | 61.48 | 55.34 | 54.26 |
| Alpaca-Eval | 27.12 | 21.89 | 9.54 | 17.87 | 9.38 | 6.98 |
| MTBench | 8.53 | 7.61 | 7.1 | 7.03 | 6.37 | 6.03 |
| LiveBench | 36.83 | 40.73 | 21.65 | 18.79 | 14.97 | 14.1 |

You can find more detailed benchmarks in our release blog post.

Citation

If the Falcon-H1 family of models was helpful to your work, feel free to cite us:

@misc{tiifalconh1,
    title = {Falcon-H1: A Family of Hybrid-Head Language Models Redefining Efficiency and Performance},
    url = {https://falcon-lm.github.io/blog/falcon-h1},
    author = {Falcon-LLM Team},
    month = {May},
    year = {2025}
}