Welcome to the xGen-small family!

xGen-small (blog, arXiv) is an enterprise-ready compact language model that combines domain-focused data curation, scalable pre-training, length extension, and RL fine-tuning to deliver long-context performance at predictable, low cost. This model release is for research purposes only.

Model Series

xGen-small comes in two sizes (4B and 9B), each in two variants (pre-trained and post-trained):

| Model | # Total Params | Context Length | Variant | Download |
|---|---|---|---|---|
| salesforce/xgen-small-4B-base-r | 4B | 128k | Pre-trained | 🤗 Link |
| salesforce/xgen-small-4B-instruct-r | 4B | 128k | Post-trained | 🤗 Link |
| salesforce/xgen-small-9B-base-r | 9B | 128k | Pre-trained | 🤗 Link |
| salesforce/xgen-small-9B-instruct-r | 9B | 128k | Post-trained | 🤗 Link |
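
The base (pre-trained) checkpoints are plain language models and can be prompted with raw text, without a chat template. A minimal sketch (the prompt string is illustrative; the loading pattern mirrors the instruct example in Usage below):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base variant: raw text completion, no chat template needed.
model_name = "Salesforce/xgen-small-9B-base-r"
tokenizer = AutoTokenizer.from_pretrained(model_name)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto").to(device)

# Illustrative prompt; the base model simply continues the text.
inputs = tokenizer("Salesforce is", return_tensors="pt").to(device)
generated = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(generated[0], skip_special_tokens=True))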

Usage

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and the model in its native precision,
# moving the model to GPU if one is available
model_name = "Salesforce/xgen-small-9B-instruct-r"
tokenizer = AutoTokenizer.from_pretrained(model_name)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto"
).to(device)

# Format the prompt with the model's chat template
prompt = "What is Salesforce?"
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Generate a response and decode it back to text
generated = model.generate(inputs, max_new_tokens=128)
output = tokenizer.decode(
    generated[0],
    skip_special_tokens=True,
)
print(output)
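
Continuing from the example above, decoding behavior can be adjusted with the standard transformers sampling arguments; a minimal sketch (the temperature and top_p values are illustrative, not recommendations from the report):

# Sampling instead of greedy decoding; reuses model/inputs/tokenizer from above.
generated = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,  # illustrative value
    top_p=0.9,        # illustrative value
)
print(tokenizer.decode(generated[0], skip_special_tokens=True))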

Evaluation

| Category | Task | Llama 3.1-8B | Granite 3.3-8B | Qwen2.5-7B | xGen-small 9B Instruct |
|---|---|---|---|---|---|
| General Knowledge & Reasoning | MMLU | 68.3 | 62.7 | 72.4 | 72.4 |
| General Knowledge & Reasoning | MMLU-Pro | 43.2 | 43.5 | 56.7 | 57.3 |
| Chat | Arena-Hard-v1.0 | 28.9 | 30.5 | 48.1 | 60.1 |
| Chat | MT-Bench | 8.25 | 8.57 | 8.56 | 8.90 |
| Math & Science | GPQA | 31.9 | 35.3 | 32.6 | 45.8 |
| Math & Science | GSM8K | 84.2 | 89.4 | 91.9 | 95.3 |
| Math & Science | MATH | 48.9 | 70.9 | 74.6 | 91.6 |
| Math & Science | AIME 2024 | 6.7 | 10.0 | 6.7 | 50.0 |
| Coding | HumanEval+ | 61.6 | 65.9 | 74.4 | 78.7 |
| Coding | MBPP+ | 55.3 | 60.3 | 68.8 | 63.8 |
| Coding | LiveCodeBench | 10.3 | 10.3 | 12.1 | 50.6 |

Citation

@misc{xgensmall,
      title={xGen-small Technical Report}, 
      author={Erik Nijkamp and Bo Pang and Egor Pakhomov and Akash Gokul and Jin Qu and Silvio Savarese and Yingbo Zhou and Caiming Xiong},
      year={2025},
      eprint={2505.06496},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.06496}, 
}

Ethical Considerations

This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people's lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.

Model Licenses

The models are being released under CC-BY-NC-4.0, Copyright © Salesforce, Inc. All Rights Reserved.
