Paper: ScaleDiff: Scaling Difficult Problems for Advanced Mathematical Reasoning

Code: https://github.com/QizhiPei/ScaleDiff

DiffGen-8B

This model is a fine-tuned version of Qwen/Qwen3-8B-Base.

Model description

DiffGen-8B is a specialized difficult-problem generator developed as part of the ScaleDiff pipeline, an approach for scaling the creation of challenging mathematical problems for advanced mathematical reasoning. The model is trained on a filtered set of difficult problems, enabling it to efficiently produce a large number of new, complex mathematical problems. This removes the need for complex per-instance prompting and its associated high API costs, and addresses the scarcity of high-quality, difficult training data for Large Reasoning Models (LRMs).
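
As a quick illustration, the sketch below samples a single new problem from DiffGen-8B with the Transformers library. The plain-text seed prompt and sampling settings are assumptions made for illustration; the exact generation format used in the ScaleDiff pipeline is documented in the repository linked above.

```python
# Minimal sketch: sampling one new difficult problem from DiffGen-8B.
# The seed prompt below is a hypothetical placeholder, not the official ScaleDiff format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "QizhiPei/DiffGen-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)

prompt = "Please generate a new, difficult math problem.\n"  # hypothetical seed prompt

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,      # sampling encourages diverse problems
    temperature=1.0,
    top_p=0.95,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```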

Intended uses & limitations

Intended Uses: DiffGen-8B is primarily intended for generating large-scale datasets of challenging mathematical problems. These generated problems are then used to augment training data for Large Reasoning Models (LRMs), thereby enhancing their mathematical reasoning capabilities. It serves as a crucial component in pipelines focused on improving LRM performance on difficult benchmarks by providing a continuous supply of intricate reasoning challenges.

Limitations: DiffGen-8B is specialized for mathematical problem generation; its performance is not optimized for other general text-generation tasks. The quality and relevance of generated problems are ensured by the subsequent solution-distillation and filtering steps of the broader ScaleDiff pipeline, so raw outputs should be passed through those steps rather than used directly.
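
For large-scale dataset construction, a throughput-oriented engine such as vLLM is a natural fit. The sketch below is one assumed way to set up batch generation, not the pipeline's own script; the seed prompt, batch size, and output path are hypothetical.

```python
# Hedged sketch: batch problem generation with vLLM for dataset construction.
import json

from vllm import LLM, SamplingParams

llm = LLM(model="QizhiPei/DiffGen-8B", dtype="bfloat16")
sampling = SamplingParams(temperature=1.0, top_p=0.95, max_tokens=512)

# Repeat the same hypothetical seed prompt to sample a diverse batch of problems.
prompts = ["Please generate a new, difficult math problem.\n"] * 1000
outputs = llm.generate(prompts, sampling)

problems = [o.outputs[0].text.strip() for o in outputs]
with open("generated_problems.jsonl", "w") as f:
    for p in problems:
        f.write(json.dumps({"problem": p}) + "\n")
```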

Training and evaluation data

DiffGen-8B is a fine-tuned version of Qwen/Qwen3-8B-Base. It was trained on a subset of difficult problems selected from the AM-Qwen3-Distilled dataset. The selection was performed efficiently with AdaptThink, an adaptive thinking model that estimates problem difficulty in a single forward pass, so no reference solutions are needed during selection. The problems generated by DiffGen-8B contribute to the ScaleDiff-Math dataset.
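
The selection idea can be sketched as follows. The checkpoint path, the use of the `</think>` token probability as a difficulty proxy, and the threshold are all assumptions for illustration only; the exact AdaptThink-based selection recipe is described in the ScaleDiff paper.

```python
# Hedged sketch of single-forward-pass difficulty selection.
# Everything model-specific here is a hypothetical stand-in for the actual recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

scorer_id = "path/to/AdaptThink-checkpoint"  # hypothetical placeholder
tok = AutoTokenizer.from_pretrained(scorer_id)
scorer = AutoModelForCausalLM.from_pretrained(
    scorer_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def is_difficult(problem: str, threshold: float = 0.5) -> bool:
    """Treat a problem as difficult if the model is unlikely to skip its thinking phase."""
    # Prompt formatting (e.g., chat template) is omitted for brevity.
    inputs = tok(problem, return_tensors="pt").to(scorer.device)
    with torch.no_grad():
        logits = scorer(**inputs).logits[0, -1]  # a single forward pass
    probs = torch.softmax(logits, dim=-1)
    # Assumes the tokenizer has a single "</think>" token that signals answering directly.
    skip_id = tok.convert_tokens_to_ids("</think>")
    return probs[skip_id].item() < threshold

candidate_problems = [
    "Compute 2 + 2.",
    "Find all positive integers n such that n^2 + n + 41 is prime.",
]
hard_problems = [p for p in candidate_problems if is_difficult(p)]
print(hard_problems)
```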

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged TrainingArguments sketch mirroring them follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • total_train_batch_size: 128
  • total_eval_batch_size: 64
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 1.0

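A minimal sketch of how these values would map onto Transformers TrainingArguments; the actual training script and any additional settings live in the ScaleDiff repository, and output_dir is a placeholder.

```python
# Hedged sketch mirroring the reported hyperparameters.
# Effective train batch size 128 = 16 per device x 8 GPUs, so no gradient accumulation.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="diffgen-8b",            # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=1,      # 16 x 8 GPUs = 128 total train batch
    num_train_epochs=1.0,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    bf16=True,                          # the released weights are BF16
)
```
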
Training results

Framework versions

  • Transformers 4.52.0.dev0
  • Pytorch 2.6.0+cu124
  • Datasets 2.21.0
  • Tokenizers 0.21.1