# Kimina-Prover-Preview-Distill-7B


Kimina-Prover-Preview-Distill-7B is a theorem-proving model developed by the Project Numina and Kimi teams, focused on competition-style problem solving in Lean 4. It is a distillation of Kimina-Prover-Preview, a model trained via large-scale reinforcement learning, and is fine-tuned from the Qwen2.5-7B base model. At the time of release, it achieved state-of-the-art results on MiniF2F-test among models of its size and compute budget, and ranked #1 on the PutnamBench leaderboard with a small compute budget.

## Quick Start with vLLM

You can run inference easily using vLLM:

````python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_name = "AI-MO/Kimina-Prover-Preview-Distill-7B"
model = LLM(model_name)

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# Use raw strings so LaTeX escapes such as \frac are not interpreted
# as Python escape sequences.
problem = r"The volume of a cone is given by the formula $V = \frac{1}{3}Bh$, where $B$ is the area of the base and $h$ is the height. The area of the base of a cone is 30 square units, and its height is 6.5 units. What is the number of cubic units in its volume?"

formal_statement = r"""import Mathlib
import Aesop

set_option maxHeartbeats 0

open BigOperators Real Nat Topology Rat

/-- The volume of a cone is given by the formula $V = \frac{1}{3}Bh$, where $B$ is the area of the base and $h$ is the height. The area of the base of a cone is 30 square units, and its height is 6.5 units. What is the number of cubic units in its volume? Show that it is 65.-/
theorem mathd_algebra_478 (b h v : ℝ) (h₀ : 0 < b ∧ 0 < h ∧ 0 < v) (h₁ : v = 1 / 3 * (b * h))
    (h₂ : b = 30) (h₃ : h = 13 / 2) : v = 65 := by
"""

# Build the user prompt: the natural-language problem followed by the
# Lean 4 statement to be proved.
prompt = "Think about and solve the following problem step by step in Lean 4."
prompt += f"\n# Problem: {problem}"
prompt += f"\n# Formal statement:\n```lean4\n{formal_statement}\n```\n"

messages = [
    {"role": "system", "content": "You are an expert in mathematics and Lean 4."},
    {"role": "user", "content": prompt},
]

# Render the chat template to a plain string for vLLM.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

sampling_params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8096)
output = model.generate(text, sampling_params=sampling_params)
output_text = output[0].outputs[0].text
print(output_text)
````
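
The completion contains the model's step-by-step reasoning followed by a Lean 4 proof in a ```lean4 code fence, matching the prompt format above. Below is a minimal sketch for recovering that final Lean block, assuming the model follows this fenced-output convention; the `extract_lean_block` helper and its regex are illustrative, not part of the model's API:

````python
import re

def extract_lean_block(completion: str) -> str | None:
    """Return the last ```lean4 ... ``` block in a completion, or None.

    Illustrative helper: assumes the model wraps its final proof in a
    lean4 code fence, as the prompt format above encourages.
    """
    blocks = re.findall(r"```lean4\n(.*?)```", completion, re.DOTALL)
    return blocks[-1] if blocks else None

proof = extract_lean_block(output_text)
if proof is not None:
    print(proof)
````

Since proof search typically benefits from multiple attempts, you can also pass `n` to `SamplingParams` to draw several candidate completions per statement and keep the first proof that Lean accepts.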

## Citation

If you find our work helpful, please cite our paper, available via the project repository: https://github.com/MoonshotAI/Kimina-Prover-Preview
