Building A Proof-Oriented Programmer That Is 64% Better Than GPT-4o Under Data Scarcity
Abstract
Existing LMs struggle with proof-oriented programming due to data scarcity, which manifests in two key ways: (1) a lack of sufficient corpora for proof-oriented programming languages such as F*, and (2) the absence of large-scale, project-level proof-oriented implementations that can teach the model the intricate reasoning process of proof-oriented programming. We present the first work on synthetic data augmentation for project-level proof-oriented programming, covering both generation and repair. Our method addresses data scarcity by (1) synthesizing basic proof-oriented programming problems to build proficiency in the language, (2) incorporating diverse coding data to elicit reasoning capability, and (3) creating new proof and repair data within existing repositories. This approach enables language models to both synthesize and repair proofs for function- and repository-level code. We show that our fine-tuned 14B-parameter model, PoPilot, outperforms GPT-4o on project-level proof-oriented programming by a 64% relative margin, and improves GPT-4o's performance by 54% when repairing GPT-4o's outputs, compared with GPT-4o's self-repair.
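To make the pipeline concrete, here is a minimal Python sketch of the verifier-in-the-loop data synthesis the abstract describes. It is an illustration under assumptions, not the paper's implementation: `model.generate`, `model.repair`, and the exact `fstar.exe` invocation are hypothetical stand-ins for the actual model interface and F* checker setup.

```python
# Hypothetical sketch of a generate-then-repair data pipeline for
# proof-oriented programming. Function and method names are illustrative,
# not the paper's actual API.
import subprocess
import tempfile

def run_fstar(program: str) -> tuple[bool, str]:
    """Type-check an F* program; return (verified?, checker output).
    Assumes the F* binary (fstar.exe) is on PATH."""
    with tempfile.NamedTemporaryFile("w", suffix=".fst", delete=False) as f:
        f.write(program)
        path = f.name
    proc = subprocess.run(["fstar.exe", path], capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

def build_training_data(model, seed_specs):
    """Turn model attempts into (a) verified generation examples and
    (b) (broken program, checker feedback, verified fix) repair examples."""
    generation_data, repair_data = [], []
    for spec in seed_specs:
        candidate = model.generate(spec)       # hypothetical: draft a proof/program
        ok, feedback = run_fstar(candidate)
        if ok:
            generation_data.append((spec, candidate))
        else:
            fixed = model.repair(candidate, feedback)  # hypothetical: second attempt
            if run_fstar(fixed)[0]:
                repair_data.append((candidate, feedback, fixed))
    return generation_data, repair_data
```

In this sketch, candidates that verify become generation examples, while failed candidates paired with checker feedback and a verified fix become repair examples, mirroring the generation/repair split described in the abstract.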
Community
This work pioneers repository-level proof data synthesis for proof-oriented programming in F*, with applications to software security.
Formal verification is vital as U.S. policy tightens AI safety and cybersecurity regulations. It ensures system correctness, preventing failures and vulnerabilities in critical applications like defense and finance.
Yet existing LLMs lack such capabilities because of the extremely low-data nature of this paradigm. We therefore investigate how to train a stronger model under such constraints.
This is an automated message from Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- HumanEval Pro and MBPP Pro: Evaluating Large Language Models on Self-invoking Code Generation (2024)
- Repository Structure-Aware Training Makes SLMs Better Issue Resolver (2024)
- Theorem Prover as a Judge for Synthetic Data Generation (2025)
- Focused-DPO: Enhancing Code Generation Through Focused Preference Optimization on Error-Prone Points (2025)
- RAG-Verus: Repository-Level Program Verification with LLMs using Retrieval Augmented Generation (2025)
- SURGE: On the Potential of Large Language Models as General-Purpose Surrogate Code Executors (2025)
- Proving the Coding Interview: A Benchmark for Formally Verified Code Generation (2025)