arxiv:2502.11901

Building A Proof-Oriented Programmer That Is 64% Better Than GPT-4o Under Data Scarcity

Published on Feb 17
· Submitted by shizhuo2 on Feb 18
Abstract

Existing LMs struggle with proof-oriented programming due to data scarcity, which manifests in two key ways: (1) a lack of sufficient corpora for proof-oriented programming languages such as F*, and (2) the absence of large-scale, project-level proof-oriented implementations that can teach the model the intricate reasoning process of proof-oriented programming. We present the first work on synthetic data augmentation for project-level proof-oriented programming, covering both generation and repair. Our method addresses data scarcity by (i) synthesizing basic proof-oriented programming problems to build proficiency in the language, (ii) incorporating diverse coding data to elicit reasoning capability, and (iii) creating new proof and repair data within existing repositories. This approach enables language models to both synthesize and repair proofs for function- and repository-level code. We show that our fine-tuned 14B-parameter model, PoPilot, outperforms GPT-4o in project-level proof-oriented programming by a 64% relative margin, and can improve GPT-4o's performance by 54% by repairing its outputs, compared with GPT-4o's self-repair.
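For readers unfamiliar with F*, here is a minimal sketch of the kind of proof-oriented program the paper targets (this illustrative snippet is not taken from the paper): the function's refinement type states its correctness specification, and F* discharges the proof obligation automatically via its SMT backend.

```fstar
(* The refinement on the result type specifies that the output
   is non-negative and equals either x or -x. *)
val abs : x:int -> y:int{y >= 0 /\ (y = x \/ y = -x)}
let abs x = if x >= 0 then x else -x
```

A model doing proof-oriented programming must produce both the implementation and a type/specification the verifier accepts; when verification fails, repair means editing the code or the proof annotations until it succeeds, which is the generation-and-repair loop the paper's synthetic data targets.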

Community

Paper submitter

This work pioneers repository-level proof-data synthesis for proof-oriented programming in F* for software security.
Formal verification is increasingly vital as U.S. policy tightens AI safety and cybersecurity regulations: it ensures system correctness, preventing failures and vulnerabilities in critical applications such as defense and finance.
Yet existing LLMs lack such capabilities because of the extremely data-scarce nature of this paradigm. We therefore investigate how to train a better model under such constraints.

