Guaranteed Guess: A Language Modeling Approach for CISC-to-RISC Transpilation with Testing Guarantees
Abstract
A novel ISA-centric transpilation pipeline combines LLM translation with software testing to achieve high correctness and efficiency when porting low-level code from complex (CISC) to reduced (RISC) instruction set architectures.
The hardware ecosystem is rapidly evolving, with increasing interest in translating low-level programs across different instruction set architectures (ISAs) in a quick, flexible, and correct way to enhance the portability and longevity of existing code. A particularly challenging class of this transpilation problem is translating between complex- (CISC) and reduced- (RISC) hardware architectures, due to fundamental differences in instruction complexity, memory models, and execution paradigms. In this work, we introduce GG (Guaranteed Guess), an ISA-centric transpilation pipeline that combines the translation power of pre-trained large language models (LLMs) with the rigor of established software testing constructs. Our method generates candidate translations from one ISA to another using an LLM, and embeds these translations within a software-testing framework to build quantifiable confidence in the translation. We evaluate our GG approach over two diverse datasets, enforce high code coverage (>98%) across unit tests, and achieve functional/semantic correctness of 99% on HumanEval programs and 49% on BringupBench programs. Further, we compare our approach to the state-of-the-art Rosetta 2 framework on Apple Silicon, showing that our transpiled code achieves 1.73x faster runtime performance, 1.47x better energy efficiency, and 2.41x better memory usage, demonstrating the effectiveness of GG for real-world CISC-to-RISC translation tasks. We will open-source our code, data, models, and benchmarks to establish a common foundation for ISA-level code translation research.
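The generate-then-verify loop described above can be sketched at a high level. This is an illustrative sketch only, not the paper's actual implementation: the function name `guaranteed_guess` and the use of plain Python callables as stand-ins for the compiled source binary and the LLM-produced transpilation candidates are assumptions for the sake of a runnable example.

```python
# Hedged sketch of a test-guarded transpilation loop (illustrative names,
# not the paper's API): an LLM proposes candidate translations, and the
# unit-test harness accepts the first one whose observable behavior
# matches the reference on every test input.

def guaranteed_guess(reference, candidates, test_inputs):
    """Return the first candidate agreeing with `reference` on all
    unit-test inputs, or None if every candidate fails."""
    for candidate in candidates:
        if all(candidate(x) == reference(x) for x in test_inputs):
            return candidate
    return None

# Toy example: candidate "translations" of an absolute-value routine.
ref = abs
cands = [
    lambda x: -x,                    # wrong guess: fails on positives
    lambda x: x if x > 0 else -x,    # correct guess
]
chosen = guaranteed_guess(ref, cands, [-3, 0, 7])
```

In the real pipeline the "tests" are executions of compiled assembly rather than Python calls, and the paper additionally enforces >98% code coverage so that a passing candidate carries quantifiable confidence.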
Community
Guaranteed Guess is a powerful LLM-based transpiler that translates x86 (CISC) assembly into efficient ARM and RISC-V (RISC) code with test-verified correctness. Unlike prior methods that rely on emulation or decompilation, GG directly generates native assembly with up to 99.4% correctness on HumanEval and 49.2% on BringupBench, enforced by rigorous unit tests. Trained on over 1.3M low-level programs and enhanced with architecture-aware tokenization and long-context reasoning, GG outperforms Rosetta 2 with 1.73× faster execution, 1.47× better energy efficiency, and 2.41× lower memory usage, offering a practical path to fast and portable binary translation.