---
configs:
- config_name: by_function
  data_files:
  - split: train
    path: by-function/train.json
  - split: test
    path: by-function/test.json
  - split: validate
    path: by-function/valid.json
- config_name: by_binary
  data_files:
  - split: train
    path: by-binary/train.jsonl
  - split: test
    path: by-binary/test.jsonl
  - split: validate
    path: by-binary/valid.jsonl
---
# Dataset Card for REALTYPE

This dataset is a slightly modified version of the REALTYPE dataset released as part of the paper Idioms: Neural Decompilation With Joint Code and Type Prediction. The modifications make it compatible with Hugging Face's `datasets` library, which struggles with complex JSON datasets.
## Dataset Details

### Dataset Description
The paper Idioms: Neural Decompilation With Joint Code and Type Prediction introduces a new approach to neural decompilation that jointly predicts code and user-defined types to improve the readability and usability of decompiled code. To support this research, the authors constructed REALTYPE, a comprehensive dataset containing 154,301 training functions, 540 validation functions, and 2,322 test functions. Unlike existing benchmarks, REALTYPE includes a substantial number of user-defined types (UDTs) extracted from real-world C code repositories on GitHub. The dataset was carefully constructed to capture complete definitions of all UDTs for all functions by parsing preprocessed original source code, with special attention to maintaining call-graph information to enable interprocedural analysis.

REALTYPE underwent rigorous deduplication through minhashing and by-project splitting to prevent data leakage between the training and testing sets. It represents a significant advancement over previous benchmarks like EXEBENCH and HUMANEVAL-DECOMPILE, which contained very few, if any, realistic UDTs, making REALTYPE particularly valuable for training neural decompilers that can handle the complexity of real-world code with sophisticated type definitions.
### Dataset Sources
- Repository: Idioms
- Paper: Idioms: Neural Decompilation With Joint Code and Type Prediction
## Uses

### Direct Use

The REALTYPE dataset is primarily intended for training and evaluating neural decompilers that can jointly predict code and user-defined types. As described in the Idioms paper, the dataset was specifically created to address the shortcomings of existing neural decompilation benchmarks, which lack realistic user-defined types (UDTs). REALTYPE is suitable for training models that need to handle real-world code with complex type structures, particularly for security applications such as malware analysis, vulnerability research, and fixing legacy software without source code. The dataset provides paired examples of decompiled code and original source code with complete UDT definitions, making it valuable for research on improving the readability and semantic accuracy of decompiled code.
## Dataset Structure
The REALTYPE dataset contains 154,301 training functions, 540 validation functions, and 2,322 test functions, all extracted from C code repositories on GitHub. Each example in the dataset consists of:
- Decompiled code: The output of running the Hex-Rays decompiler on compiled binaries.
- Original source code: The canonicalized form of the original function from source.
- User-defined type definitions: Complete definitions of all UDTs used in the function.
- Call graph information: Data about which functions call or are called by each function.
The original REALTYPE dataset is organized only by binary; that view is available through the `by_binary` configuration. This version adds a `by_function` configuration, which organizes the data by function. Most data appears in both views, but the `by_function` view does not contain the call graph or unmatched functions (decompiled functions for which the original source code could not be found).
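As a sketch of how a function-level split can be consumed, the snippet below writes a tiny synthetic JSON-lines file and reads it back with the standard library. The field names (`name`, `decompiled`, `original`, `udt_defs`) are illustrative assumptions, not the dataset's actual schema; inspect the real split files for the true keys.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical records mimicking the by_function view: one function per line,
# pairing decompiled output with canonicalized source and its UDT definitions.
# Field names are made up for illustration only.
records = [
    {
        "name": "list_push",
        "decompiled": "__int64 __fastcall sub_401000(__int64 a1, int a2) { ... }",
        "original": "void list_push(struct list *l, int v) { ... }",
        "udt_defs": "struct list { struct node *head; int len; };",
    },
]

with tempfile.TemporaryDirectory() as d:
    path = Path(d) / "train.jsonl"
    # Write one JSON object per line, as in a .jsonl split file.
    path.write_text("\n".join(json.dumps(r) for r in records))
    # Read the split back, one function per line.
    loaded = [json.loads(line) for line in path.read_text().splitlines()]
```

The same pattern applies to the other splits; only the file paths differ.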
## Dataset Creation

### Curation Rationale
The REALTYPE dataset was created to address key limitations in existing neural decompilation benchmarks: (1) a lack of variables with user-defined types and their definitions, and (2) insufficient information to build call graphs. The authors identified these limitations as critical gaps preventing neural decompilers from handling real-world code effectively. As the paper explains, "user-defined types (UDTs), such as structs, are widespread in real code" but "existing neural decompilers are not designed to predict the definitions of UDTs." The dataset was specifically designed to enable joint prediction of code and type definitions, addressing what the authors call the "scattered evidence problem" where only a subset of a UDT's fields are accessed within any given function.
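To make the scattered-evidence problem concrete, the toy sketch below scans a single (invented) function body for references to a UDT's fields. Real analyses traverse ASTs rather than tokens, and both the struct and the function here are made up for illustration:

```python
import re

# A made-up UDT and function body; not taken from REALTYPE.
udt_fields = ["head", "tail", "len", "capacity"]
function_body = """
void list_push(struct list *l, int v) {
    l->tail->next = make_node(v);
    l->len++;
}
"""

# Naive token scan: which of the struct's fields does this one function touch?
tokens = set(re.findall(r"\w+", function_body))
accessed = [f for f in udt_fields if f in tokens]
# Only a subset of the fields appears, so no single function provides
# enough evidence to reconstruct the full UDT definition.
```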
### Source Data

#### Data Collection and Processing
The REALTYPE dataset was built by cloning and compiling majority-C-language repositories from GitHub using the GitHub Cloner and Compiler (GHCC) tool. The authors followed these key steps:
- Executed standard build configuration scripts and extracted resulting ELF-format binary files.
- Used Hex-Rays decompiler to decompile each binary.
- Processed original source code by running gcc's preprocessor (`gcc -E -P`) on the repository archives.
- Parsed the preprocessed code, tracking typedef aliases and recording the definitions of UDTs.
- Extracted and canonicalized each function by traversing the function's AST and recording all type descriptors.
- Matched preprocessed functions with decompiled functions and organized them by the binary in which they occur.
- Computed and stored the call graph between functions in each binary.
For deduplication, the authors used both minhashing (to cluster similar text files) and by-project splitting (ensuring all data from a given repository ends up entirely in either the train, validation, or test set).
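By-project splitting is straightforward bookkeeping; minhashing can be sketched as below. This is a toy MinHash over word shingles using salted SHA-1 hashes, an assumption-laden illustration rather than the authors' actual implementation:

```python
import hashlib

def shingles(text: str, k: int = 3) -> set:
    """Overlapping k-word shingles; assumes the text has at least k words."""
    words = text.split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def minhash_signature(text: str, num_hashes: int = 64) -> list:
    """Toy MinHash: for each salted hash, keep the minimum over all shingles."""
    sig = []
    for salt in range(num_hashes):
        sig.append(min(
            int.from_bytes(hashlib.sha1(f"{salt}:{s}".encode()).digest()[:8], "big")
            for s in shingles(text)
        ))
    return sig

def estimated_similarity(a: str, b: str) -> float:
    """The fraction of matching signature slots approximates Jaccard similarity."""
    sa, sb = minhash_signature(a), minhash_signature(b)
    return sum(x == y for x, y in zip(sa, sb)) / len(sa)
```

Files whose estimated similarity exceeds a threshold would be clustered and deduplicated together, so near-identical code cannot straddle the train/test boundary.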
## Bias, Risks, and Limitations
The REALTYPE dataset has several technical limitations acknowledged in the paper:
- The dataset is biased toward open-source projects on GitHub that could be successfully built. As the authors note, "It is possible that some decompilation targets, especially malware, may have systematic differences from our data and thus affect performance."
- The dataset only includes unoptimized (-O0) code. The authors mention that "optimizations cause a small-to-moderate decrease in efficacy" in neural decompilation but investigating this would have required excessive computational resources.
- The dataset does not include obfuscated code, which is common in malware. The authors view deobfuscation as "an orthogonal problem" that could be addressed by separate techniques before neural decompilation.
- There is a potential risk of data leakage through pretraining, though the authors believe this risk is small because "relatively little decompiled code is found on GitHub or on the internet in general."
- The dataset is focused on C code and might not generalize well to other programming languages.
## Dataset Card Contact
- Luke Dramko: Carnegie Mellon University ([email protected])
- Claire Le Goues: Carnegie Mellon University ([email protected])
- Edward J. Schwartz: Carnegie Mellon University Software Engineering Institute ([email protected])