ejschwartz committed 2f590bd (parent: 775702a): More editing

Files changed (1): README.md (+12 −12)
The REALTYPE dataset was built by cloning and compiling majority-C-language repositories from GitHub using the GitHub Cloner and Compiler (GHCC) tool. The authors followed these key steps:

1. Executed standard build configuration scripts and extracted the resulting ELF-format binary files.
2. Used the Hex-Rays decompiler to decompile each binary.
3. Processed the original source code by running gcc's preprocessor (`gcc -E -P`) on the repository archives.
4. Parsed the preprocessed code, tracking typedef aliases and recording the definitions of user-defined types (UDTs).
5. Extracted and canonicalized each function by traversing the function's AST and recording all type descriptors.
6. Matched preprocessed functions with decompiled functions and organized them by the binary in which they occur.
7. Computed and stored the call graph between the functions in each binary.
 
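Step 4 hinges on resolving typedef aliases so that every function's types can be canonicalized consistently. Below is a minimal sketch of that idea in Python; the alias map, function name, and example chains are illustrative assumptions, not the paper's implementation, which parses real preprocessed C.

```python
# Hypothetical sketch of typedef tracking (step 4): follow a chain of
# typedef aliases until a canonical (non-alias) type name is reached.
# The alias map here is hand-written for illustration only.

def resolve_typedef(name: str, typedefs: dict[str, str]) -> str:
    """Resolve `name` through typedef aliases to its canonical type."""
    seen = set()
    while name in typedefs:
        if name in seen:  # guard against cyclic typedef chains
            raise ValueError(f"typedef cycle involving {name!r}")
        seen.add(name)
        name = typedefs[name]
    return name

# As if the preprocessed source contained:
#   typedef unsigned long size_t;  typedef size_t my_len_t;
typedefs = {"my_len_t": "size_t", "size_t": "unsigned long"}
print(resolve_typedef("my_len_t", typedefs))  # unsigned long
print(resolve_typedef("int", typedefs))       # int (already canonical)
```

A real implementation would build this map while walking the parser's AST, and would also record full UDT definitions (structs, unions, enums) rather than plain name strings.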
For deduplication, the authors used both minhashing (to cluster similar text files) and by-project splitting (ensuring that all data from a given repository ends up entirely in exactly one of the train, validation, or test sets).
 
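The two deduplication mechanisms can be sketched as follows. This is a toy illustration, not the paper's code: the number of hash permutations, the token granularity, and the split ratios are all assumptions made for the example.

```python
# Toy sketch of the two deduplication mechanisms described above:
# MinHash signatures estimate similarity between files, and hashing the
# repository name sends every file from a repo to exactly one split.
import hashlib

def minhash_signature(text: str, num_perm: int = 128) -> list[int]:
    """MinHash over whitespace tokens: per seed, keep the minimum token hash."""
    tokens = set(text.split())
    return [
        min(int.from_bytes(hashlib.sha1(f"{seed}:{t}".encode()).digest()[:8], "big")
            for t in tokens)
        for seed in range(num_perm)
    ]

def jaccard_estimate(a: list[int], b: list[int]) -> float:
    """Fraction of matching signature slots approximates Jaccard similarity."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def split_for_repo(repo: str) -> str:
    """By-project splitting: the whole repository lands in a single split."""
    h = int(hashlib.sha1(repo.encode()).hexdigest(), 16) % 10
    return "test" if h == 0 else "validation" if h == 1 else "train"

s1 = minhash_signature("int main ( void ) { return 0 ; }")
s2 = minhash_signature("int main ( void ) { return 1 ; }")
print(jaccard_estimate(s1, s2))  # high estimate for near-duplicate files
print(split_for_repo("example/repo"))
```

In practice, signatures like these feed a clustering step (e.g. locality-sensitive hashing) so that near-duplicate files are grouped before the train/validation/test assignment.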
The REALTYPE dataset has several technical limitations acknowledged in the paper:

* The dataset is biased toward open-source projects on GitHub that could be successfully built. As the authors note, "It is possible that some decompilation targets, especially malware, may have systematic differences from our data and thus affect performance."
* The dataset only includes unoptimized (`-O0`) code. The authors mention that "optimizations cause a small-to-moderate decrease in efficacy" in neural decompilation, but investigating this would have required excessive computational resources.
* The dataset does not include obfuscated code, which is common in malware. The authors view deobfuscation as "an orthogonal problem" that could be addressed by separate techniques before neural decompilation.
* There is a potential risk of data leakage through pretraining, though the authors believe this risk is small because "relatively little decompiled code is found on GitHub or on the internet in general."
* The dataset is focused on C code and might not generalize well to other programming languages.

  ## Dataset Card Contact