---
license: cc-by-nc-4.0
task_categories:
- tabular-regression
- tabular-classification
language:
- en
tags:
- FPGA
- hls4ml
- HLS
- VHDL
- GNN
- Transformer
- Benchmark
- Latency
- Resource
- Estimation
- Regression
pretty_name: wa-hls4ml Benchmark Dataset
size_categories:
- 100K<n<1M
configs:
- config_name: default
  data_files:
  - split: train
    path: "train/*.json"
  - split: test
    path: "test/*.json"
  - split: validation
    path: "val/*.json"
  - split: exemplar
    path: "exemplar/*.json"
- config_name: model-type
  data_files:
  - split: rule4ml
    path: 
     - "train/*2layer*.json"
     - "test/*2layer*.json"
     - "val/*2layer*.json"
  - split: 2layer
    path:
     - "train/*2layer*.json"
     - "test/*2layer*.json"
     - "val/*2layer*.json"
  - split: 3layer
    path:
     - "train/*3layer*.json"
     - "test/*3layer*.json"
     - "val/*3layer*.json"
  - split: conv1d
    path: 
     - "train/*conv1d*.json"
     - "test/*conv1d*.json"
     - "val/*conv1d*.json"
  - split: conv2d
    path: 
     - "train/*conv2d*.json"
     - "test/*conv2d*.json"
     - "val/*conv2d*.json"
  - split: latency
    path: 
     - "train/*latency*.json"
     - "test/*latency*.json"
     - "val/*latency*.json"
  - split: resource
    path:
     - "train/*resource*.json"
     - "test/*resource*.json"
     - "val/*resource*.json"
  - split: exemplar
    path: "exemplar/*.json"
---

# Dataset Card for wa-hls4ml Benchmark Dataset

The wa-hls4ml resource and latency estimation benchmark dataset

## Dataset Details
We introduce **wa-hls4ml**[^1]: a **dataset** unprecedented in scale and features, and a **benchmark** for the standardized evaluation of resource usage and latency estimators.

The open **dataset** is unprecedented in size, with over 680,000 fully synthesized dataflow models.
The goal is to continue to grow and extend the dataset over time.
We include all steps of the synthesis chain from ML model to HLS representation to register-transfer level (RTL) and save the full logs.
This will enable a much broader set of applications beyond those in this paper.  
The **benchmark** standardizes evaluation of the performance of resource usage and latency estimators across a suite of metrics, such as the coefficient of determination (R^2), symmetric mean absolute percentage error (SMAPE), and root mean square error (RMSE), and provides sample models, both synthetic and from scientific applications, to support and encourage the continued development of better surrogate models.

[^1]: Named after Wario and Waluigi, who are doppelgängers of Mario and Luigi, respectively, in the Nintendo Super Mario platform game series.
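
As a rough illustration of the benchmark metrics, the minimal sketch below (using NumPy and scikit-learn; `y_true` and `y_pred` are hypothetical measured and predicted values for a single quantity) computes R^2, SMAPE, and RMSE. SMAPE definitions vary slightly across the literature; this is one common form.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

# Hypothetical ground truth and predictions for a single target,
# e.g., post-logic-synthesis LUT usage for a handful of models.
y_true = np.array([1200.0, 4500.0, 800.0, 15000.0])
y_pred = np.array([1100.0, 4800.0, 950.0, 14000.0])

r2 = r2_score(y_true, y_pred)                       # coefficient of determination
rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # root mean square error

# Symmetric mean absolute percentage error, expressed in percent.
smape = 100.0 * np.mean(2.0 * np.abs(y_pred - y_true) / (np.abs(y_true) + np.abs(y_pred)))

print(f"R^2 = {r2:.3f}, SMAPE = {smape:.2f}%, RMSE = {rmse:.1f}")
```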

### Dataset Description

The dataset has two primary components, each designed to test different aspects of a surrogate model's performance.  
The first part is based on *synthetic* neural networks generated with various layer types, micro-architectures, and precisions.  
This synthetic dataset lets us systematically explore the FPGA resources and latencies as we vary different model parameters.  
The second part of the benchmark targets models from *exemplar realistic* scientific applications, requiring real-time processing at the edge, near the data sources.  
Models with real-time constraints constitute a primary use case for ML-to-FPGA pipelines like hls4ml.  
This part tests the ability of the surrogate model to extrapolate its predictions to new configurations and architectures beyond the training set, assessing the model's robustness and performance for real applications.  

#### Exemplar Model Descriptions

- **Jet**: A fully connected neural network that classifies simulated particle jets originating from one of five particle classes in high-energy physics experiments.
- **Top Quarks**: A binary classifier for top quark jets, helping probe fundamental particles and their interactions.
- **Anomaly**: An autoencoder trained on audio data to reproduce the input spectrogram, whose loss value differentiates between normal and abnormal signals.
- **BiPC**: An encoder that transforms high-resolution images, producing sparse codes for further compression.
- **CookieBox**: Dedicated to real-time data acquisition for the CookieBox system, designed for advanced experimental setups requiring rapid handling of large data volumes generated by high-speed detectors.
- **AutoMLP**: A fully connected network from the AutoMLP framework, focusing on accelerating MLPs on FPGAs, providing significant improvements in computational performance and energy efficiency.
- **Particle Tracking**: Tracks charged particles in real-time as they traverse silicon detectors in large-scale particle physics experiments.

#### Exemplar Model Architectures

| **Model**               | **Size** | **Input** | **Architecture**                                                                 |
|--------------------------|----------|-----------|----------------------------------------------------------------------------------|
| **Jet**                 | 2,821    | 16        | β†’[ReLU]32 β†’[ReLU]32 β†’[ReLU]32 β†’[Softmax]5                                       |
| **Top Quarks**          | 385      | 10        | β†’[ReLU]32 β†’[Sigmoid]1                                                           |
| **Anomaly**             | 2,864    | 128       | β†’[ReLU]8 β†’[ReLU]4 β†’[ReLU]128 β†’[ReLU]4 β†’[Softmax]128                              |
| **BiPC**                | 7,776    | 36        | β†’[ReLU]36 β†’[ReLU]36 β†’[ReLU]36 β†’[ReLU]36 β†’[ReLU]36                                |
| **CookieBox**           | 3,433    | 512       | β†’[ReLU]4 β†’[ReLU]32 β†’[ReLU]32 β†’[Softmax]5                                        |
| **AutoMLP**             | 534      | 7         | β†’[ReLU]12 β†’[ReLU]16 β†’[ReLU]12 β†’[Softmax]2                                       |
| **Particle Tracking**   | 2,691    | 14        | β†’[ReLU]32 β†’[ReLU]32 β†’[ReLU]32 β†’[Softmax]3                                       |
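
For concreteness, a minimal Keras sketch of the **Jet** architecture from the table above could look as follows; the layer sizes and activations are taken directly from the table, while the actual exemplar model may differ in naming and quantization details.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Sketch of the Jet classifier: 16 inputs, three 32-unit ReLU layers,
# and a 5-class softmax output (2,821 parameters, matching the table).
jet_model = keras.Sequential([
    keras.Input(shape=(16,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(5, activation="softmax"),
])
jet_model.summary()
```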


- **Curated by:** Fast Machine Learning Lab
- **Funded by:** See "Acknowledgements" in the paper for full funding details
- **Language(s) (NLP):** English
- **License:** cc-by-nc-4.0

### Dataset Sources

The dataset consists of data generated by the authors using the following methods:

#### Generation of Synthetic Data 

The train, validation, and test sets were created by first generating models of varying architectures, with varying hyperparameters, in the Keras and QKeras Python libraries.  
The updated rule4ml dataset follows the same generation method and hyperparameter ranges described in prior work, while adding initiation interval (II) information and logic synthesis results to the reports.  

For the remaining subsets of the data, the two-layer and three-layer fully-connected models were generated using a grid search method according to the parameter ranges mentioned below, whereas larger fully-connected models and convolutional models (one- and two-dimensional) were randomly generated, where convolutional models also contain dense, flatten, and pooling layers.  
The weight and bias precision was implemented in HLS as the datatype `ap_fixed<X,1>`, where `X` is the specified precision, i.e., the total number of bits allocated to the weight and bias values, with one bit reserved for the integer portion of the value.  
These models were then converted to HLS using hls4ml and synthesized with AMD Vitis versions 2023.2 and 2024.2, targeting the AMD Xilinx Alveo U250 FPGA board.  
The model sets have the following parameter ranges:  

- **Number of layers**: 2–7 for fully-connected models; 3–7 for convolutional models.  
- **Activation functions**: Linear for most 2–3 layer fully-connected models; ReLU, tanh, and sigmoid for all other fully-connected models and convolutional models.  
- **Number of features/neurons**: 8–128 (step size: 8 for 2–3 layer) for fully-connected models; 32–128 for convolutional models, with 8–64 filters.  
- **Weight and bias bit precision**: 2–16 bits (step size: 2) for 2–3 layer fully-connected models, 4–16 bits (step size: powers of 2) for 3–7 layer fully-connected and convolutional models.  
- **hls4ml target reuse factor**: 1–4093 for fully-connected models; 8192–32795 for convolutional models.  
- **hls4ml implementation strategy**: Resource strategy, which controls the degree of parallelism by explicitly specifying the number of MAC operations performed in parallel per clock cycle, is used for most fully-connected models and all convolutional models, while Latency strategy, where the computation is unrolled, is used for some 3–7 layer fully-connected models.  
- **hls4ml I/O type**: The io_parallel setting, which directly wires the output of one layer to the input of the next layer, is used for all fully-connected models, and the io_stream setting, which uses FIFO buffers between layers, is used for all convolutional models.  
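
As a rough sketch of how a single synthetic sample might be converted with the parameters listed above (the actual generation scripts live in the linked repository; API details assume a recent hls4ml release, and the backend and part name are assumptions), the conversion step could look like:

```python
import hls4ml
from tensorflow import keras
from tensorflow.keras import layers

# A small fully-connected model, similar to those in the synthetic subsets.
model = keras.Sequential([
    keras.Input(shape=(16,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
])

# Model-level hls4ml configuration: ap_fixed<8,1> weights/biases, Resource
# strategy, and a target reuse factor, matching the ranges described above.
config = hls4ml.utils.config_from_keras_model(model, granularity="model")
config["Model"]["Precision"] = "ap_fixed<8,1>"
config["Model"]["Strategy"] = "Resource"
config["Model"]["ReuseFactor"] = 128

hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    io_type="io_parallel",
    backend="Vitis",              # assumed; the dataset was built with AMD Vitis
    part="xcu250-figd2104-2L-e",  # Alveo U250 part name (assumed)
    output_dir="wa_hls4ml_example_prj",
)
hls_model.compile()
# hls_model.build(synth=True, vsynth=True)  # would run HLS and logic synthesis
```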

#### Exemplar Model Synthesis Parameters
The exemplar models were synthesized with the following parameters:

| **Hyperparameter**       | **Values**                                                                 |
|---------------------------|---------------------------------------------------------------------------|
| **Precision**             | `ap_fixed<2,1>`, `ap_fixed<8,3>`, `ap_fixed<16,6>`                       |
| **Strategy**              | `Latency`, `Resource`                                                    |
| **Target reuse factor**   | 1, 128, 1024                                                             |
| **Target board**          | Alveo U200, Alveo U250                                                   |
| **Target clock**          | 5 ns, 10 ns                                                              |
| **Vivado version**        | 2019.1, 2020.1                                                           |

The synthesis was repeated multiple times, varying the hls4ml *reuse factor*, a tunable setting that proportionally limits the number of multiplication operations used.  
The hls4ml conversion, HLS synthesis, and logic synthesis of the train and test sets were all performed in parallel on the National Research Platform Kubernetes Hypercluster and the Texas A&M ACES HPRC Cluster.  
On the National Research Platform, synthesis was run inside containers based on slightly modified versions of the xilinx-docker v2023.2 "user" images, with a guest OS of Ubuntu 20.04.4 LTS, 3 virtual CPU cores and 16 GB of RAM per pod, and all AMD tools mounted through a Ceph-based persistent volume.  
Jobs run on the Texas A&M ACES HPRC Cluster were run using Vitis 2024.2, each with 2 virtual CPU cores and 32 GB of RAM.  

The resulting projects, reports, logs, and a JSON file containing the resource/latency usage and estimates of the C and logic synthesis were collected for each sample in the dataset.  
The data, excluding the projects and logs, were then further processed into a collection of JSON files, distributed alongside this paper and described below.

- **Repository:** [fastmachinelearning/wa-hls4ml-paper](https://github.com/fastmachinelearning/wa-hls4ml-paper)
- **Paper:** [In Review]

---

## Uses

This dataset is intended to be used to train surrogate models that estimate the resource utilization and latency of neural networks implemented on hardware (FPGAs).

### Direct Use

This dataset is generated using the tool hls4ml, and should be used to train surrogate models and/or other models for use with the hls4ml workflow. 

### Out-of-Scope Use

As this dataset is generated using the hls4ml tool, it should not be used to train surrogate models for other tools, as results and implementation details may vary across those tools compared to hls4ml. 

---

## Dataset Structure

The training, validation, and test sets of the benchmark currently comprise 683,176 synthetic samples: 608,679 fully-connected neural networks, 31,278 one-dimensional convolutional neural networks, and 43,219 two-dimensional convolutional neural networks.  
Each sample contains the model architecture, hls4ml conversion parameters, the latency and resource usage numbers for that network post-logic synthesis, and associated metadata.  
In addition to the training, validation, and test sets, the dataset also includes 887 samples representing the successful logic synthesis of the exemplar models with varying conversion parameters.  
The dataset as a whole is split, distributed, and intended to be used as follows:  

- **Training set**: The set of 478,220 samples intended to be used for training a given estimator.  
- **Validation set**: The set of 102,472 samples intended to be used during training for validation purposes.  
- **Test set**: The set of 102,484 samples intended to be used for testing and generating results for a given estimator.  
- **Exemplar test set**: The set of 887 samples, comprising the models described in the benchmark, intended to be used for testing and generating results for a given estimator.  
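
A minimal sketch of loading these splits with the Hugging Face `datasets` library is shown below; the repository id is a placeholder and should be replaced with this dataset's actual hub id.

```python
from datasets import load_dataset

# Placeholder repository id; substitute the actual hub id of this dataset.
ds = load_dataset("fastmachinelearning/wa-hls4ml", name="default")

print(ds)                # DatasetDict with train/test/validation/exemplar splits
sample = ds["train"][0]  # one synthesized-model record
print(sample["target_part"], sample["vivado_version"])
```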

Within each subset, excluding the exemplar test set, the data is further grouped as follows.  
These categories explain the composition of our dataset but have no bearing on how a given estimator should be trained.  

- **2_20 (rule4ml)**: The updated rule4ml dataset, containing fully-connected neural networks that were randomly generated with layer counts between 2 and 20 layers, using hls4ml resource and latency strategies.  
- **2_layer**: A subset containing 2-layer deep fully-connected neural networks generated via a grid search using hls4ml resource and io_parallel strategies.  
- **3_layer**: A subset containing 3-layer deep fully-connected neural networks generated via a grid search using hls4ml resource and io_parallel strategies.  
- **conv1d**: A subset containing 3–7 layer deep 1-dimensional convolutional neural networks that were randomly generated and use hls4ml resource and io_stream strategies.  
- **conv2d**: A subset containing 3–7 layer deep 2-dimensional convolutional neural networks that were randomly generated and use hls4ml resource and io_stream strategies.  
- **latency**: A subset containing 3–7 layer deep fully-connected neural networks that were randomly generated and use hls4ml latency and io_parallel strategies.  
- **resource**: A subset containing 3–7 layer deep fully-connected neural networks that were randomly generated and use hls4ml resource and io_parallel strategies.  
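
These architecture subsets are also exposed as splits of the `model-type` configuration declared in this card's metadata, so a single subset can be loaded directly (again using a placeholder repository id):

```python
from datasets import load_dataset

# Load only the 2-D convolutional subset via the model-type configuration.
conv2d = load_dataset(
    "fastmachinelearning/wa-hls4ml",  # placeholder repository id
    name="model-type",
    split="conv2d",
)
print(len(conv2d), "conv2d samples")
```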

### Structure of JSON Files

The distributed JSON files contain 683,176 total samples. The samples are split into three subsets, as described in the dataset section. The format is the same across the three subsets; each sample is a JSON object containing 9 fields:

- **meta_data**: A unique identifier, model name, and name of the corresponding gzipped tarball of the fully synthesized project, logs, and reports for the sample (contained in an accompanying dataset released alongside the primary dataset).
- **model_config**: A JSON representation of the Keras/QKeras model synthesized in the sample, including the actual reuse factor as synthesized per layer.
- **hls_config**: The hls4ml configuration dictionary used to convert the model for the sample, including the target reuse factor as synthesized per layer.
- **resource_report**: A report of the post-logic synthesis resources used for the sample, reported as the actual number of components used.
- **hls_resource_report**: A report of the post-HLS-synthesis resource estimates for the sample, reported as the number of components estimated.
- **latency_report**: A report of the post-HLS-synthesis latency estimates for the sample.
- **target_part**: The FPGA part targeted for HLS and logic synthesis for the sample.
- **vivado_version**: The version of Vivado used to synthesize the sample.
- **hls4ml_version**: The version of hls4ml used to convert the sample.
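
A minimal sketch of reading one of the distributed JSON files directly and inspecting these fields is shown below; the file name is hypothetical, and the sketch assumes each file holds a list of sample objects.

```python
import json

# Hypothetical file name; any of the distributed JSON files follows this layout.
with open("train/example_subset.json") as f:
    samples = json.load(f)

sample = samples[0]
print(sample["meta_data"])         # unique identifier, model name, tarball name
print(sample["model_config"])      # Keras/QKeras model as synthesized
print(sample["hls_config"])        # hls4ml conversion parameters
print(sample["resource_report"])   # post-logic-synthesis resource usage
print(sample["latency_report"])    # post-HLS-synthesis latency estimates
print(sample["target_part"], sample["vivado_version"], sample["hls4ml_version"])
```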

---

## Curation Rationale

With the introduction of ML into FPGA toolchains, e.g., for resource and latency prediction or code generation, there is a significant need for large datasets to support and train these tools.  
We found that existing datasets were insufficient for these needs, and therefore sought to build a dataset and a highly scalable data generation framework that is useful for a wide variety of research surrounding ML on FPGAs.  
This dataset serves as one of the few openly accessible, large-scale collections of synthesized neural networks available for ML research.  

### Exemplar Realistic Models

The exemplar models utilized in this study include several key architectures, each tailored for specific ML tasks and targeting scientific applications with low-latency constraints.

### Source Data

The data was generated from randomly generated neural networks and specifically selected exemplar models, converted into HLS code via hls4ml. Latency values were collected after performing C synthesis on the resulting HLS code with Vivado/Vitis HLS, and resource values were collected after performing logic synthesis on the resulting HDL code with Vivado/Vitis.

### Who are the source data producers?
[Benjamin Hawks](https://orcid.org/0000-0001-5700-0288), Fermi National Accelerator Laboratory, USA

[Hamza Ezzaoui Rahali](https://orcid.org/0000-0002-0352-725X), University of Sherbrooke, Canada

[Mohammad Mehdi Rahimifar](https://orcid.org/0000-0002-6582-8322), University of Sherbrooke, Canada

### Personal and Sensitive Information

This data contains no personally identifiable or sensitive information except for the names/usernames of the authors in some file paths. 

## Bias, Risks, and Limitations

In its initial form, the majority of this dataset consists of very small (2–3 layer) dense neural networks without activations. This should be considered when training a model on it, and appropriate measures should be taken to weight the data at training time. 
We intend to continuously update this dataset, addressing this imbalance over time as more data is generated. 

### Recommendations

Appropriate measures should be taken to weight the data to account for the dataset imbalance at training time. 

## Citation

Paper currently in review.

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Dataset Card Authors
[Benjamin Hawks](https://orcid.org/0000-0001-5700-0288), Fermi National Accelerator Laboratory, USA

[Hamza Ezzaoui Rahali](https://orcid.org/0000-0002-0352-725X), University of Sherbrooke, Canada

[Mohammad Mehdi Rahimifar](https://orcid.org/0000-0002-6582-8322), University of Sherbrooke, Canada

## Dataset Card Contact

[[email protected]](mailto:[email protected])