
Dataset Card for wa-hls4ml Benchmark Dataset

The wa-hls4ml projects dataset comprises the Vivado/Vitis projects of neural networks converted into HLS code via hls4ml. Projects are complete and include all logs, HLS code, VHDL code, intermediate representations, and source Keras models.

This is a companion dataset to the wa-hls4ml dataset. There is a reference CSV for each model type that lists each model's name, the artifact file for that model, and the batch archive file that contains that model's project archive.

PLEASE NOTE: The dataset is currently incomplete and is in the process of being cleaned and uploaded.

Dataset Details

We introduce wa-hls4ml[^1]: a dataset unprecedented in scale and features, and a benchmark for the common evaluation of resource usage and latency estimators.

The open dataset is unprecedented in terms of its size, with over 680,000 fully synthesized dataflow models. The goal is to continue to grow and extend the dataset over time. We include all steps of the synthesis chain from ML model to HLS representation to register-transfer level (RTL) and save the full logs. This will enable a much broader set of applications beyond those in this paper.
The benchmark standardizes evaluation of resource usage and latency estimators across a suite of metrics, such as the coefficient of determination (R^2), symmetric mean absolute percentage error (SMAPE), and root mean square error (RMSE). It also provides sample models, both synthetic and from scientific applications, to support and encourage the continued development of better surrogate models.
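
As a concrete reference, a minimal sketch of these three metrics in NumPy follows. The SMAPE variant shown normalizes by the mean of |y_true| and |y_pred|; the benchmark's exact definition may differ.

    import numpy as np

    def r2(y_true, y_pred):
        # Coefficient of determination: 1 - SS_res / SS_tot
        ss_res = np.sum((y_true - y_pred) ** 2)
        ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
        return 1.0 - ss_res / ss_tot

    def smape(y_true, y_pred):
        # Symmetric mean absolute percentage error, in percent (one common variant)
        denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
        return 100.0 * np.mean(np.abs(y_pred - y_true) / denom)

    def rmse(y_true, y_pred):
        # Root mean square error
        return np.sqrt(np.mean((y_true - y_pred) ** 2))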

[^1]: Named after Wario and Waluigi, who are doppelgängers of Mario and Luigi, respectively, in the Nintendo Super Mario platform game series.

Dataset Description

The dataset has two primary components, each designed to test different aspects of a surrogate model's performance.
The first part is based on synthetic neural networks generated with various layer types, micro-architectures, and precisions.
This synthetic dataset lets us systematically explore the FPGA resources and latencies as we vary different model parameters.
The second part of the benchmark targets models from exemplar realistic scientific applications, requiring real-time processing at the edge, near the data sources.
Models with real-time constraints constitute a primary use case for ML-to-FPGA pipelines like hls4ml.
This part tests the ability of the surrogate model to extrapolate its predictions to new configurations and architectures beyond the training set, assessing the model's robustness and performance for real applications.

Exemplar Model Descriptions

  • Jet: A fully connected neural network that classifies simulated particle jets originating from one of five particle classes in high-energy physics experiments.
  • Top Quarks: A binary classifier for top quark jets, helping probe fundamental particles and their interactions.
  • Anomaly: An autoencoder trained on audio data to reproduce the input spectrogram, whose loss value differentiates between normal and abnormal signals.
  • BiPC: An encoder that transforms high-resolution images, producing sparse codes for further compression.
  • CookieBox: Dedicated to real-time data acquisition for the CookieBox system, designed for advanced experimental setups requiring rapid handling of large data volumes generated by high-speed detectors.
  • AutoMLP: A fully connected network from the AutoMLP framework, focusing on accelerating MLPs on FPGAs, providing significant improvements in computational performance and energy efficiency.
  • Particle Tracking: Tracks charged particles in real-time as they traverse silicon detectors in large-scale particle physics experiments.

Exemplar Model Architectures

Model               Size   Input  Architecture
Jet                 2,821  16     →[ReLU]32 →[ReLU]32 →[ReLU]32 →[Softmax]5
Top Quarks          385    10     →[ReLU]32 →[Sigmoid]1
Anomaly             2,864  128    →[ReLU]8 →[ReLU]4 →[ReLU]128 →[ReLU]4 →[Softmax]128
BiPC                7,776  36     →[ReLU]36 →[ReLU]36 →[ReLU]36 →[ReLU]36 →[ReLU]36
CookieBox           3,433  512    →[ReLU]4 →[ReLU]32 →[ReLU]32 →[Softmax]5
AutoMLP             534    7      →[ReLU]12 →[ReLU]16 →[ReLU]12 →[Softmax]2
Particle Tracking   2,691  14     →[ReLU]32 →[ReLU]32 →[ReLU]32 →[Softmax]3

  • Curated by: Fast Machine Learning Lab
  • Funded by: See "Acknowledgements" in the paper for full funding details
  • Language(s) (NLP): English
  • License: cc-by-nc-4.0

Dataset Sources

The dataset consists of data generated by the authors using the following methods:

Generation of Synthetic Data

The train, validation, and test sets were created by first generating models of varying architectures in the Keras and QKeras Python libraries, varying their hyperparameters.
The updated rule4ml dataset follows the same generation method and hyperparameter ranges described in prior work, while adding initiation interval (II) information and logic synthesis results to the reports.
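
As an illustration of this approach, here is a minimal sketch of generating one random fully-connected Keras model. The ranges follow the parameter list below; the helper itself is an illustrative assumption, not the authors' actual generator.

    import random
    from tensorflow import keras
    from tensorflow.keras import layers

    def random_dense_model(n_layers=(2, 7), width=(8, 128), n_inputs=16):
        # Illustrative generator: random depth, widths, and activations
        depth = random.randint(*n_layers)
        inputs = keras.Input(shape=(n_inputs,))
        x = inputs
        for _ in range(depth):
            units = random.randrange(width[0], width[1] + 1, 8)
            x = layers.Dense(units, activation=random.choice(['relu', 'tanh', 'sigmoid']))(x)
        return keras.Model(inputs, x)

    model = random_dense_model()
    model.summary()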

For the remaining subsets of the data, the two-layer and three-layer fully-connected models were generated using a grid search over the parameter ranges listed below, whereas larger fully-connected models and convolutional models (one- and two-dimensional) were randomly generated; the convolutional models also contain dense, flatten, and pooling layers.
The weight and bias precision was implemented in HLS as the datatype ap_fixed<X,1>, where X is the specified precision, i.e., the total number of bits allocated to the weight and bias values, with one bit reserved for the integer portion of the value.
These models were then converted to HLS using hls4ml and synthesized through AMD Vitis versions 2023.2 and 2024.2, targeting the AMD Xilinx Alveo U250 FPGA board.
The model sets have the following parameter ranges (a conversion sketch follows the list):

  • Number of layers: 2–7 for fully-connected models; 3–7 for convolutional models.
  • Activation functions: Linear for most 2–3 layer fully-connected models; ReLU, tanh, and sigmoid for all other fully-connected models and convolutional models.
  • Number of features/neurons: 8–128 (step size: 8 for 2–3 layer models) for fully-connected models; 32–128 for convolutional models, with 8–64 filters.
  • Weight and bias bit precision: 2–16 bits (step size: 2) for 2–3 layer fully-connected models, 4–16 bits (step size: powers of 2) for 3–7 layer fully-connected and convolutional models.
  • hls4ml target reuse factor: 1–4093 for fully-connected models; 8192–32795 for convolutional models.
  • hls4ml implementation strategy: Resource strategy, which controls the degree of parallelism by explicitly specifying the number of MAC operations performed in parallel per clock cycle, is used for most fully-connected models and all convolutional models; Latency strategy, where the computation is unrolled, is used for some 3–7 layer fully-connected models.
  • hls4ml I/O type: The io_parallel setting, which wires the output of one layer directly to the input of the next, is used for all fully-connected models; the io_stream setting, which places FIFO buffers between layers, is used for all convolutional models.
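
Below is a minimal sketch of this conversion flow using the public hls4ml Python API. The tiny stand-in model, the specific settings, and the Alveo U250 part number are illustrative assumptions, not the dataset's exact generation script.

    import hls4ml
    from tensorflow import keras
    from tensorflow.keras import layers

    # A tiny stand-in Keras model (any Keras model works here)
    model = keras.Sequential([
        keras.Input(shape=(16,)),
        layers.Dense(32, activation='relu'),
        layers.Dense(5, activation='softmax'),
    ])

    # Derive a model-level hls4ml config, then set precision, reuse, and strategy
    config = hls4ml.utils.config_from_keras_model(model, granularity='model')
    config['Model']['Precision'] = 'ap_fixed<8,1>'  # X = 8 total bits, 1 integer bit
    config['Model']['ReuseFactor'] = 128            # limits parallel MAC operations
    config['Model']['Strategy'] = 'Resource'

    hls_model = hls4ml.converters.convert_from_keras_model(
        model,
        hls_config=config,
        backend='Vitis',
        io_type='io_parallel',            # io_stream for convolutional models
        output_dir='prj_example',
        part='xcu250-figd2104-2L-e',      # assumed Alveo U250 part number
    )
    hls_model.build(csim=False, synth=True, vsynth=True)  # C synthesis + logic synthesis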

Exemplar Model Synthesis Parameters

The exemplar models were synthesized with the following parameters:

Hyperparameter        Values
Precision             ap_fixed<2,1>, ap_fixed<8,3>, ap_fixed<16,6>
Strategy              Latency, Resource
Target reuse factor   1, 128, 1024
Target board          Alveo U200, Alveo U250
Target clock          5 ns, 10 ns
Vivado version        2019.1, 2020.1

The synthesis was repeated multiple times, varying the hls4ml reuse factor, a tunable setting that proportionally limits the number of multiplication operations used.
The hls4ml conversion, HLS synthesis, and logic synthesis of the train and test sets were all performed in parallel on the National Research Platform Kubernetes Hypercluster and the Texas A&M ACES HPRC Cluster.
On the National Research Platform, synthesis ran inside containers with a guest OS of Ubuntu 20.04.4 LTS; the containers were slightly modified versions of the xilinx-docker v2023.2 "user" images, each pod having 3 virtual CPU cores and 16 GB of RAM, with all AMD tools mounted through a Ceph-based persistent volume.
Jobs on the Texas A&M ACES HPRC Cluster ran Vitis 2024.2, each with 2 virtual CPU cores and 32 GB of RAM.

For each sample in the dataset, the resulting projects, reports, logs, and a JSON file containing the resource/latency usage and estimates from C synthesis and logic synthesis were collected.
The data, excluding the projects and logs, were then further processed into a collection of JSON files, distributed alongside this paper and described below.
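
A minimal sketch of loading one of these per-sample JSON files follows. The key names below are hypothetical placeholders for illustration; consult the actual files in the companion wa-hls4ml dataset for the real schema.

    import json

    # File name taken from the conv1d index; keys below are hypothetical
    with open('f63305fde50b2f0143f70f53b0dcd777.json') as f:
        report = json.load(f)

    print(report.get('resource_report'))  # hypothetical: LUT/FF/DSP/BRAM usage
    print(report.get('latency_report'))   # hypothetical: latency and II estimates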


Uses

This dataset is intended to be used to train or refine LLMs to better generate HLS and VHDL code, and to improve their understanding of the general C- and logic-synthesis processes, so that they can better assist in debugging and question answering for FPGA tooling and hls4ml.

Direct Use

This dataset is generated using the tool hls4ml, and should be used to train LLMs and/or other models for HLS/VHDL code generation, along with improving question answering and understanding of (currently) Vivado/Vitis and hls4ml workflows.

Out-of-Scope Use

As this dataset is generated using the hls4ml and Vivado/Vitis tools, it should not be used to train LLMs and/or other models for other toolchains, as results and implementation details may vary across tools.


Dataset Structure

Within each subset, excluding the exemplar test set, the data is grouped as follows.

  • 2_20 (rule4ml): The updated rule4ml dataset, containing fully-connected neural networks that were randomly generated with layer counts between 2 and 20 layers, using hls4ml resource and latency strategies.
  • 2_layer: A subset containing 2-layer deep fully-connected neural networks generated via a grid search using hls4ml resource and io_parallel strategies.
  • 3_layer: A subset containing 3-layer deep fully-connected neural networks generated via a grid search using hls4ml resource and io_parallel strategies.
  • conv1d: A subset containing 3–7 layer deep 1-dimensional convolutional neural networks that were randomly generated and use hls4ml resource and io_stream strategies.
  • conv2d: A subset containing 3–7 layer deep 2-dimensional convolutional neural networks that were randomly generated and use hls4ml resource and io_stream strategies.
  • latency: A subset containing 3–7 layer deep fully-connected neural networks that were randomly generated and use hls4ml latency and io_parallel strategies.
  • resource: A subset containing 3–7 layer deep fully-connected neural networks that were randomly generated and use hls4ml resource and io_parallel strategies.

Structure of the CSV Index files

There is one CSV index file for each model type split. Each file has 3 fields:

  • Model Name: The name of the model, used to reference the corresponding JSON file in the wa-hls4ml dataset. Note: split the string at the last _ character to find the corresponding JSON file; the string to the left of the _ is the source model name, and the string to the right is the target reuse factor for that specific project.
  • Artifacts File: The name of the specific artifacts file that contains the project for the specified model.
  • Archive Name: The name of the archive that contains the specific artifacts file for the specified model.
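
For example, a minimal sketch of reading an index CSV with pandas and splitting a model name into its source model and target reuse factor (pandas assumed; the first row is used as the example):

    import pandas as pd

    # Index CSV for one split; columns: Model Name, Artifact File, Archive Name
    index = pd.read_csv('conv1d.csv')
    row = index.iloc[0]

    # Split at the LAST underscore: left side is the source model name,
    # right side is the target reuse factor for this project
    source_model, reuse_factor = row['Model Name'].rsplit('_', 1)
    print(source_model, reuse_factor)
    print(row['Artifact File'], row['Archive Name'])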

Structure of the project archives

Due to file size and count limitations, the individual project archives are split into batches and placed into one of a number of tar.gz files. To locate a specific project file, refer to the index CSV files described above.

Each project archive contains the complete Vivado/Vitis project in its original structure, including the resulting HLS and VHDL code, logs, reports, intermediate representations, and the source Keras model file.
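
Continuing the example above, here is a minimal sketch of unpacking one project with the standard tarfile module. The archive and artifact names are sample values from the conv1d index; the assumption that the per-model artifact tarball sits at the top level of the batch archive may not match the actual internal layout.

    import tarfile

    # Sample values as they appear in the index CSV
    archive_name = 'archived/archive_1.tar.gz'
    artifact_file = 'f63305fde50b2f0143f70f53b0dcd777.tar.gz'

    # Extract the per-model artifact tarball from the batch archive,
    # then unpack the project itself
    with tarfile.open(archive_name, 'r:gz') as batch:
        batch.extract(artifact_file, path='artifacts/')

    with tarfile.open(f'artifacts/{artifact_file}', 'r:gz') as project:
        project.extractall(path='projects/')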


Curation Rationale

With the introduction of ML into FPGA toolchains, e.g., for resource and latency prediction or code generation, there is a significant need for large datasets to support and train these tools.
We found that existing datasets were insufficient for these needs, and therefore sought to build a dataset and a highly scalable data generation framework that is useful for a wide variety of research surrounding ML on FPGAs.
This dataset serves as one of the few openly accessible, large-scale collections of synthesized neural networks available for ML research.

Exemplar Realistic Models

The exemplar models utilized in this study include several key architectures, each tailored for specific ML tasks and targeting scientific applications with low-latency constraints.

Source Data

The data was generated from randomly generated neural networks and specifically selected exemplar models, converted into HLS code via hls4ml. Latency values were collected after performing C synthesis through Vivado/Vitis HLS on the resulting HLS code, and resource values were collected after performing logic synthesis through Vivado/Vitis on the resulting HDL code. The projects were then stored in tar.gz files and distributed in this dataset.

Who are the source data producers?

Benjamin Hawks, Fermi National Accelerator Laboratory, USA

Hamza Ezzaoui Rahali, University of Sherbrooke, Canada

Mohammad Mehdi Rahimifar, University of Sherbrooke, Canada

Personal and Sensitive Information

This data contains no personally identifiable or sensitive information except for the names/usernames of the authors in some file paths.

Bias, Risks, and Limitations

In its initial form, the majority of this dataset consists of very small (2–3 layer) dense neural networks without activations. This should be considered when training a model on it, and appropriate measures should be taken to weight the data at training time. We intend to update this dataset continuously, addressing this imbalance over time as more data is generated.

Recommendations

Appropriate measures should be taken to weight the data to account for the dataset imbalance at training time.
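
One possible approach, sketched below, is inverse-frequency sample weighting by subset. The subset labels follow the grouping described above; the weighting scheme itself is an illustrative choice, not a prescription.

    from collections import Counter

    # One subset label per training sample, e.g. '2_layer', 'conv1d', 'latency'
    subsets = ['2_layer', '2_layer', '2_layer', 'conv1d', 'latency']

    counts = Counter(subsets)
    n_samples, n_subsets = len(subsets), len(counts)

    # Rare subsets receive proportionally larger sample weights
    weights = [n_samples / (n_subsets * counts[s]) for s in subsets]
    print(weights)  # [0.556, 0.556, 0.556, 1.667, 1.667]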

Citation

Paper currently in review.

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Dataset Card Authors

Benjamin Hawks, Fermi National Accelerator Laboratory, USA

Hamza Ezzaoui Rahali, University of Sherbrooke, Canada

Mohammad Mehdi Rahimifar, University of Sherbrooke, Canada

Dataset Card Contact

[email protected]
