
Summary

This dataset was produced using PyChemAuth and pyts to transform the pgaa-sample dataset into images using a Gramian Angular Difference Field (GADF). Below is an example image.

(Example GADF image of a PGAA spectrum.)

This data was first presented in Mahynski, N.A., Sheen, D.A., Paul, R.L. et al. "Encoding PGAA spectra as images for material classification with convolutional neural networks." J Radioanal Nucl Chem (2025).

Also see the GitHub repository.
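For intuition, the core transformation can be sketched directly with pyts before diving into the full pipeline below. This is a minimal, self-contained example; the array X and its dimensions here are illustrative stand-ins, not the real dataset:

import numpy as np
from pyts.image import GramianAngularField

# Toy stand-in for a batch of 1D spectra: 3 "spectra" of 128 bins each.
X = np.random.rand(3, 128)

# method='difference' selects the Gramian Angular Difference Field.
gadf = GramianAngularField(method='difference')

# Each row of X becomes an (n_bins, n_bins) image.
images = gadf.fit_transform(X)
print(images.shape)  # (3, 128, 128)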

Generation

The code to reproduce this dataset from scratch is given below.

import sklearn.model_selection

from pyts.image import GramianAngularField

from pychemauth import utils
from pychemauth.datasets import make_pgaa_images

image_size = (2631, 2631, 1)

# First, index classes 0-9 for those with many observations.

res = make_pgaa_images(
    transformer=GramianAngularField(method='difference'), 
    exclude_classes=['Carbon Powder', 'Phosphate Rock', 'Zircaloy'], 
    directory='./2d-dataset', 
    overwrite=False, 
    fmt='npy', 
    valid_range=(0, image_size[0]), 
    renormalize=True,
    test_size=0.2,
    random_state=42
)

# Next, index the remainder as [10, 11, 12].
res_challenge = make_pgaa_images(
    transformer=GramianAngularField(method='difference'), 
    exclude_classes=[
        'Biomass', 
        'Coal and Coke', 
        'Concrete', 
        'Dolomitic Limestone',
        'Forensic Glass', 
        'Fuel Oil', 
        'Graphite/Urea Mixture',
        'Lubricating Oil', 
        'Steel', 
        'Titanium Alloy'
    ], # Exclude the ones we already trained on 
    valid_range=(0, image_size[0]), 
    renormalize=True,
    test_size=0.0,
)
X_challenge, _, y_challenge, _, _, encoder_challenge = res_challenge

# Map the y_challenge classes of [0, 1, 2] -> [10, 11, 12]
y_challenge += 10

# Split the challenge set (classes [10, 11, 12]) into test/train folds.
Xc_train, Xc_test, yc_train, yc_test = sklearn.model_selection.train_test_split(
    X_challenge, y_challenge, test_size=0.2, stratify=y_challenge, random_state=42, shuffle=True
)

# Write the challenge set to disk so it can be used with data loaders.
_ = utils.write_dataset(
    directory='./2d-dataset/train',
    X=Xc_train,
    y=yc_train,
    overwrite=False,
    augment=True
)

_ = utils.write_dataset(
    directory='./2d-dataset/test',
    X=Xc_test,
    y=yc_test,
    overwrite=False,
    augment=True
)
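As a quick sanity check after generation, you can load one of the written arrays and confirm its shape. A minimal sketch, assuming the .npy files land under ./2d-dataset/train (the layout implied by the directory and fmt arguments above, not a documented guarantee):

import glob
import numpy as np

files = sorted(glob.glob('./2d-dataset/train/**/*.npy', recursive=True))
sample = np.load(files[0])
print(files[0], sample.shape)  # Expect dimensions compatible with image_size = (2631, 2631, 1)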

This results in the following map of indices to material names, along with the number of train and test observations for each:

Index  Material Name          N_train  N_test
0      Biomass                     25       6
1      Coal and Coke               74      19
2      Concrete                    32       8
3      Dolomitic Limestone         11       3
4      Forensic Glass              18       4
5      Fuel Oil                    21       5
6      Graphite/Urea Mixture       11       3
7      Lubricating Oil             31       8
8      Steel                       11       3
9      Titanium Alloy               9       2
10     Carbon Powder                5       1
11     Phosphate Rock               5       2
12     Zircaloy                     6       2

Original 1D Spectra

Description

The PGAA spectra were originally reported in Mahynski et al. (2023). See that publication for a full description. Briefly, rows of X are prompt gamma-ray activation analysis (PGAA) spectra for different materials. They have been normalized to sum to 1. The peaks have been binned into histograms whose centers (energy in keV) are given as the columns. From Mahynski et al. (2023):

Our dataset consists of a variety of samples of different organic and inorganic materials. [The histogram below] shows a summary of the different categories of materials used. Various SRMs and materials were selected as representative of each class, and complete descriptions of the selected materials in each category are available in the Supplemental Information (SI). For example, "steel" contains samples of various alloys, and "biomass" contains samples ranging from wood chips to plant leaves. PGAA spectra were collected as histograms. The instrument used at NIST to obtain this data collects spectra in 2^14 = 16,384 energy bins spaced evenly to cover a range of up to approximately 12 MeV. The energy value for each bin is estimated by a calibration run which produces a linear fit of bin index to energy. This means the numerical energy value of a bin can vary slightly between measurements. All spectra were aligned to 2^14 new bins evenly spaced between the global minimum and maximum energies in the dataset by linearly interpolating each spectrum at the fixed bin centers. Next, we coarsened the spectra by summing every 4 bins to produce aligned spectra with 2^12 = 4,096 total bins. Since very low energy portions of the spectra are considered unreliable, we removed the first 40 bins so that the spectra spanned from approximately 0.1 to 12 MeV using 4,056 bins. Finally, we normalized each spectrum so that the total number of counts summed to unity; while it is possible to normalize these measurements using the length of collection time, calibrated neutron flux, and the mass of the sample we found an empirical normalization to be simpler, and more consistent as it does not depend on the accurate measurement of other factors. The energy range over which spectra are collected varies. Bins beyond an individual measurement’s limit are fixed at the last measured value, creating an artifact... Detector efficiency also decreases non-linearly at higher energies, leading to lower counts. As a result, both the global mean and variance over the dataset systematically decrease in higher energy regions of the spectra. These high-energy regions contain the artifacts, which should be regarded as spurious.
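The alignment, coarsening, and normalization steps quoted above are straightforward to express in numpy. A minimal sketch for a single spectrum, where energies, counts, and new_centers are illustrative stand-ins for one raw calibrated spectrum and the common bin grid:

import numpy as np

# Stand-ins: one raw spectrum on its own calibrated grid, plus the common grid.
energies = np.linspace(0.0, 12000.0, 2**14)                       # keV
counts = np.random.rand(2**14)                                    # raw counts
new_centers = np.linspace(energies.min(), energies.max(), 2**14)  # global grid

# Align the spectrum to the common grid by linear interpolation.
aligned = np.interp(new_centers, energies, counts)  # 2**14 = 16384 bins

# Coarsen by summing every 4 consecutive bins -> 2**12 = 4096 bins.
coarse = aligned.reshape(-1, 4).sum(axis=1)

# Drop the first 40 (unreliable, low-energy) bins -> 4056 bins.
trimmed = coarse[40:]

# Normalize so the total counts sum to unity.
spectrum = trimmed / trimmed.sum()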

The data here was obtained from github.com/mahynski/pgaa-material-authentication/data using the PyChemAuth package.

Citation for 1D Spectra

@article{Mahynski2023,
   author = {Nathan A. Mahynski and Jacob I. Monroe and David A. Sheen and Rick L. Paul and H. Heather Chen-Mayer and Vincent K. Shen},
   doi = {10.1007/s10967-023-09024-x},
   issn = {0236-5731},
   journal = {Journal of Radioanalytical and Nuclear Chemistry},
   month = {7},
   title = {Classification and authentication of materials using prompt gamma ray activation analysis},
   url = {https://link.springer.com/10.1007/s10967-023-09024-x},
   year = {2023},
}

Access

See the Hugging Face documentation on loading a dataset from the Hub. Briefly, you can access this dataset using the datasets library:

from datasets import load_dataset

# To use the dataset as an iterator directly:
dataset_images = load_dataset(
  "mahynski/pgaa-sample-gadf-images", 
  split="train",
  token="hf_*", # Enter your own token here
  trust_remote_code=True, # This is important to include
  name='images' # Select this configuration
)

# By default, however, just the filenames will be returned:
dataset_fnames = load_dataset(
  "mahynski/pgaa-sample-gadf-images", 
  split="train",
  token="hf_*", # Enter your own token here
  trust_remote_code=True, # This is important to include
  name='filenames' # This is the default
)

# You can convert this to an XLoader in PyChemAuth easily:
import pychemauth
from pychemauth.utils import NNTools

loader = NNTools.XLoader(
    x_files = [entry['filename'] for entry in dataset_fnames],
    y = [entry['label'] for entry in dataset_fnames],
    batch_size=10
)
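The loader can then be iterated batch-by-batch during training. A minimal sketch, assuming XLoader follows the usual keras-style Sequence protocol (indexable, with a length); check the PyChemAuth documentation for the exact interface:

for i in range(len(loader)):
    X_batch, y_batch = loader[i]  # up to batch_size images per batch
    # ... feed the batch to your model here ...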