Dataset Card for SRMRI

SRMRI is a curated MRI super-resolution collection of 2D slice pairs and high-resolution volumes, designed for both unsupervised and supervised learning.

Dataset Details

Dataset Description

Abstract

Existing deep learning methods for medical image super-resolution (SR) often rely on paired datasets generated by simulating low-resolution (LR) images from corresponding high-resolution (HR) scans, which can introduce biases and degrade real-world performance. To overcome these limitations, we present an unsupervised approach based on a score-based diffusion model that does not require paired training data. We train a score-based diffusion model using denoising score matching on HR Magnetic Resonance Imaging (MRI) scans, then perform iterative refinement with a stochastic differential equation (SDE) solver while enforcing data consistency with the LR scans. Our method provides faster sampling than existing generative approaches and achieves competitive results on key metrics, though it does not surpass fully supervised baselines in PSNR and SSIM. Notably, while supervised models often report higher numerical metrics, we observe that they can produce suboptimal reconstructions due to their reliance on fixed upscaling kernels. Finally, we introduce the SRMRI dataset, containing LR and HR images acquired directly from the scanner for training and evaluating MR image super-resolution models.
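
The reconstruction loop summarized above (reverse-SDE refinement plus a data-consistency update from the LR scan) can be illustrated with a short sketch. This is only a simplified illustration: score_model, sde, the downsampling operator A, its approximate inverse A_pinv, and dc_weight are hypothetical placeholders, not the authors' implementation.

import torch

def reverse_step_with_dc(x_t, t, y_lr, score_model, sde, A, A_pinv, dc_weight=1.0):
    """One reverse-SDE (Euler-Maruyama) step followed by a data-consistency update.

    x_t    : current HR estimate at noise level t
    y_lr   : observed low-resolution scan
    A      : forward (downsampling) operator HR -> LR
    A_pinv : approximate pseudo-inverse LR -> HR, e.g. interpolation
    """
    dt = 1.0 / sde.num_steps
    f, g = sde.drift(x_t, t), sde.diffusion(t)
    score = score_model(x_t, t)                      # learned score: grad log p_t(x)
    z = torch.randn_like(x_t)

    # Backward Euler-Maruyama step of the reverse-time SDE
    x = x_t - (f - g**2 * score) * dt + g * (dt ** 0.5) * z

    # Data consistency: nudge the estimate toward images that reproduce y_lr
    x = x - dc_weight * A_pinv(A(x) - y_lr)
    return x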

Proceedings of Machine Learning Research 28:1–16, 2025. Accepted at MIDL 2025.

Dataset preparation

Acquisition and Pairing

  • Scanner details: 9.4 T GRE sequence, 25 µm (HR: 720×512×304) & 50 µm (LR: 360×256×152) volumes.
  • Slice selection: removed noisy slices → ~3,000 training slices.
  • Pairing method (sketched below):
    1. Downsample HR to LR dimensions.
    2. Compute PCA, LPIPS, and SSIM distances.
    3. Apply a voting scheme; fall back to visual inspection if there is no consensus.
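
The pairing step could look roughly like the sketch below. The helper names (pair_lr_slice, distance_fns) are hypothetical and the exact metrics and thresholds used by the authors may differ; the SSIM distance is shown concretely, while the PCA and LPIPS distances are left as placeholders.

import numpy as np
from collections import Counter
from skimage.metrics import structural_similarity as ssim
from skimage.transform import resize

def pair_lr_slice(lr_slice, hr_volume, distance_fns):
    """Match one LR slice against every HR slice via a simple majority vote."""
    votes = []
    for name, dist in distance_fns.items():
        scores = [dist(lr_slice, resize(hr_slice, lr_slice.shape))  # downsample HR to LR dims
                  for hr_slice in hr_volume]
        votes.append(int(np.argmin(scores)))         # each metric votes for its nearest HR slice
    idx, n_votes = Counter(votes).most_common(1)[0]
    return idx if n_votes >= 2 else None             # None -> no consensus, inspect visually

# Example distance (lower is better); PCA and LPIPS distances would be added analogously,
# e.g. Euclidean distance in a shared PCA space or the `lpips` package on torch tensors.
distance_fns = {
    "ssim": lambda a, b: 1.0 - ssim(a, b, data_range=1.0),
}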

Preprocessing

  • Converted NIfTI volumes to NumPy arrays (see the sketch below).
  • Normalized intensities to [0, 1].
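
A minimal conversion sketch, assuming nibabel is installed; the file names are hypothetical and the authors' exact pipeline may differ.

import numpy as np
import nibabel as nib

def nifti_to_normalized_npy(nifti_path, out_path):
    vol = nib.load(nifti_path).get_fdata().astype(np.float32)  # load NIfTI volume
    vol = (vol - vol.min()) / (vol.max() - vol.min() + 1e-8)   # normalize intensities to [0, 1]
    np.save(out_path, vol)                                      # store as .npy

nifti_to_normalized_npy("HR_AD_F3_133.nii.gz", "HR_AD_F3_133.npy")  # hypothetical filenames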

Uses

Splits & Format

All files are stored as float32 NumPy arrays normalized to [0, 1]. Each 2D slice has shape 720 × 512 (HR) or 360 × 256 (LR). A quick sanity check of the split layout is sketched after the table.

Split              | Contents                                                            | Purpose
train_unsupervised | hr: HR slices (720×512); lr: zeros (360×256) for API compatibility | Train the unsupervised diffusion model on HR scans
train_supervised   | hr: HR slices (720×512); lr: LR scanner slices (360×256)           | Train supervised SR models
evaluate           | hr: HR slices (720×512); lr: LR scanner slices (360×256)           | Validation & testing
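
The check below is a sketch of how one might verify this layout, assuming the zero-filled lr placeholder in train_unsupervised behaves as described in the table.

import numpy as np
from datasets import load_dataset

ds = load_dataset("arpanpoudel/SRMRI").with_format("numpy")
for split in ("train_unsupervised", "train_supervised", "evaluate"):
    ex = ds[split][0]
    print(split, ex["hr"].shape, ex["lr"].shape,
          "lr is all zeros:", bool(np.all(ex["lr"] == 0)))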

Example Usage

from datasets import load_dataset
import torch

# Load the SRMRI dataset
ds = load_dataset("arpanpoudel/SRMRI")

# Return NumPy arrays (instead of Python lists) so that .shape and
# torch.from_numpy below work as expected
ds = ds.with_format("numpy")

# Inspect available splits
print(ds)
# DatasetDict({
#   train_unsupervised: Dataset({...}),
#   train_supervised:   Dataset({...}),
#   evaluate:           Dataset({...})
# })

# Get one supervised example
sample = ds["train_supervised"][0]
print(sample["filename"])         # e.g. "AD_F11_90_slice_1"
print(sample["lr"].shape, 
      sample["hr"].shape)         # (360, 256), (720, 512)

# Create a PyTorch DataLoader
def collate_fn(batch):
    lr = torch.stack([torch.from_numpy(x["lr"]) for x in batch]).unsqueeze(1)
    hr = torch.stack([torch.from_numpy(x["hr"]) for x in batch]).unsqueeze(1)
    return {"lr": lr, "hr": hr}

loader = torch.utils.data.DataLoader(
    ds["train_supervised"], 
    batch_size=8, 
    collate_fn=collate_fn
)

for batch in loader:
    print(batch["lr"].shape, batch["hr"].shape)
    # -> torch.Size([8, 1, 360, 256]), torch.Size([8, 1, 720, 512])
    break

If you use this dataset, cite:

@inproceedings{poudel2025srmri,
  title     = {SRMRI: A Diffusion-Based Super-Resolution Framework and Open Dataset for Blind MRI Super-Resolution},
  author    = {Poudel, Arpan and Shrestha, Mamata and Wang, Nian and Nakarmi, Ukash},
  booktitle = {Proceedings of Machine Learning Research},
  series    = {MIDL},
  pages     = {28:1--16},
  year      = {2025},
  url       = {https://github.com/arpanpoudel/SRMRI}
}