BHI100 SISR Validation Set

A SISR (single image super-resolution) validation set for IQA metrics, consisting of one hundred 480x480px HR images, with corresponding x2 (240x240px), x3 (160x160px) and x4 (120x120px) bicubic downsamples.

BHI100 Validation Set Visual Overview

Background

SISR IQA metric sets commonly used in papers include Set5, Set14, BSD100, Urban100 and Manga109.
Of these, I was using Urban100 for validation to have reference points when working on my latest pretrains, like the SRVGGNet one (2xBHI_small_compact_pretrain) or the RealPLKSR one (2xBHI_small_realplksr_dysample_pretrain).

But what bothered me was its non-uniformity in image dimensions. The img004.png HR file (from the benchmark.zip file on the DAT repo) is 1024x681px, which is divisible by neither 2 nor 4. This can lead to problems when downscaling: I was not able to match the official x4 of that image, neither with Pillow bicubic nor with Mitchell nor with any other downsampling algorithm. It seems MATLAB bicubic gives a different result than Pillow bicubic in this case.
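The problem is easy to check programmatically; a minimal sketch with Pillow (the file name comes from the example above):

```python
from PIL import Image

img = Image.open("img004.png")  # the Urban100 HR from benchmark.zip
w, h = img.size                 # 1024, 681

for scale in (2, 3, 4):
    divisible = (w % scale == 0) and (h % scale == 0)
    print(f"x{scale}: {w}x{h} -> divisible: {divisible}")

# 681 is odd, so the x2 target would be 340.5px: every implementation
# has to round somewhere, and MATLAB and Pillow bicubic end up
# disagreeing on images like this.
```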

Not only does downscaling become a mess this way, but when training SISR models I would also run into validation errors because of the odd, per-image dimensions in Urban100:

Urban100 validation error because of image dimensions in my training frameworks

Tiling into uniform dimensions is something I have been doing for my training sets. This whole mess was getting on my nerves, so I decided to make this BHI100 validation set as a remedy.
First I merged together the HRs of the Set5, Set14, BSD100, Urban100, Manga109, DIV2K and LSDIR validation sets.
Then I decided on 480x480px image dimensions, since 480 is easily divisible by 2, 3 and 4, so the corresponding downscaled sets could be created without any mess. Images with a dimension smaller than 480px were filtered out (which removed the full BSD100).
After that, I used both Lanczos downsampling (as per ImageMagick's defaults for photographs; read this article) and center cropping to produce the 480x480px HR images.
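A minimal sketch of that step, assuming the shorter side is resized to 480px first and the center crop is taken afterwards (the original pipeline may have used ImageMagick rather than Pillow here):

```python
from PIL import Image

def to_480_hr(path: str) -> Image.Image:
    """Resize shorter side to 480px with Lanczos, then center crop to 480x480."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    factor = 480 / min(w, h)
    img = img.resize((round(w * factor), round(h * factor)),
                     Image.Resampling.LANCZOS)
    w, h = img.size
    left, top = (w - 480) // 2, (h - 480) // 2
    return img.crop((left, top, left + 480, top + 480))
```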
To further filter the validation set, I used my BHI filtering method: first scoring all images, then filtering by blockiness < 2 (removed 106 images) and HyperIQA >= 0.7 (removed 156 images), and finally by IC9600 so that exactly 100 images remained, which resulted in an IC9600 score > 0.6.
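In sketch form, that filtering stage amounts to thresholding a table of precomputed scores. The CSV file and column names below are hypothetical; the actual scoring uses the blockiness, HyperIQA and IC9600 models from the BHI method:

```python
import pandas as pd

# Hypothetical score table: one row per image with precomputed BHI scores
# (columns: filename, blockiness, hyperiqa, ic9600).
scores = pd.read_csv("bhi_scores.csv")

kept = scores[scores["blockiness"] < 2]   # drop compression-blocky images
kept = kept[kept["hyperiqa"] >= 0.7]      # keep perceptually high-quality images
# Keep the 100 most complex images by IC9600; for this set the
# 100th image still scored above 0.6.
kept = kept.sort_values("ic9600", ascending=False).head(100)
kept["filename"].to_csv("bhi100_files.txt", index=False, header=False)
```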

BHI scoring / filtering
Number of images from each official set that survived the filtering

Sets with 200 and 300 images were created this way as well, but I decided on 100 as the set size, for faster validation/processing and lower storage needs (images are saved from each validation run).
The corresponding x2, x3 and x4 bicubic downsampled sets were created with Pillow. They are included in the BHI100.zip file.
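Since every HR is exactly 480x480px, those LR sets reduce to a clean integer resize; a minimal sketch (folder names are assumptions, not necessarily the zip's layout):

```python
from pathlib import Path
from PIL import Image

hr_dir = Path("HR")  # assumed folder name
for scale in (2, 3, 4):
    out_dir = Path(f"x{scale}")
    out_dir.mkdir(exist_ok=True)
    size = (480 // scale, 480 // scale)  # 240, 160, 120
    for hr_path in sorted(hr_dir.glob("*.png")):
        lr = Image.open(hr_path).resize(size, Image.Resampling.BICUBIC)
        lr.save(out_dir / hr_path.name)
```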
Additionally, I confirmed that Pillow bicubic downsampling produces the same result as what was used for Urban100 (when the image dimensions actually are divisible, like for img14 here):

Pillow bicubic x2 and the provided Urban100 x2 produce the same result
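Such a check can be reproduced by comparing the two downsamples pixel by pixel; a minimal sketch with NumPy (the paths are placeholders):

```python
import numpy as np
from PIL import Image

official = np.asarray(Image.open("official_x2.png"))   # provided x2 LR
hr = Image.open("hr.png")                              # matching HR
mine = np.asarray(hr.resize((official.shape[1], official.shape[0]),
                            Image.Resampling.BICUBIC))

diff = np.abs(official.astype(int) - mine.astype(int))
print("identical:", bool((diff == 0).all()), "| max abs diff:", int(diff.max()))
```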

Using Mitchell downsampling, on the other hand, produces slight differences from the provided x2 (purple dots):

Mitchell and the provided Urban100 x2 have slight differences in contrast

And lastly, I ran multiple no-reference quality metrics on my set, with Urban100 as a reference point (which, in contrast, has much bigger HRs):

Multiple no-reference set metrics
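Such no-reference scores can be computed with the pyiqa package, for example; a minimal sketch (metric choice and paths are illustrative, not necessarily what produced the table above):

```python
import pyiqa

# No-reference metrics score a single image without a ground-truth reference.
metrics = {name: pyiqa.create_metric(name) for name in ("niqe", "hyperiqa", "musiq")}

for name, metric in metrics.items():
    score = metric("BHI100/HR/example.png")  # placeholder path
    print(f"{name}: {float(score):.4f}")
```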

I have now started using this set to calculate full-reference (FR) metrics on model outputs:

FR model output metrics
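For the FR side, each model output is compared against its matching HR; a minimal pyiqa sketch (metric selection and paths are illustrative):

```python
import pyiqa

# Full-reference metrics compare a model output against the ground-truth HR.
psnr = pyiqa.create_metric("psnr")
ssim = pyiqa.create_metric("ssim")
lpips = pyiqa.create_metric("lpips")

out, ref = "model_outputs/example.png", "BHI100/HR/example.png"  # placeholders
print("psnr :", float(psnr(out, ref)))
print("ssim :", float(ssim(out, ref)))
print("lpips:", float(lpips(out, ref)))  # lower is better
```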