Datasets:
Making streaming faster
I'm trying to iterate over the train_quality subset of the dataset and get some sample images for testing. I'm running this on Google Colab. However, it takes far longer than expected. This is my code:
```python
import numpy as np
import matplotlib.pyplot as plt
from datasets import load_dataset

train = load_dataset("Harvard-Edge/Wake-Vision", split="train_quality", streaming=True)

images = []
columns = ["bright", "dark", "far", "near", "normal_lighting", "predominantly_male", "predominantly_female"]

for col in columns:
    train_iter = iter(train)
    sample = next(train_iter)
    while sample[col] != 1:
        sample = next(train_iter)
    images.append(sample["image"])

fig, axes = plt.subplots(2, 3, figsize=(10, 6))
for i, image in enumerate(images):
    ax = axes[i // 3, i % 3]
    ax.imshow(np.array(image))
    ax.axis("off")
    ax.title.set_text(columns[i])
plt.show()
```
Is there a way to stream and iterate concurrently, or any other solution? I'm open to any advice and ideas.
Hey @adrinorosario !
We appreciate your interest in the Wake Vision Dataset!
From what I understand of your code, you want to find a collection of images from Wake Vision's fine-grained benchmarks (bright, dark, far, etc.) and display them. These labels only exist in the validation and test sets, so you will never find them in the train_quality split. The "poor performance" you describe most likely comes down to this: each inner while loop is searching for a label that cannot occur in the split you are using, so it keeps pulling samples until the iterator is exhausted.
If you still need better performance after fixing the issue above, I would suggest iterating over the dataset samples in your outer loop rather than over the columns. That way you make a single pass over the stream and can stop as soon as you have found one sample for each fine-grained label, instead of restarting the iterator for every column (and having to find a "bright" sample before you even start looking for a "dark" one).
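The single-pass idea could be sketched like this; `collect_one_per_label` is an illustrative helper, not part of the `datasets` API, and you would pass it the streaming validation or test split:

```python
def collect_one_per_label(samples, labels):
    """Single pass over a sample stream: keep the first image flagged 1
    for each label, and stop as soon as every label is covered."""
    found = {}
    for sample in samples:
        for label in labels:
            if label not in found and sample.get(label) == 1:
                found[label] = sample["image"]
        if len(found) == len(labels):
            break  # stop consuming the stream early
    return found

# Hypothetical usage with a streaming split that has these labels:
# val = load_dataset("Harvard-Edge/Wake-Vision", split="validation", streaming=True)
# images = collect_one_per_label(iter(val), ["bright", "dark", "far"])
```

Because the stream is consumed only once, the cost is bounded by how far you must read to see every label once, rather than one full scan per column.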
Finally, if you have the disk space available (which you may not on Google Colab), you can load the dataset without streaming (streaming=False) so it is downloaded and cached locally before use; iterating a local copy is considerably faster than streaming over the network.