[SOLVED] What is the correct way to download CT-RATE v2?
Hello, I am trying to download the dataset and have tried several methods:

- Using the scripts from https://github.com/sezginerr/example_download_script:
  - `snapshot_download.py` doesn't work; I immediately get `Fetching 0 files: 0it [00:00, ?it/s]` and the script exits.
  - `download_only_valid_data.py` downloads the `valid` folder, not `valid_fixed`. Same for train.
  - There is no script to download the TotalSegmentator segmentations.
The last commit to that repo was last year, and v2 was released some months ago, so I presume the scripts don't support v2.
I also tried the command in https://huggingface.co/datasets/ibrahimhamamci/CT-RATE/discussions/88 (which is `huggingface-cli download ...`), but got the same output as with `snapshot_download.py`.

For now I am downloading with `git clone [email protected]:datasets/ibrahimhamamci/CT-RATE`, as it is the only approach that actually downloads anything, though I know this is not a recommended way. The command's output is:
```
Cloning into 'CT-RATE'...
remote: Enumerating objects: 485822, done.
remote: Counting objects: 100% (3/3), done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 485822 (delta 0), reused 0 (delta 0), pack-reused 485819 (from 1)
Receiving objects: 100% (485822/485822), 60.97 MiB | 10.33 MiB/s, done.
Resolving deltas: 100% (169/169), done.
Updating files: 100% (251051/251051), done.
Filtering content: 9% (23862/250996), 5309.30 GiB | 32.92 MiB/s
```
That suggests the full dataset is around 50 TB (5,309 GiB downloaded at roughly 9.5% of the files extrapolates to about 55 TiB). Is that correct? I have seen some discussions saying that v1 is about 11 TB. If there is a way, I would be happy to skip the v1 `train`/`valid` folders entirely, since I only need the fixed versions.

I'd love to hear the correct way to download v2, ideally in a restart-safe way like `huggingface-cli`.
Thank you very much, and thank you for your contribution!
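For reference, something like the sketch below is what I'm hoping for: `snapshot_download` restricted to the fixed folders via `allow_patterns` (the `dataset/*_fixed/*` glob patterns are my guess at the repo layout), wrapped in the same retry loop as in the note below.

```python
import time
from huggingface_hub import snapshot_download

while True:
    try:
        # Only fetch the fixed folders; the patterns are assumed, so adjust
        # them to the actual repo layout.
        snapshot_download(
            repo_id="ibrahimhamamci/CT-RATE",
            repo_type="dataset",
            local_dir="./dataset",
            allow_patterns=["dataset/train_fixed/*", "dataset/valid_fixed/*"],
        )
        break
    except Exception as e:
        print(f"Download interrupted: {e}. Retrying in 1s...")
        time.sleep(1)
```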
Note: I was able to download CT-RATE v1 BEFORE the v2 update (I can't now, as the script exits immediately) using `snapshot_download.py` from the example_download_script repo; I only changed the script to auto-restart:
```python
import time
from huggingface_hub import snapshot_download

# Retry loop taken from: https://github.com/huggingface/huggingface_hub/issues/1542
while True:
    try:
        snapshot_path = snapshot_download(
            repo_id="ibrahimhamamci/CT-RATE",
            repo_type="dataset",
            local_dir="./dataset",
        )
        break  # finished successfully
    except Exception as e:  # e.g. a dropped connection mid-download
        print(f"Download interrupted: {e}. Retrying in 1s...")
        time.sleep(1)
```
Same here. I tried all of the download scripts (`download_only_train`, `download_only_val`, `snapshot_download`) but got `Fetching 0 files` every time.
I'm not sure about the `snapshot_download.py` issue, but I downloaded v1 several months ago using the adapted version of `download_only_train_data.py` from https://huggingface.co/datasets/ibrahimhamamci/CT-RATE/discussions/64 (farrell236's comment, about halfway down). Just changing the directory to `directory_name = f'dataset/{split}_fixed/'` should work, so you don't download v1 again; a sketch of the adapted loop is below. The v2 fixed version seems to be downloading properly for me this way. The script I used is here: https://huggingface.co/datasets/ibrahimhamamci/CT-RATE/blob/main/download_scripts/download_dataset.py
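A minimal sketch of that change, with the per-volume folder layout inferred from the linked script (paths and CSV names are assumptions and may need adjusting to your setup):

```python
import os

import pandas as pd
from huggingface_hub import hf_hub_download

split = 'train'
directory_name = f'dataset/{split}_fixed/'  # the one-line change: point at the fixed folder
data = pd.read_csv(f'{split}_labels.csv')

for name in data['VolumeName']:
    # e.g. train_1_a_1.nii.gz lives under dataset/train_fixed/train_1/train_1_a/
    prefix, n, x = name.split('_')[:3]
    subfolder = f'{directory_name}{prefix}_{n}/{prefix}_{n}_{x}'
    hf_hub_download(
        repo_id='ibrahimhamamci/CT-RATE',
        repo_type='dataset',
        token=os.getenv('HF_TOKEN'),
        subfolder=subfolder,
        filename=name,
        local_dir='./CT-RATE',
    )
```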
What is the advantage of downloading with the sezginerr/example_download_script scripts compared to `git clone` or `huggingface-cli download`?
Updating this discussion: I updated the `download_dataset.py` script to download `train_fixed` and the segmentations (TotalSegmentator only, i.e. `ts_total`, but it can be adapted for all segmentation sets). I put both scripts in a repo, https://github.com/yinon-gold/CT-RATE-download-scripts, and I'm planning to open a pull request to sezginerr/example_download_script.
The `download_dataset.py` there is a small change from https://huggingface.co/datasets/ibrahimhamamci/CT-RATE/blob/main/download_scripts/download_dataset.py.
This is the script I used to download the TotalSegmentator segmentations, available in the repo as `download_segmentations.py`:
```python
import os
import shutil  # only needed for the optional cache cleanup noted at the end
import sys

import pandas as pd
from huggingface_hub import hf_hub_download
from tqdm import tqdm

split = 'train_fixed'
batch_size = 100
start_at = 0  # index to resume from after an interrupted run
repo_id = 'ibrahimhamamci/CT-RATE'
directory_name = f'dataset/{split}/'
hf_token = os.getenv("HF_TOKEN")
data = pd.read_csv(f'{split.split("_")[0]}_labels.csv')  # changed: read the labels CSV for the split

for i in tqdm(range(start_at, len(data), batch_size)):
    data_batched = data[i:i + batch_size]
    for name in data_batched['VolumeName']:
        # e.g. train_1_a_1.nii.gz maps to dataset/train_fixed/train_1/train_1_a/
        folder1 = name.split('_')[0]
        folder2 = name.split('_')[1]
        folder = folder1 + '_' + folder2
        folder3 = name.split('_')[2]
        subfolder = folder + '_' + folder3
        subfolder = directory_name + folder + '/' + subfolder
        # added: rewrite the path to point at the TotalSegmentator folder,
        # i.e. dataset/ts_seg/ts_total/train_fixed/...
        ll = subfolder.split('/')
        seg_folder = os.path.join(ll[0], 'ts_seg', 'ts_total', *ll[1:])
        try:
            hf_hub_download(
                repo_id=repo_id,
                repo_type='dataset',
                token=hf_token,
                subfolder=seg_folder,
                filename=name,
                local_dir='<PATH_TO_CT-RATE>',
                resume_download=True,  # deprecated in recent huggingface_hub; downloads resume by default
            )
        except Exception as e:
            # log failed volumes to errors.txt and stderr, then keep going
            with open('errors.txt', 'a') as f:
                f.write(f"{e} :: {name=} {seg_folder=}\n")
            print(f"{e} :: {name=} {seg_folder=}", file=sys.stderr)

# Note: hf_hub_download creates a .cache/huggingface/ folder inside local_dir;
# it can be deleted once the download is complete, e.g.:
# shutil.rmtree('<PATH_TO_CT-RATE>/.cache/huggingface/')
```