# Aphasia Recovery Cohort (ARC)

Multimodal neuroimaging dataset for stroke-induced aphasia research.

## Dataset Summary
The Aphasia Recovery Cohort (ARC) is a large-scale, longitudinal neuroimaging dataset containing multimodal MRI scans from 230 chronic stroke patients with aphasia. This HuggingFace-hosted version provides direct Python access to the BIDS-formatted data with embedded NIfTI files.
| Metric | Count |
|---|---|
| Subjects | 230 |
| Sessions | 902 |
| T1-weighted scans | 444 sessions (447 runs)* |
| T2-weighted scans | 440 sessions (441 runs)* |
| FLAIR scans | 233 sessions (235 runs)* |
| BOLD fMRI (naming40 task) | 750 sessions (894 runs) |
| BOLD fMRI (resting state) | 498 sessions (508 runs) |
| Diffusion (DWI) | 613 sessions (2,089 runs) |
| Single-band reference | 88 sessions (322 runs) |
| Expert lesion masks | 228 |
*Sessions with multiple runs of the same structural modality now include all runs as a list.
- Source: OpenNeuro ds004884
- Paper: Gibson et al., Scientific Data 2024
- License: CC0 1.0 (Public Domain)
## Supported Tasks
- Lesion Segmentation: Expert-drawn lesion masks enable training/evaluation of stroke lesion segmentation models
- Aphasia Severity Prediction: WAB-AQ scores (0-100) provide continuous severity labels for regression tasks (a minimal example follows this list)
- Aphasia Type Classification: WAB-derived aphasia type labels (Broca's, Wernicke's, Anomic, etc.)
- Longitudinal Analysis: Multiple sessions per subject enable recovery trajectory modeling
- Diffusion Analysis: Full bval/bvec gradients enable tractography and diffusion modeling
- Task-based fMRI: Naming40 and resting-state runs separated for targeted analysis
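As one concrete example, the severity-prediction task can be framed as a regression from a session's lesion mask to its WAB-AQ score. The sketch below uses only columns documented in this card; the binarisation step and anything downstream of the collected pairs are illustrative assumptions, not part of the dataset tooling.

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("hugging-science/arc-aphasia-bids", split="train")
# Keep only the columns we need so the other NIfTI columns are never decoded
pairs = ds.select_columns(["lesion", "wab_aq"])

inputs, targets = [], []
for session in pairs:
    if session["lesion"] is None or session["wab_aq"] is None:
        continue  # skip sessions without a mask or a severity score
    mask = (session["lesion"].get_fdata() > 0).astype(np.float32)  # binarise (assumption)
    inputs.append(mask)
    targets.append(session["wab_aq"])

print(f"Collected {len(targets)} (lesion mask, WAB-AQ) pairs")
```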
## Languages
Clinical metadata and documentation are in English.
## Dataset Structure

### Data Instance

Each row represents a single scanning session (subject + timepoint):
```python
{
    "subject_id": "sub-M2001",
    "session_id": "ses-1",
    "t1w": [<nibabel.Nifti1Image>, ...],     # T1-weighted structural (list of runs)
    "t2w": [<nibabel.Nifti1Image>, ...],     # T2-weighted structural (list of runs)
    "t2w_acquisition": "space_2x",           # T2w sequence type
    "flair": [<nibabel.Nifti1Image>, ...],   # FLAIR structural (list of runs)
    "bold_naming40": [<Nifti1Image>, ...],   # Naming task fMRI runs
    "bold_rest": [<Nifti1Image>, ...],       # Resting state fMRI runs
    "dwi": [<Nifti1Image>, ...],             # Diffusion runs
    "dwi_bvals": ["0 1000 1000...", ...],    # b-values per run
    "dwi_bvecs": ["0 0 0\n1 0 0\n...", ...], # b-vectors per run
    "sbref": [<Nifti1Image>, ...],           # Single-band references
    "lesion": <nibabel.Nifti1Image>,         # Expert lesion mask
    "age_at_stroke": 58.0,
    "sex": "M",
    "race": "w",
    "wab_aq": 72.5,
    "wab_days": 180.0,
    "wab_type": "Anomic"
}
```
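Each NIfTI entry behaves like a regular `nibabel` image, so voxel data and the affine can be pulled out in the usual way (a small sketch; `session` stands for any row obtained as in the Usage section below):

```python
import numpy as np

t1_img = session["t1w"][0]      # nibabel.Nifti1Image
t1_data = t1_img.get_fdata()    # voxel intensities as a numpy array
affine = t1_img.affine          # 4x4 voxel-to-world transform
print(t1_data.shape, np.round(affine, 2))
```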
### Data Fields

| Field | Type | Description |
|---|---|---|
| `subject_id` | string | BIDS subject identifier (e.g., "sub-M2001") |
| `session_id` | string | BIDS session identifier (e.g., "ses-1") |
| `t1w` | Sequence[Nifti] | T1-weighted structural MRI runs |
| `t2w` | Sequence[Nifti] | T2-weighted structural MRI runs |
| `t2w_acquisition` | string | T2w acquisition type: `space_2x`, `space_no_accel`, `turbo_spin_echo` (nullable) |
| `flair` | Sequence[Nifti] | FLAIR structural MRI runs |
| `bold_naming40` | Sequence[Nifti] | BOLD fMRI runs for naming40 task |
| `bold_rest` | Sequence[Nifti] | BOLD fMRI runs for resting state |
| `dwi` | Sequence[Nifti] | Diffusion-weighted imaging runs |
| `dwi_bvals` | Sequence[string] | b-values for each DWI run (space-separated) |
| `dwi_bvecs` | Sequence[string] | b-vectors for each DWI run (3 lines, space-separated) |
| `sbref` | Sequence[Nifti] | Single-band reference images |
| `lesion` | Nifti | Expert-drawn lesion segmentation mask (nullable) |
| `age_at_stroke` | float32 | Subject age at stroke onset in years |
| `sex` | string | Biological sex ("M" or "F") |
| `race` | string | Self-reported race: "b" (Black), "w" (White), or null |
| `wab_aq` | float32 | Western Aphasia Battery Aphasia Quotient (0-100) |
| `wab_days` | float32 | Days since stroke when WAB was administered |
| `wab_type` | string | Aphasia type classification |
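The same schema can also be inspected programmatically through the `features` attribute exposed by `datasets` (a quick check, loading the dataset as in the Usage section):

```python
from datasets import load_dataset

ds = load_dataset("hugging-science/arc-aphasia-bids", split="train")
for name, feature in ds.features.items():
    print(f"{name}: {feature}")
```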
### Data Splits
| Split | Sessions | Description |
|---|---|---|
| train | 902 | All sessions (no predefined train/test split) |
Note: Users should implement their own train/validation/test splits, ensuring no subject overlap between splits for valid evaluation.
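One way to build subject-disjoint splits is to group sessions by `subject_id` before splitting. The sketch below uses scikit-learn's `GroupShuffleSplit` purely as an illustration (scikit-learn is not a dependency of this dataset); any group-aware splitter works the same way.

```python
import numpy as np
from datasets import load_dataset
from sklearn.model_selection import GroupShuffleSplit

ds = load_dataset("hugging-science/arc-aphasia-bids", split="train")
subjects = np.array(ds["subject_id"])  # one entry per session

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(np.zeros(len(subjects)), groups=subjects))

train_ds, test_ds = ds.select(train_idx), ds.select(test_idx)
# No subject appears in both splits
assert set(train_ds["subject_id"]).isdisjoint(test_ds["subject_id"])
```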
## Dataset Creation

### Curation Rationale
The ARC dataset was created to address the lack of large-scale, publicly available neuroimaging data for aphasia research. It enables:
- Development of automated lesion segmentation algorithms
- Machine learning models for aphasia severity prediction
- Studies of brain plasticity and language recovery
### Source Data
Data were acquired under studies approved by the Institutional Review Board at the University of South Carolina (per the OpenNeuro ds004884 dataset_description.json).
### Annotations

Expert-drawn lesion segmentation masks are provided in `derivatives/lesion_masks/`.
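Because the masks are ordinary NIfTI images, a derived quantity such as lesion volume can be estimated from the voxel count and the voxel size stored in the header (a minimal sketch; treating any non-zero voxel as lesion is an assumption):

```python
import numpy as np

def lesion_volume_ml(lesion_img):
    """Approximate lesion volume in millilitres from a lesion mask NIfTI."""
    data = lesion_img.get_fdata()
    voxel_mm3 = float(np.prod(lesion_img.header.get_zooms()[:3]))  # voxel size in mm^3
    n_lesion_voxels = int(np.count_nonzero(data > 0))              # assumes non-zero == lesion
    return n_lesion_voxels * voxel_mm3 / 1000.0

# usage: lesion_volume_ml(session["lesion"]) for any session with a mask
```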
### Personal and Sensitive Information

- Anonymized: the OpenNeuro ds004884 `dataset_description.json` states that the final dataset is fully anonymised.
## Considerations for Using the Data

### Social Impact
This dataset enables research into:
- Improved stroke rehabilitation through better outcome prediction
- Automated clinical tools for aphasia assessment
- Understanding of brain-language relationships
### Potential Biases
- Site: Data were acquired under University of South Carolina IRB approval (per OpenNeuro metadata)
- Age: Adult cohort (`age_at_stroke` ranges from 27 to 80 years in participants.tsv)
### Known Limitations

- Not all sessions have all modalities (check for `None`/empty lists; a quick availability check is sketched after this list)
- Lesion masks available for 228/230 subjects
- Longitudinal follow-up varies by subject (1-30 sessions)
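A per-session availability check only needs the lengths of the list-valued fields; the field names below come from the Data Fields table, and `session` is any row of the dataset (see Usage):

```python
multi_run_fields = ["t1w", "t2w", "flair", "bold_naming40", "bold_rest", "dwi", "sbref"]

available = {field: len(session[field]) for field in multi_run_fields}
available["lesion"] = session["lesion"] is not None
print(available)  # e.g. {'t1w': 1, 't2w': 1, ..., 'lesion': True}
```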
## Usage

```python
from datasets import load_dataset

ds = load_dataset("hugging-science/arc-aphasia-bids", split="train")

# Access a session
session = ds[0]
print(session["subject_id"])  # "sub-M2001"
print(session["t1w"][0])      # nibabel.Nifti1Image
print(session["wab_aq"])      # Aphasia severity score

# Access BOLD by task type
for run in session["bold_naming40"]:
    print(f"Naming40 run shape: {run.shape}")
for run in session["bold_rest"]:
    print(f"Resting state run shape: {run.shape}")

# Access DWI with gradient information
for i, (dwi_run, bval, bvec) in enumerate(zip(
    session["dwi"], session["dwi_bvals"], session["dwi_bvecs"]
)):
    print(f"DWI run {i + 1}: shape={dwi_run.shape}")
    print(f"  b-values: {bval[:50]}...")
    print(f"  b-vectors: {bvec[:50]}...")

# Filter by T2w acquisition type (for paper replication)
space_only = ds.filter(
    lambda x: (
        x["lesion"] is not None
        and len(x["t2w"]) > 0
        and x["t2w_acquisition"] in ("space_2x", "space_no_accel")
    )
)
# Returns 223 SPACE samples (115 space_2x + 108 space_no_accel)

# Clinical metadata analysis
import pandas as pd

# Select only scalar columns to avoid loading NIfTI columns into RAM
df = ds.select_columns([
    "subject_id", "session_id", "age_at_stroke",
    "sex", "race", "wab_aq", "wab_days", "wab_type"
]).to_pandas()
print(df.describe())
```
## Technical Notes

### Multi-Run Modalities

Functional and diffusion modalities support multiple runs per session:

- Empty list `[]` = no data for this session
- List with items = all runs for this session, sorted by filename
### DWI Gradient Files

Each DWI run has aligned gradient information:

- `dwi_bvals`: Space-separated b-values (e.g., "0 1000 1000 1000...")
- `dwi_bvecs`: Three lines of space-separated vectors (x, y, z directions)
These are essential for diffusion tensor imaging (DTI) and tractography analysis.
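Because both fields are stored as plain strings, they need to be parsed into arrays before feeding them to a diffusion toolkit. A minimal parsing sketch using only NumPy (no diffusion library is assumed):

```python
import numpy as np

def parse_gradients(bval_str, bvec_str):
    """Parse one DWI run's string-encoded gradients into numpy arrays."""
    bvals = np.array(bval_str.split(), dtype=float)   # shape: (n_volumes,)
    bvecs = np.array(
        [line.split() for line in bvec_str.strip().splitlines()], dtype=float
    )                                                  # one row per stored line (3 x n_volumes)
    return bvals, bvecs

# usage, for the first DWI run of a session:
# bvals, bvecs = parse_gradients(session["dwi_bvals"][0], session["dwi_bvecs"][0])
```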
### Memory Considerations

NIfTI files are loaded on-demand. For large-scale processing:

```python
for session in ds:
    process(session)
    # Data is garbage collected after each iteration
```
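If even the on-demand access pattern above is too heavy, `datasets` also offers streaming mode, which iterates over sessions without materialising the whole dataset locally (a sketch; whether streaming works well with this dataset's NIfTI feature should be verified for your `datasets` version):

```python
from datasets import load_dataset

# Stream sessions one at a time instead of downloading everything up front
streamed = load_dataset("hugging-science/arc-aphasia-bids", split="train", streaming=True)
for session in streamed:
    print(session["subject_id"], session["wab_aq"])
    break  # remove the break for real processing
```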
### Original BIDS Source
This dataset is derived from OpenNeuro ds004884. The original BIDS structure is preserved in the column naming and organization.
## Additional Information

### Dataset Curators
- Original Dataset: Gibson et al. (University of South Carolina)
- HuggingFace Conversion: The-Obstacle-Is-The-Way
### Licensing
This dataset is released under CC0 1.0 Universal (Public Domain). You can copy, modify, distribute, and perform the work, even for commercial purposes, all without asking permission.
### Citation

```bibtex
@article{gibson2024arc,
  title={The Aphasia Recovery Cohort, an open-source chronic stroke repository},
  author={Gibson, Makayla and Newman-Norlund, Roger and Bonilha, Leonardo and Fridriksson, Julius and Hickok, Gregory and Hillis, Argye E and den Ouden, Dirk-Bart and Rorden, Christopher},
  journal={Scientific Data},
  volume={11},
  pages={981},
  year={2024},
  publisher={Nature Publishing Group},
  doi={10.1038/s41597-024-03819-7}
}
```
### Contributions

Thanks to @The-Obstacle-Is-The-Way for converting this dataset to HuggingFace format with native `Nifti()` feature support.
## Changelog

### v4 (December 2025)

- BREAKING: `t1w`, `t2w`, `flair` changed from `Nifti()` to `Sequence(Nifti())` for full data fidelity
- FIX: 6 sessions with multiple structural runs now include all files (previously set to `None`)
- NOTE: Most sessions have exactly 1 structural scan; access via `session["t2w"][0]`
### v3 (December 2025)

- RETRACTED: Attempted fix for 222 → 223 SPACE samples was based on an incorrect diagnosis
- NOTE: The missing sample is caused by a schema design flaw (see v4 fix above), not upload issues
### v2 (December 2025)

- BREAKING: `bold` column split into `bold_naming40` and `bold_rest` for task-specific analysis
- NEW: `dwi_bvals` and `dwi_bvecs` columns for diffusion gradient information
- NEW: `race` column from participants.tsv
- NEW: `wab_days` column (days since stroke when WAB administered)
- NEW: `t2w_acquisition` column for T2w sequence type filtering
### v1 (December 2025)

- Initial release with 13 columns