---
version: 1.0.0
license: mit
task_categories:
  - tabular-classification
  - tabular-regression
language:
  - en
tags:
  - bioactivity
  - therapeutic science
pretty_name: Therapeutics Data Commons
size_categories:
  - 10M<n<100M
dataset_summary: >-
  Therapeutics Data Commons (TDC) provides curated, AI-ready datasets, machine
  learning tasks, and benchmarks with meaningful data splits, supporting the
  development and evaluation of AI methods for therapeutic discovery. TDC tasks
  are categorized into three main problem types: (1) single-instance prediction,
  (2) multi-instance learning, and (3) molecule generation.
citation: |-
  @article{Huang2021tdc,
    title={Therapeutics Data Commons: Machine Learning Datasets and Tasks for Drug Discovery and Development},
    author={Huang, Kexin and Fu, Tianfan and Gao, Wenhao and Zhao, Yue and Roohani, Yusuf and Leskovec, Jure and Coley, Connor W and Xiao, Cao and Sun, Jimeng and Zitnik, Marinka},
    journal={Proceedings of Neural Information Processing Systems, NeurIPS Datasets and Benchmarks},
    year={2021}
  }
  @article{Huang2022artificial,
    title={Artificial intelligence foundation for therapeutic science},
    author={Huang, Kexin and Fu, Tianfan and Gao, Wenhao and Zhao, Yue and Roohani, Yusuf and Leskovec, Jure and Coley, Connor W and Xiao, Cao and Sun, Jimeng and Zitnik, Marinka},
    journal={Nature Chemical Biology},
    year={2022}
  }
  @article{velez-arce2024signals,
    title={Signals in the Cells: Multimodal and Contextualized Machine Learning Foundations for Therapeutics},
    author={Velez-Arce, Alejandro and Lin, Xiang and Huang, Kexin and Li, Michelle M and Gao, Wenhao and Pentelute, Bradley and Fu, Tianfan and Kellis, Manolis and Zitnik, Marinka},
    booktitle={NeurIPS 2024 Workshop on AI for New Drug Modalities},
    year={2024}
  }
config_names:
  - ADME
  - Tox
  - HTS
  - CRISPROutcome
  - Develop
  - Epitope
  - QM
configs:
  - config_name: ADME
    data_files: single_instance_prediction_datasets/ADME.parquet
    columns:
      - Task
      - Drug_ID
      - SMILES
      - 'Y'
      - split
  - config_name: Tox
    data_files: single_instance_prediction_datasets/Tox.parquet
    columns:
      - Task
      - Drug_ID
      - SMILES
      - 'Y'
      - split
  - config_name: HTS
    data_files: single_instance_prediction_datasets/HTS.parquet
    columns:
      - Task
      - Drug_ID
      - SMILES
      - 'Y'
      - split
  - config_name: CRISPROutcome
    data_files: single_instance_prediction_datasets/CRISPROutcome.parquet
    columns:
      - Task
      - GuideSeq_ID
      - GuideSeq
      - 'Y'
      - split
  - config_name: Develop
    data_files: single_instance_prediction_datasets/Develop.parquet
    columns:
      - Task
      - Antibody_ID
      - heavy_chain
      - light_chain
      - 'Y'
      - split
  - config_name: Epitope
    data_files: single_instance_prediction_datasets/Epitope.parquet
    columns:
      - Task
      - Antigen_ID
      - Antigen
      - 'Y'
      - split
  - config_name: QM
    data_files: single_instance_prediction_datasets/QM.parquet
    columns:
      - Task
      - Drug_ID
      - Atom
      - x_coordinate
      - y_coordinate
      - z_coordinate
      - 'Y'
      - split
---

Therapeutics Data Commons

Therapeutics Data Commons (TDC) provides a publicly available collection of 22 machine learning tasks for therapeutic discovery. This Hugging Face repository mirrors the single-instance prediction tasks of TDC, encompassing a total of 46,265,659 data points.

The parquet files uploaded to our Hugging Face repository have been sanitized and curated from the original datasets.

  • Each parquet file corresponds to a separate category (e.g., ADME) and contains multiple tasks (e.g., solubility, permeability).
  • Each file follows its own schema (i.e., different column names) and has been preprocessed differently depending on the category.
  • Because the ADME, Tox, and HTS datasets contain SMILES strings, we sanitized (standardized) them. Sanitization includes removing salts, converting each SMILES string to a canonical form, etc. Two invalid SMILES strings from the ADME dataset and 59 from the HTS dataset were removed during this process.
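The sanitization described above can be sketched with RDKit. This is an illustrative reimplementation, not the maintainers' exact pipeline; the helper name `sanitize_smiles` is ours.

```python
from rdkit import Chem
from rdkit.Chem.SaltRemover import SaltRemover


def sanitize_smiles(smiles: str):
    """Return a canonical, salt-stripped SMILES, or None if the input is invalid."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None  # invalid SMILES are dropped, as in the curation above
    mol = SaltRemover().StripMol(mol)  # remove common counter-ions (e.g. Na+, Cl-)
    return Chem.MolToSmiles(mol)  # RDKit canonical form


print(sanitize_smiles("C1=CC=CC=C1"))     # benzene -> "c1ccccc1"
print(sanitize_smiles("CC(=O)O.[Na+]"))   # sodium counter-ion stripped
print(sanitize_smiles("not_a_smiles"))    # -> None
```

Invalid inputs return `None` rather than raising, which mirrors how the 2 + 59 unparseable strings were simply filtered out of the ADME and HTS files.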

A summary file (i.e., single_instance_prediction_summary.csv) is also uploaded alongside the datasets.

Quick Usage

Load a dataset in Python

Each subset can be loaded into Python using the Hugging Face datasets library. First, install the datasets library from the command line:

$ pip install datasets

then, from within Python, import the datasets library:

>>> import datasets

Now load one of the 'TDC' datasets, e.g.,

>>> dataset = datasets.load_dataset("maomlab/TDC", name="ADME")

You can set name to any of the config names listed above (e.g., "Tox" or "QM").
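Each config mixes several tasks and carries a precomputed split column, so a single loaded table can be sliced into per-task train/valid/test subsets. A minimal pandas sketch, using a toy stand-in frame (the task names and row values below are illustrative; a real frame would come from the load_dataset call above followed by .to_pandas()):

```python
import pandas as pd

# Toy stand-in for one config's table; real data would come from e.g.
# datasets.load_dataset("maomlab/TDC", name="ADME")["train"].to_pandas()
df = pd.DataFrame({
    "Task":    ["Solubility_AqSolDB"] * 4 + ["Caco2_Wang"] * 2,
    "Drug_ID": ["a", "b", "c", "d", "e", "f"],
    "SMILES":  ["CCO", "CCN", "CCC", "CO", "CN", "C"],
    "Y":       [0.1, 0.2, 0.3, 0.4, 0.5, 0.6],
    "split":   ["train", "train", "test", "valid", "train", "test"],
})

# Select one task, then slice it by the precomputed split column.
task = df[df["Task"] == "Solubility_AqSolDB"]
train = task[task["split"] == "train"]
test = task[task["split"] == "test"]

print(len(train), len(test))  # 2 1
```

Filtering on Task first matters because every parquet file bundles multiple tasks into one table; the split column then reproduces the meaningful data splits that TDC provides for benchmarking.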