Hyformer
Hyformer is a joint transformer-based model that unifies a generative decoder with a predictive encoder in a single set of weights. Depending on the task, Hyformer applies either a causal attention mask (for generation, outputting token probabilities) or a bidirectional attention mask (for prediction, outputting property values).
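To make the mode switch concrete, here is a minimal PyTorch sketch of the idea: one shared backbone with two heads, where the attention mask is chosen per task. This is not the authors' implementation; the class name `JointTransformer`, the layer sizes, and the mean-pooled prediction head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class JointTransformer(nn.Module):
    """Toy joint model: one backbone, two heads, mask chosen per task."""
    def __init__(self, vocab_size=128, d_model=64, n_heads=4, n_layers=2, n_props=1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)   # token logits (generation)
        self.pred_head = nn.Linear(d_model, n_props)    # property values (prediction)

    def forward(self, tokens, task="generation"):
        x = self.embed(tokens)
        if task == "generation":
            # Causal mask: each position attends only to itself and earlier positions.
            causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
            h = self.backbone(x, mask=causal)
            return self.lm_head(h)                      # next-token logits
        # Bidirectional: no mask, full attention over the whole sequence.
        h = self.backbone(x)
        return self.pred_head(h.mean(dim=1))            # pooled property estimate

model = JointTransformer()
tokens = torch.randint(0, 128, (2, 16))                 # toy batch of token ids
print(model(tokens, task="generation").shape)           # torch.Size([2, 16, 128])
print(model(tokens, task="prediction").shape)           # torch.Size([2, 1])
```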
Model Details
- Paper: Synergistic Benefits of Joint Molecule Generation and Property Prediction
- Authors: Adam Izdebski, Jan Olszewski, Pankhil Gawade, Krzysztof Koras, Serra Korkmaz, Valentin Rauscher, Jakub M. Tomczak, Ewa Szczurek
- License: BSD 3-Clause
- Repository: https://github.com/szczurek-lab/hyformer
Model checkpoints
- Hyformer-8M: Trained on the GuacaMol dataset (Brown et al., 2019)
- Hyformer-50M: Trained on 19M molecules from ZINC, ChEMBL, and other databases of purchasable compounds (Zhou et al., 2023)
Gated Access
This model is available under gated access. To request access, use the gated access request form on the model's Hugging Face page.
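Once access has been granted, the weights can be fetched with `huggingface_hub`, for example as sketched below. The `repo_id` is an assumed placeholder; substitute the id shown on the model page.

```python
# Hedged sketch: downloading gated weights after access has been granted.
# The repo_id below is an assumed placeholder, not a confirmed Hub id.
from huggingface_hub import login, snapshot_download

login()  # prompts for a Hugging Face access token from an approved account
local_dir = snapshot_download(repo_id="szczurek-lab/hyformer")  # assumed repo id
print(f"Checkpoint files downloaded to: {local_dir}")
```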
Citation
If you use this model, please cite:
@misc{izdebski2025synergisticbenefitsjointmolecule,
  title={Synergistic Benefits of Joint Molecule Generation and Property Prediction},
  author={Adam Izdebski and Jan Olszewski and Pankhil Gawade and Krzysztof Koras and Serra Korkmaz and Valentin Rauscher and Jakub M. Tomczak and Ewa Szczurek},
  year={2025},
  eprint={2504.16559},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2504.16559},
}
References
- Brown, Nathan, et al. "GuacaMol: Benchmarking Models for de Novo Molecular Design." Journal of Chemical Information and Modeling, 2019.
- Zhou, Gengmo, et al. "Uni-Mol: A Universal 3D Molecular Representation Learning Framework." ICLR, 2023.