OntoTune: Ontology-Driven Self-training for Aligning Large Language Models (WWW 2025)
Introduction
- These are the model parameters of OntoTune$_{dpo}$, fine-tuned from Llama3 8B-Instruct (a minimal loading sketch follows this list).
- This work was supported by Ant Group and the Zhejiang University - Ant Group Joint Laboratory of Knowledge Graph.
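Since the checkpoint is a fine-tune of Llama3 8B-Instruct, it can presumably be loaded with the `transformers` library like any Llama3-Instruct model. The sketch below is an illustrative assumption, not part of the original card: the repository id is a placeholder, and the medical prompt is only an example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with this repository's actual model id.
model_id = "<this-repo-id>"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bf16 support
    device_map="auto",
)

# Llama3-Instruct-style chat prompt; the fine-tuned model is used like its base model.
messages = [{"role": "user", "content": "What is hypertension?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```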
Citation
Please consider citing this paper if you find our work useful.
@inproceedings{DBLP:conf/www/LiuGWZBSC025,
title = {OntoTune: Ontology-Driven Self-training for Aligning Large Language Models},
author = {Zhiqiang Liu and
Chengtao Gan and
Junjie Wang and
Yichi Zhang and
Zhongpu Bo and
Mengshu Sun and
Huajun Chen and
Wen Zhang},
editor = {Guodong Long and
Michael Blumenstein and
Yi Chang and
Liane Lewin{-}Eytan and
Zi Helen Huang and
Elad Yom{-}Tov},
booktitle = {Proceedings of the {ACM} on Web Conference 2025, {WWW} 2025, Sydney, NSW, Australia, 28 April 2025 - 2 May 2025},
pages = {119--133},
publisher = {{ACM}},
year = {2025},
url = {https://doi.org/10.1145/3696410.3714816},
doi = {10.1145/3696410.3714816},
timestamp = {Wed, 23 Apr 2025 16:35:50 +0200},
biburl = {https://dblp.org/rec/conf/www/LiuGWZBSC025.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}