Hierarchy Transformers (HiTs)

AI & ML interests

This collection includes language models trained on hierarchies using hyperbolic losses. The resulting HiT models yield entity embeddings that are hierarchically organised in hyperbolic space.

Hierarchy Transformer

Hierarchy Transformer (HiT) is a framework that enables transformer encoder-based language models (LMs) to learn hierarchical structures in hyperbolic space.
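
Concretely, entity embeddings live in a Poincaré ball, where distances grow rapidly towards the boundary, so parent entities can sit closer to the origin while their descendants fan out towards the boundary. As a minimal illustration of the underlying geometry (not the library's internal code), the Poincaré distance between two embeddings can be computed as follows:

import torch

def poincare_dist(u: torch.Tensor, v: torch.Tensor, c: float = 1.0) -> torch.Tensor:
    # Geodesic distance in a Poincare ball of curvature -c
    sq_norm = lambda x: x.pow(2).sum(dim=-1)
    num = 2 * c * sq_norm(u - v)
    den = (1 - c * sq_norm(u)) * (1 - c * sq_norm(v))
    return torch.acosh(1 + num / den) / c**0.5

HiT's training objectives, a hyperbolic clustering loss and a hyperbolic centripetal loss, are defined in terms of this distance: they pull related entities together and push parent entities closer to the origin than their children.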

Get Started

Install hierarchy_transformers (see our repository) through pip (pip install hierarchy_transformers) or from the GitHub source.

Use the following code to get started with HiTs:

from hierarchy_transformers import HierarchyTransformer

# Load a HiT model trained on the WordNet noun hierarchy
model = HierarchyTransformer.from_pretrained('Hierarchy-Transformers/HiT-MiniLM-L12-WordNetNoun')

# Entity names to be encoded
entity_names = ["computer", "personal computer", "fruit", "berry"]

# Get the entity embeddings
entity_embeddings = model.encode(entity_names)
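
With the embeddings in hand, subsumption between entities can be probed by combining the hyperbolic distance between a candidate child-parent pair with their hyperbolic norms (distances from the manifold's origin). The sketch below assumes the model exposes its Poincaré ball as model.manifold with geoopt-style dist and dist0 methods; centri_score_weight is a hyperparameter that, like the decision threshold on the scores, is tuned on a validation set:

# Compare ("personal computer", "computer") and ("berry", "fruit")
child_embeddings = model.encode(["personal computer", "berry"], convert_to_tensor=True)
parent_embeddings = model.encode(["computer", "fruit"], convert_to_tensor=True)

# Hyperbolic distance between each pair, and each embedding's distance from the origin
dists = model.manifold.dist(child_embeddings, parent_embeddings)
child_norms = model.manifold.dist0(child_embeddings)
parent_norms = model.manifold.dist0(parent_embeddings)

# Empirical subsumption score: a smaller child-parent distance and a parent
# closer to the origin both count as evidence for subsumption
centri_score_weight = 1.0  # illustrative value; tune on validation data
subsumption_scores = -(dists + centri_score_weight * (parent_norms - child_norms))

Higher scores indicate a higher likelihood that the first entity is subsumed by the second.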

Citation

Yuan He, Zhangdie Yuan, Jiaoyan Chen, and Ian Horrocks. Language Models as Hierarchy Encoders. In Advances in Neural Information Processing Systems 37 (NeurIPS 2024).

@inproceedings{NEURIPS2024_1a970a3e,
 author = {He, Yuan and Yuan, Moy and Chen, Jiaoyan and Horrocks, Ian},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {A. Globerson and L. Mackey and D. Belgrave and A. Fan and U. Paquet and J. Tomczak and C. Zhang},
 pages = {14690--14711},
 publisher = {Curran Associates, Inc.},
 title = {Language Models as Hierarchy Encoders},
 url = {https://proceedings.neurips.cc/paper_files/paper/2024/file/1a970a3e62ac31c76ec3cea3a9f68fdf-Paper-Conference.pdf},
 volume = {37},
 year = {2024}
}