---
license: apache-2.0
datasets:
- HiTZ/GuideX_pre-training_data
- ACE05
- bc5cdr
- conll2003
- ncbi_disease
- conll2012_ontonotesv5
- rams
- tacred
- wnut_17
language:
- en
metrics:
- f1
library_name: transformers
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.1-8B
tags:
- code
- text-generation-inference
- Information Extraction
- IE
- Named Entity Recognition
- Event Extraction
- Relation Extraction
- LLaMA
---


# GuideX: Guided Synthetic Data Generation for Zero-Shot Information Extraction


**Llama-3.1-GuideX-8B** is an 8-billion-parameter language model fine-tuned for high-performance zero-shot Information Extraction (IE). The model is trained to follow detailed annotation guidelines provided as Python dataclasses, allowing it to adapt to new domains and schemas on the fly without requiring task-specific examples.

This model achieves state-of-the-art performance on zero-shot Named Entity Recognition (NER) by first training on `GuideX`, a large-scale synthetic dataset with executable guidelines, and then fine-tuning on a collection of gold-standard IE datasets.

- 💻 **Project Page:** [https://neilus03.github.io/guidex.com/](https://neilus03.github.io/guidex.com/)
- 📒 **Code:** [GuideX codebase](https://github.com/Neilus03/GUIDEX)
- 📖 **Paper:** [GuideX: Guided Synthetic Data Generation for Zero-Shot Information Extraction](https://arxiv.org/abs/2506.00649)
- 🐕 **GuideX Collection in the 🤗HuggingFace Hub:** [GuideX Collection](https://huggingface.co/collections/neildlf/guidex-6842ef478e8d9bb0a00c844d)
- 🚀 **Example Jupyter Notebooks:** [GuideX Notebooks](https://github.com/Neilus03/GUIDEX/tree/dev-neil/notebooks)

## Model Description

- **Developed by:** Neil De La Fuente, Oscar Sainz, Iker García-Ferrero, Eneko Agirre
- **Institution:** HiTZ Basque Center for Language Technology - Ixa NLP Group, University of the Basque Country (UPV/EHU), Technical University of Munich (TUM)
- **Model type:** Decoder-only Transformer (Text Generation)
- **Language(s):** English
- **License:** Llama 3.1 Community License
- **Finetuned from model:** `meta-llama/Llama-3.1-8B`

## Schema definition and inference example

The labels are represented as Python classes, and the guidelines or instructions are introduced as docstrings. The model starts generating after the `result = [` line.

```Python
# Entity definitions

@dataclass
class Launcher(Template):
    """Refers to a vehicle designed primarily to transport payloads from the Earth's surface to space.
    Launchers can carry various payloads, including satellites, crewed spacecraft, and cargo, into
    various orbits or even beyond Earth's orbit. They are usually multi-stage vehicles that use
    rocket engines for propulsion."""

    mention: str
    """
    The name of the launcher vehicle.
    Such as: "Saturn V", "Atlas V", "Soyuz", "Ariane 5"
    """
    space_company: str  # The company that operates the launcher. Such as: "Blue origin", "ESA", "Boeing", "ISRO", "Northrop Grumman", "Arianespace"
    crew: List[str]  # Names of the crew members boarding the Launcher. Such as: "Neil Armstrong", "Michael Collins", "Buzz Aldrin"


@dataclass
class Mission(Template):
    """Any planned or accomplished journey beyond Earth's atmosphere with specific objectives,
    either crewed or uncrewed. It includes missions to satellites, the International Space
    Station (ISS), other celestial bodies, and deep space."""

    mention: str
    """
    The name of the mission.
    Such as: "Apollo 11", "Artemis", "Mercury"
    """
    date: str  # The start date of the mission
    departure: str  # The place from which the vehicle will be launched. Such as: "Florida", "Houston", "French Guiana"
    destination: str  # The place or planet to which the launcher will be sent. Such as "Moon", "low-orbit", "Saturn"


# This is the text to analyze
text = (
    "The Ares 3 mission to Mars is scheduled for 2032. The Starship rocket built by SpaceX will take off from Boca Chica,"
    "carrying the astronauts Max Rutherford, Elena Soto, and Jake Martinez."
)

# The annotation instances that take place in the text above are listed here
result = [
    Mission(mention='Ares 3', date='2032', departure='Boca Chica', destination='Mars'),
    Launcher(mention='Starship', space_company='SpaceX', crew=['Max Rutherford', 'Elena Soto', 'Jake Martinez']),
]
```

## How to Get Started with the Model

Please read our [🚀 Example Jupyter Notebooks](https://github.com/Neilus03/GUIDEX/tree/dev-neil/notebooks) to get started with GuideX.

The best way to load the model is using our custom `load_model` function.
However, you can also load it using the `AutoModelForCausalLM` class.

**Important**: Our flash attention implementation has small numerical differences compared to the attention implementation in Huggingface. You must use the flag `trust_remote_code=True` or you will get inferior results. Flash attention requires an available CUDA GPU. Running GuideX pre-trained models on a CPU is not supported. We plan to address this in future releases.

First, install flash attention 2:

```bash
pip install flash-attn --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary
```

Then you can load the model using:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("HiTZ/Llama-3.1-GuideX-8B")
model = AutoModelForCausalLM.from_pretrained(
    "HiTZ/Llama-3.1-GuideX-8B",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)
model.to("cuda")
```

Read our [🚀 Example Jupyter Notebooks](https://github.com/hitz-zentroa/GoLLIE/tree/main/notebooks) to learn how to easily define guidelines, generate model inputs and parse the output!

## Citation

```bibtex
@misc{delafuente2025guidexguidedsyntheticdata,
    title={GuideX: Guided Synthetic Data Generation for Zero-Shot Information Extraction},
    author={Neil De La Fuente and Oscar Sainz and Iker García-Ferrero and Eneko Agirre},
    year={2025},
    eprint={2506.00649},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2506.00649},
}
```
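Because the model completes the prompt after `result = [`, its output is itself valid Python and can be parsed back into dataclass instances by evaluating it in a namespace that exposes only the schema classes. The sketch below illustrates this idea with a simulated completion; the `Template` base class and the parsing code here are illustrative stand-ins for the utilities provided in the example notebooks, not the official API:

```python
from dataclasses import dataclass


@dataclass
class Template:
    """Stand-in base class; the GuideX notebooks provide the real one."""


@dataclass
class Mission(Template):
    mention: str
    date: str
    departure: str
    destination: str


# A completion as the model might produce it after the `result = [` prefix
completion = (
    "Mission(mention='Ares 3', date='2032', "
    "departure='Boca Chica', destination='Mars')]"
)

# Re-attach the prefix and evaluate in a namespace that only exposes the
# schema classes. Note that eval on untrusted model output is unsafe;
# restrict the namespace or use ast-based parsing in real pipelines.
result = eval("[" + completion, {"Mission": Mission})
print(result[0].destination)  # prints "Mars"
```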