At the time of writing, the 🤗 transformers library doesn't have a Llama implementation for token classification (although there is an open PR).

This model is based on an implementation by community member @KoichiYasuoka.

  • Base Model: unsloth/llama-2-7b-bnb-4bit
  • LoRA adapter with rank 8 and alpha 32; the other adapter settings can be found in adapter_config.json
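The rank/alpha pair above determines how strongly the adapter update is weighted. A minimal NumPy sketch of the standard LoRA update rule, using toy dimensions for illustration (the real Llama-2 projection shapes are much larger, and the authoritative settings live in adapter_config.json):

```python
import numpy as np

rank, alpha = 8, 32           # values from the model card
d_in, d_out = 64, 64          # toy dimensions, for illustration only

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))   # frozen base weight
A = rng.normal(size=(rank, d_in))    # trainable LoRA down-projection
B = np.zeros((d_out, rank))          # trainable LoRA up-projection (zero-init)

scaling = alpha / rank               # LoRA scales the low-rank update by alpha / rank
W_effective = W + scaling * (B @ A)  # weight actually applied at inference
# With B zero-initialised, W_effective equals W before any training step.
```

Because B starts at zero, the adapter is a no-op at initialisation and only the trained low-rank factors (rank 8 here) change the model's behaviour.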

This model was only trained for a single epoch; however, a notebook is available for those who want to train on other datasets or for more epochs.

