Finetune script?

#8
by Daemontatox - opened

Amazing work! Is it possible to share the finetuning script for this model series?

Technology Innovation Institute org
•
edited 27 days ago

Hello @Daemontatox,

Here is a simple example using LoRA/QLoRA adapters to finetune our 0.5B base model:

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer
from peft import LoraConfig, get_peft_model, TaskType

model_name = "tiiuae/Falcon-H1-0.5B-Base"
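# flip to True to enable the 4-bit QLoRA path below; False runs plain LoRA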
use_quantization = False

bnb_config = None
if use_quantization:
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_use_double_quant=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
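        # leave the Mamba out_proj unquantized (see the note on the Mamba layers below)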
        llm_int8_skip_modules=["out_proj"],
    )

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map='auto',
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
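# set a pad token (reuse EOS) so batching works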
tokenizer.pad_token = tokenizer.eos_token
tokenizer.chat_template = "{{bos_token}}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant' }}{% endif %}"
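# the template renders ChatML-style turns (after the BOS token), e.g.:
# <|im_start|>user
# Hello<|im_end|>
# <|im_start|>assistant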

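# adapt only the attention and MLP projections; the Mamba conv1d / out_proj
# layers are deliberately left out (see the note below the script)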
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    bias="none",
    task_type=TaskType.CAUSAL_LM,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

model = get_peft_model(model, peft_config)
model.print_trainable_parameters()

dataset = load_dataset('Vikhrmodels/It_hard_4.1', split='train')

def format_data(example):
    conversation = [{'role': item['role'], 'content': item['content']}
                    for item in example['conversation']]
    example['text'] = tokenizer.apply_chat_template(conversation, tokenize=False)
    return example

dataset = dataset.map(format_data)
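# keep a small subset so the example runs quickly; drop this line for a full run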
dataset = dataset.select(range(100))
train_test = dataset.train_test_split(test_size=0.1)

training_args = SFTConfig(
    num_train_epochs=1,
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    output_dir="./lora_output",
    logging_steps=10,
    save_steps=100,
    bf16=True,
    max_length=1024,
    remove_unused_columns=True,
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=train_test['train'],
    eval_dataset=train_test['test'],
    processing_class=tokenizer,
    peft_config=peft_config
)

trainer.train()

model.save_pretrained("./lora_model")
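To try the adapter after training, here is a minimal inference sketch (the prompt is just a placeholder; AutoPeftModelForCausalLM reloads the base model together with the saved adapter):

from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained(
    "./lora_model",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))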

When using LoRA, pay attention not to include the conv1d and Mamba out_proj layers in the target modules; we raised a PR in PEFT to validate this condition.
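If you want to double-check which module names that warning covers, a quick inspection of the model loaded above (the name filter is an assumption based on common Mamba layer names) prints the layers to keep out of target_modules:

for name, _ in model.named_modules():
    if "conv1d" in name or "out_proj" in name:
        print("exclude from LoRA:", name)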

A note on the environment:
transformers should be installed from source: pip install git+https://github.com/huggingface/transformers.git
mamba-ssm / causal-conv1d should be installed from the latest PyPI releases: pip install mamba-ssm causal-conv1d --no-build-isolation
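A quick import check to confirm the setup (these are the import names of the PyPI packages):

import transformers
import mamba_ssm       # optimized Mamba kernels
import causal_conv1d   # fused causal conv1d kernels

print(transformers.__version__)  # a .dev suffix indicates a source install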

@DhiyaEddine many thanks!

Daemontatox changed discussion status to closed
