---
language: en
license: apache-2.0
library_name: peft
pipeline_tag: text-generation
tags:
  - llama
  - construction
  - building-regulations
  - lora
  - custom construction industry dataset
---

# Llama-3.1-8B-Construction

This is a LoRA fine-tune of Llama 3.1 8B, distributed as a PEFT adapter and optimized for construction industry and building regulations knowledge.

## Model Details

- **Base Model:** meta-llama/Llama-3.1-8B
- **Fine-tuning Method:** LoRA (Low-Rank Adaptation); see the config-inspection sketch after this list
- **Training Data:** Custom dataset covering construction industry standards, building regulations, and safety requirements
- **Usage:** Designed to answer questions about building codes, construction best practices, and regulatory compliance
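
Because the repo ships a LoRA adapter rather than full model weights, you can inspect its hyperparameters (rank, alpha, target modules) before downloading the base model. A minimal sketch; it assumes the adapter config on the Hub is a standard peft `LoraConfig`, whose fields are shown below:

```python
from peft import PeftConfig

# Fetches only the small adapter_config.json, not the model weights
config = PeftConfig.from_pretrained("SamuelJaja/llama-3.1-8b-construction")

print(config.base_model_name_or_path)  # meta-llama/Llama-3.1-8B
print(config.r, config.lora_alpha)     # LoRA rank and scaling factor
print(config.target_modules)           # which weight matrices carry adapter deltas
```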

## Example Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel, PeftConfig

# Load the adapter configuration to find the base model
config = PeftConfig.from_pretrained("SamuelJaja/llama-3.1-8b-construction")

# Load the base model with 8-bit quantization (requires bitsandbytes)
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    quantization_config=bnb_config,
    device_map="auto"
)

# Apply the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, "SamuelJaja/llama-3.1-8b-construction")

# Load the tokenizer and set a pad token for generation
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
tokenizer.pad_token = tokenizer.eos_token

# Generate text
prompt = "[INST] What are the main requirements for fire safety in commercial buildings? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    input_ids=inputs.input_ids,
    attention_mask=inputs.attention_mask,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.9,
    do_sample=True
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
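
If you prefer a standalone checkpoint with no peft dependency at inference time, the adapter can be folded into the base weights with `merge_and_unload()`. A minimal sketch, not part of this card's tested workflow: merging is not supported on an 8-bit-quantized base, so this loads the base model in float16 instead, and the output directory name is just an example:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel
import torch

# Load the base model in full precision; LoRA deltas cannot be
# merged into 8-bit quantized weights.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Apply the adapter, then fold its low-rank updates into the base weights
model = PeftModel.from_pretrained(base, "SamuelJaja/llama-3.1-8b-construction")
merged = model.merge_and_unload()

# Save a plain transformers checkpoint (example output path)
merged.save_pretrained("llama-3.1-8b-construction-merged")
```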