Llama-3.1-8B-Instruct LoRA Fine-tuned on Indian Accounting Standards (Ind AS)

This model is a LoRA (Low-Rank Adaptation) fine-tuned version of meta-llama/Llama-3.1-8B-Instruct, specifically trained on Indian Accounting Standards (Ind AS) data.

Model Details

  • Base Model: meta-llama/Llama-3.1-8B-Instruct
  • Fine-tuning Method: LoRA (Low-Rank Adaptation)
  • Training Data: 269 high-quality examples covering Indian Accounting Standards
  • Domain: Finance and Accounting (India-specific)
  • Language: English

Training Details

  • Training Epochs: 3
  • LoRA Rank: 64
  • LoRA Alpha: 16
  • Learning Rate: 2e-4
  • Batch Size: 8 (effective)
  • Max Sequence Length: 2048
  • Quantization: 4-bit (nf4)
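The hyperparameters above can be expressed as a PEFT/bitsandbytes configuration. This is a minimal sketch: the rank, alpha, and nf4 settings come from the card, while `lora_dropout` and `target_modules` are assumptions, since the card does not state which projections were adapted.

```python
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization for the frozen base weights (from the card)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="bfloat16",
)

# LoRA hyperparameters from the card; dropout and target_modules are
# illustrative assumptions, not stated on the card.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```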

Use Cases

This model is designed to answer questions about:

  • Indian Accounting Standards (Ind AS)
  • Financial statement presentation
  • Balance sheet preparation
  • Profit and loss statements
  • Cash flow statements
  • Accounting policies and disclosures
  • Going concern assessments
  • Asset and liability classification

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model (requires Meta approval)
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

# Load LoRA weights
model = PeftModel.from_pretrained(base_model, "your-username/llama-3.1-8b-indas-lora")

# Use the model for inference
prompt = "Define the objective of Ind AS 1."
inputs = tokenizer(prompt, return_tensors="pt")  # includes attention mask
outputs = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
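Since the base model is an Instruct checkpoint, prompts generally work better wrapped in the Llama 3.1 chat template (which `tokenizer.apply_chat_template` produces automatically). The sketch below builds that format by hand so the structure is visible; the system message is an illustrative assumption.

```python
def build_llama31_prompt(
    user_message: str,
    system_message: str = "You are an expert on Indian Accounting Standards (Ind AS).",
) -> str:
    """Wrap a question in the Llama 3.1 Instruct chat format."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_message}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama31_prompt("Define the objective of Ind AS 1.")
print(prompt)
```

The trailing assistant header cues the model to begin its answer there; passing this string to the tokenizer in place of the bare question should match what `apply_chat_template(..., add_generation_prompt=True)` yields.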

Base Model Notes

  • Uses the official meta-llama/Llama-3.1-8B-Instruct checkpoint
  • Llama 3.1 offers improved instruction following and stronger complex reasoning compared to Llama 2
  • Access requires accepting Meta's license agreement on Hugging Face; approval is usually granted quickly

Limitations

  • This model is specifically trained on Indian Accounting Standards and may not perform well on other accounting standards (IFRS, US GAAP, etc.)
  • The training data focuses on Ind AS 1 and related standards
  • Responses should be verified with official accounting standards documents
  • Not suitable for providing legal or investment advice

Training Infrastructure

  • Platform: Google Colab Pro+
  • GPU: T4/A100 (depending on availability)
  • Memory Optimization: 4-bit quantization with LoRA
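A back-of-the-envelope calculation shows why 4-bit quantization plus LoRA fits on a single T4/A100. The layer shapes are Llama-3.1-8B's published architecture; the assumption that LoRA adapts all seven linear projections is illustrative, since the card does not list the target modules.

```python
# Llama-3.1-8B architecture figures
hidden, layers = 4096, 32
kv_dim = 1024          # 8 KV heads * 128 head dim (grouped-query attention)
intermediate = 14336
r = 64                 # LoRA rank from the card

# LoRA adds r*(d_in + d_out) parameters per adapted linear layer
per_layer = (
    2 * r * (hidden + hidden)          # q_proj, o_proj
    + 2 * r * (hidden + kv_dim)        # k_proj, v_proj
    + 3 * r * (hidden + intermediate)  # gate_proj, up_proj, down_proj
)
lora_params = per_layer * layers
base_params = 8_030_000_000            # ~8B

base_gb = base_params * 0.5 / 1e9      # 4-bit = 0.5 byte per parameter
lora_gb = lora_params * 2 / 1e9        # fp16 adapters = 2 bytes per parameter

print(f"LoRA params: {lora_params / 1e6:.0f}M")
print(f"Base (4-bit): {base_gb:.1f} GB, adapters (fp16): {lora_gb:.2f} GB")
```

Under these assumptions the quantized base weights take roughly 4 GB and the trainable adapters well under half a gigabyte, leaving headroom for activations and optimizer state even on a 16 GB T4.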

Citation

If you use this model, please cite:

@misc{llama3.1-indas-lora,
  title={Llama-3.1-8B-Instruct LoRA Fine-tuned on Indian Accounting Standards},
  author={Your Name},
  year={2024},
  url={https://huggingface.co/0xadityam/llama-3.1-8b-indas-lora}
}