---
library_name: transformers
tags:
- unsloth
- qlora
- lora
- llama-3.2
- instruction-tuned
- bf16
- 4bit
---

# Model Card: Sai2076/LLLMA_FINETUNED_PROJEN

An instruction-tuned **LLaMA-3.2** model fine-tuned with **Unsloth + QLoRA** using 🤗 **Transformers**.  
This model is part of the **ProjGen project**, aimed at enhancing developer productivity through automated project generation and structured code scaffolding.

---

## Model Details

### Model Description
- **Base model:** `meta-llama/Llama-3.2-<SIZE>-Instruct` <!-- replace SIZE with e.g. 8B/70B -->
- **Finetuning method:** Unsloth + QLoRA (LoRA adapters)
- **Precision (train):** 4-bit NF4 quantization (bitsandbytes) + bf16 compute
- **Context length:** 4096
- **Task(s):** Instruction following & project/code generation
- **License:** Llama 3.2 Community License (inherited from the base model)
- **Developed by:** Sai Praneeth (UAB, ProjGen Project)
- **Finetuned from:** `meta-llama/Llama-3.2-<SIZE>-Instruct`
- **Shared by:** [Sai2076](https://huggingface.co/Sai2076)

### Model Sources
- **Repository:** [Sai2076/LLLMA_FINETUNED_PROJEN](https://huggingface.co/Sai2076/LLLMA_FINETUNED_PROJEN)
- **Project Paper:** ProjGen – Enhanced Developer Productivity for Flask Project Generation with a RAG-Enhanced Fine-Tuned Local LLM
- **Demo (optional):** [link to demo if available]

---

## Intended Uses & Limitations

### Direct Use
- Generating Flask/Django/Streamlit project structures automatically.
- Instruction-following tasks related to software engineering and code generation.

### Downstream Use
- Further fine-tuning on domain-specific datasets (e.g., medical imaging, finance, etc.).
- Integration into developer assistants and productivity tools.

### Out-of-Scope / Limitations
- Not suitable for medical, legal, or financial decision-making without human review.
- May hallucinate or produce insecure/inefficient code if not monitored.

---

## Bias, Risks, and Limitations
The model inherits risks from the base **LLaMA-3.2** model:
- Possible hallucinations and factual inaccuracies.
- Dataset/domain biases reflected in responses.
- Outputs should be validated before production deployment.

**Recommendation:** Always pair outputs with testing, validation, and human oversight.

---

## Getting Started

### Inference (PEFT adapter form)
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "Sai2076/LLLMA_FINETUNED_PROJEN"

tok = AutoTokenizer.from_pretrained(model_id)

# 4-bit NF4 with double quantization and bf16 compute, matching the training setup.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb,
    device_map="auto",
)

prompt = "Generate a Flask project with login, dashboard, and reports."
inputs = tok(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tok.decode(outputs[0], skip_special_tokens=True))
```
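
If this repository hosts LoRA adapter weights rather than a merged checkpoint, the adapters can also be loaded explicitly through `peft`. The sketch below is a hedged alternative, assuming the repo contains an `adapter_config.json` pointing at the Llama-3.2 base model.

```python
# Sketch: explicit adapter loading via peft (assumes this repo stores LoRA adapters).
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "Sai2076/LLLMA_FINETUNED_PROJEN"

tok = AutoTokenizer.from_pretrained(adapter_id)
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,  # bf16 to match the training compute dtype
)

prompt = "Generate a Flask project with login, dashboard, and reports."
inputs = tok(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tok.decode(outputs[0], skip_special_tokens=True))
```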

---

## Training Details

### Data
- **Dataset:** Custom **ProjGen dataset** built from structured Flask/Django/Streamlit projects and instructions.
- **Size:** [Fill in #samples / tokens]
- **Preprocessing:** Chat-style instruction formatting (system/user/assistant), deduplication, and truncation at 4096 tokens (see the formatting sketch below).
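
A minimal sketch of the chat-style formatting described above, using the chat template shipped with the tokenizer. The field names (`instruction`, `response`) and the system prompt are illustrative assumptions, not the actual ProjGen dataset schema.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Sai2076/LLLMA_FINETUNED_PROJEN")

# Illustrative record; the real ProjGen schema may differ.
example = {
    "instruction": "Generate a Flask project with login and a dashboard.",
    "response": "project/\n├── app.py\n├── templates/\n...",
}

messages = [
    {"role": "system", "content": "You are ProjGen, an assistant that scaffolds web projects."},
    {"role": "user", "content": example["instruction"]},
    {"role": "assistant", "content": example["response"]},
]

# Render with the tokenizer's built-in chat template, then truncate to the
# 4096-token context length used during training.
text = tok.apply_chat_template(messages, tokenize=False)
ids = tok(text, truncation=True, max_length=4096)["input_ids"]
```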

### Training Procedure
- **Quantization:** 4-bit NF4 + double quantization (bitsandbytes)
- **LoRA Config:**  
  - `r`: 16  
  - `alpha`: 32  
  - `dropout`: 0.05  
  - Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- **Optimizer:** Paged AdamW (32-bit)  
- **LR / Schedule:** 2e-4 with cosine decay + warmup  
- **Batch size:** [fill in effective batch size]  
- **Epochs/Steps:** [fill in from ipynb]  
- **Precision:** bf16 mixed precision  
- **Grad checkpointing:** Enabled  
- **Flash attention:** Enabled (Unsloth optimization); see the configuration sketch below.
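
The hyperparameters above translate roughly into the following `bitsandbytes`/`peft` configuration. This is a sketch of the setup, not the exact training script: the Unsloth-specific wrapping (`FastLanguageModel`) is omitted, and the warmup ratio is an assumed value.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization with double quantization and bf16 compute.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-<SIZE>-Instruct",  # replace <SIZE> as in the card above
    quantization_config=bnb,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

# LoRA adapters matching the values listed above.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()

args = TrainingArguments(
    output_dir="projgen-qlora",
    optim="paged_adamw_32bit",       # paged AdamW (32-bit)
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,               # assumed warmup fraction
    bf16=True,
    gradient_checkpointing=True,
    # per_device_train_batch_size / num_train_epochs: fill in from the notebook
)
```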

### Training Hardware
- **GPU:** RTX 4070 (12GB VRAM) [replace with actual if different]
- **Training time:** [fill in hours]
- **Checkpoint size:** LoRA adapter ~200 MB; merged model size depends on the base LLaMA size (see the merge sketch below)
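
For deployment without `peft` at inference time, the adapter can be folded into the base weights. A minimal sketch, assuming the repo stores the LoRA adapter and that enough memory is available to hold the base model in bf16 (merging is not supported on 4-bit weights):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "Sai2076/LLLMA_FINETUNED_PROJEN"

# Load base + adapter in bf16, then merge the LoRA deltas into the base weights.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype=torch.bfloat16)
merged = model.merge_and_unload()

merged.save_pretrained("projgen-merged")
AutoTokenizer.from_pretrained(adapter_id).save_pretrained("projgen-merged")
```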

---

## Evaluation

### Data & Metrics
- **Validation set:** Held-out portion of ProjGen dataset
- **Metrics:**  
  - Instruction Following: Exact Match, ROUGE-L  
  - Code Generation: Pass@k (via unit-test evaluation; see the estimator sketch below)
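
Pass@k here is assumed to mean the standard unbiased estimator of Chen et al. (2021): with `n` sampled completions per task, of which `c` pass the unit tests, pass@k = 1 − C(n−c, k)/C(n, k). A small sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples
    (drawn from n completions, of which c pass the tests) is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 samples per task, 3 passing -> pass@1 = 0.30
print(round(pass_at_k(n=10, c=3, k=1), 2))
```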

### Results
| Metric                | Value  | Notes                 |
|-----------------------|--------|-----------------------|
| Validation Loss       | ___    | From training logs    |
| Exact Match / F1      | ___    |                       |
| ROUGE-L / BLEU        | ___    |                       |
| Pass@1                | ___    |                       |

---

## Environmental Impact (estimate)
- **Hardware:** RTX 4070 (12GB VRAM) [replace with actual]  
- **Hours:** [fill in H]  
- **Region/Provider:** [cloud/on-prem]  
- **Estimated CO₂e:** [fill in]; estimate with the [ML CO₂ Impact calculator](https://mlco2.github.io/impact#compute)

---

## Citation

If you use this model, please cite the base model and this project:

**BibTeX (base, example):**
```bibtex
@article{touvron2023llama,
  title={LLaMA: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and others},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2023}
}
```

**Your work (fill in):**
```bibtex
@misc{projgen2025,
  title = {ProjGen: Enhanced Developer Productivity for Flask Project Generation with a RAG-Enhanced Fine-Tuned Local LLM},
  author = {Renduchinthala, Sai Praneeth},
  year = {2025},
  howpublished = {\url{https://huggingface.co/Sai2076/LLLMA_FINETUNED_PROJEN}}
}
```

---

## Contact
- **Author:** Sai Praneeth Kumar (UAB)  
- **HF Profile:** [Sai2076](https://huggingface.co/Sai2076)