---
library_name: transformers
tags:
- code
license: mit
base_model:
- distilbert/distilgpt2
datasets:
- teven/code_contests
language:
- en
---

# Model Card for sriniidhi/gpt2-coding

<!-- Provide a quick summary of what the model is/does. -->

A LoRA fine-tuned version of distilgpt2 that generates competitive-programming-style code solutions and is designed to run on low-resource hardware.

## Model Details

### Model Description

This model is a LoRA fine-tuned version of distilgpt2, optimized for generating programming solutions in the style of competitive programming platforms such as LeetCode and Codeforces. It was trained on a custom dataset of ~5000 coding questions and answers and is designed to run on low-resource hardware (an NVIDIA RTX 3050 with 4GB of VRAM). The model is part of a larger project that incorporates Retrieval-Augmented Generation (RAG) to personalize outputs according to a user's historical coding patterns.

- **Developed by:** https://github.com/Srinidhi-Yoganand
- **Funded by:** Self-funded
- **Shared by:** sriniidhi
- **Model type:** Causal Language Model (decoder-only)
- **Language(s) (NLP):** English (programming-focused)
- **License:** MIT
- **Finetuned from model:** distilgpt2

### Model Sources 

<!-- Provide the basic links for the model. -->

- **Repository:** [Link]
- **Demo:** [Link]

## Uses

### Direct Use

This model can be used for:

- Auto-completing coding problems with competitive programming-style answers

- Assisting in learning algorithms by showing step-by-step code solutions

- Experimenting with personalized coding assistants

### Downstream Use 

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

It can be plugged into systems using RAG to personalize answers by analyzing a user’s prior code submissions, or integrated into IDE plugins or chat-based tutoring systems.
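
As a rough illustration of that integration, the sketch below prepends snippets retrieved from a user's prior submissions to the problem prompt before generation. The `retrieve_user_snippets` helper is a hypothetical placeholder for whatever retrieval backend the surrounding system provides; only the model and tokenizer IDs come from this card.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("sriniidhi/gpt2-coding")
tokenizer = AutoTokenizer.from_pretrained("sriniidhi/gpt2-coding")

def retrieve_user_snippets(problem: str) -> list[str]:
    # Hypothetical retrieval step: a real RAG setup would query a vector
    # store of the user's prior submissions for problems similar to this one.
    return []

def personalized_completion(problem: str, max_new_tokens: int = 100) -> str:
    # Prepend retrieved examples so the model can imitate the user's style.
    context = "\n\n".join(retrieve_user_snippets(problem))
    prompt = f"{context}\n\n{problem}" if context else problem
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```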

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

- Generating natural language responses outside of programming tasks

- Mission-critical code generation (e.g., medical, legal, or financial systems)

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

- May hallucinate code or logic for uncommon problems.

- Not robust to complex multi-language code interactions or frameworks.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

- Use in combination with RAG for best personalization results.

- Validate generated code before execution.

- Avoid relying solely on this model for production-critical code generation.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and its tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("sriniidhi/gpt2-coding")
tokenizer = AutoTokenizer.from_pretrained("sriniidhi/gpt2-coding")

# Generate a completion for a partial function definition
prompt = "def two_sum(nums, target):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
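
If only the LoRA adapter weights (~75MB, as noted under Speeds, Sizes, Times) are distributed rather than merged weights, a `peft`-based load is an alternative. This is a hedged sketch: it assumes the repository ships a PEFT adapter config on top of distilgpt2, which you should verify before relying on it.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")
model = PeftModel.from_pretrained(base, "sriniidhi/gpt2-coding")
tokenizer = AutoTokenizer.from_pretrained("sriniidhi/gpt2-coding")

# Optionally merge the adapter into the base weights for faster inference.
model = model.merge_and_unload()
```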

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The dataset consists of 5000+ competitive programming Q&A-style examples extracted and formatted from LeetCode, Codeforces, and similar platforms. Each entry includes a problem prompt and a sample solution in Python, Java, or C++.
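
The front matter also lists `teven/code_contests` as an associated dataset. As a hedged starting point for assembling a similar corpus (not the exact extraction pipeline used here), it can be loaded with the `datasets` library:

```python
from datasets import load_dataset

# Inspect the available splits and columns before building a custom
# problem/solution corpus; the ~5000-example training set used for this
# model was curated and formatted separately.
ds = load_dataset("teven/code_contests")
print(ds)
```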

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

The model was fine-tuned with LoRA using the `peft` and `transformers` libraries on top of distilgpt2; a configuration sketch follows the hyperparameters below.

#### Preprocessing

- Tokenized using GPT2TokenizerFast

- Prompt-style formatting with problem + solution pairs (see the formatting sketch after this list)

- All code lowercased for consistency
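
A minimal sketch of one possible prompt template and tokenization, assuming a simple "Problem/Solution" layout; the exact template used during training is not documented here, so treat the format below as illustrative only.

```python
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("distilbert/distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 defines no pad token

def format_example(problem: str, solution: str) -> str:
    # Hypothetical prompt template; the training script may have used a
    # different layout for its problem + solution pairs.
    return f"### Problem:\n{problem}\n### Solution:\n{solution.lower()}"

text = format_example(
    "Return indices of two numbers that add up to target.",
    "def two_sum(nums, target): ...",
)
encoded = tokenizer(text, truncation=True, max_length=512, padding="max_length")
```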


#### Training Hyperparameters

- **Training regime:** fp16 mixed precision

- Epochs: 3

- Batch size: 2

- Learning rate: 5e-5

- LoRA rank: 8

- Max length: 512 tokens

- Optimizer: AdamW
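
For reference, a minimal sketch of a `peft` + `transformers` setup using the hyperparameters above; the LoRA target modules, alpha, and dataset handling are assumptions for illustration, not a record of the exact training script.

```python
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")

# LoRA rank 8 as reported; target modules and alpha are assumptions for GPT-2.
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"],
                      task_type="CAUSAL_LM")
model = get_peft_model(base, lora_cfg)

args = TrainingArguments(
    output_dir="gpt2-coding-lora",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    learning_rate=5e-5,
    fp16=True,
)

# `train_dataset` is assumed to be the tokenized problem + solution corpus
# (max length 512) described under Preprocessing.
# trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
# trainer.train()
```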
  
#### Speeds, Sizes, Times 

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
- Fine-tuned on Google Colab and locally on an RTX 3050 (4GB VRAM)

- Training duration: ~30 hours

- LoRA-adapted weights: ~75MB

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

Evaluation was performed on 10,000 held-out samples that were not used during training.

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

Manual evaluation of logical correctness and style similarity

### Results

- Approx. 80% logical match on simple algorithm questions

- Maintains coding style reasonably for most basic prompts

- Some struggles with complex nested logic

## Model Examination 

- Focused on learning indentation, loop constructs, and simple algorithm templates

- No external code memory or global context unless paired with RAG

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** NVIDIA RTX 3050 (4GB VRAM)
- **Hours used:** ~30 hrs 
- **Cloud Provider:** Google Colab (partial)
- **Compute Region:** India
- **Carbon Emitted:** ~2.15 kg CO₂ eq (estimated)

## Technical Specifications 

### Model Architecture and Objective

Decoder-only Transformer (distilgpt2, 6-layer GPT-2)
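
As a quick sanity check on the architecture, the base distilgpt2 config can be inspected directly (6 transformer layers, 12 attention heads, 768-dimensional hidden states):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("distilbert/distilgpt2")
print(cfg.n_layer, cfg.n_head, cfg.n_embd)  # 6 12 768
```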

### Compute Infrastructure

- Colab + Local

- PyTorch, transformers, peft, bitsandbytes

#### Hardware

- Ryzen 7
- RTX 3050

## Citation 

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**
```bibtex
@misc{gpt2-coding,
  author       = {Srinidhi},
  title        = {LoRA Fine-tuned distilgpt2 for Code Generation},
  year         = {2025},
  url          = {https://huggingface.co/sriniidhi/gpt2-coding}
}
```

## Model Card Contact

GitHub: https://github.com/Srinidhi-Yoganand

Hugging Face: https://huggingface.co/sriniidhi