---
base_model: unsloth/qwen2.5-coder-32b-instruct-bnb-4bit
library_name: peft
datasets:
- 100suping/ko-bird-sql-schema
- won75/text_to_sql_ko
language:
- ko
pipeline_tag: text-generation
tags:
- SQL
- lora
- adapter
- instruction-tuning
---

# 100suping/Qwen2.5-Coder-34B-Instruct-kosql-adapter

<!-- Provide a quick summary of what the model is/does. -->
This repo contains a **LoRA (Low-Rank Adaptation) adapter** for [unsloth/qwen2.5-coder-32b-instruct-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-coder-32b-instruct-bnb-4bit).

The adapter was trained to improve the model's SQL generation capability for Korean questions in a multi-database context.

This adapter was created through **instruction tuning**.


## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->


- **Base Model:** unsloth/Qwen2.5-Coder-32B-Instruct
- **Task:** Instruction following (Korean text-to-SQL)
- **Language(s):** Korean (the base model is multilingual)
- **Training Data:** 100suping/ko-bird-sql-schema, won75/text_to_sql_ko
- **Model type:** Causal language model (LoRA adapter)

## How to Get Started with the Model

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
To use this LoRA adapter, refer to the following code:

### Load Adapter

```python
from transformers import BitsAndBytesConfig

def get_bnb_config(bit=8):
    """Return an 8-bit quantization config for bit == 8; otherwise fall back to 4-bit."""
    if bit == 8:
        return BitsAndBytesConfig(load_in_8bit=True)
    print(f"You passed bit={bit}; any value other than 8 falls back to the 4-bit config.")
    return BitsAndBytesConfig(load_in_4bit=True)
```
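Note that `get_bnb_config` falls back to 4-bit for any argument other than 8, so `get_bnb_config(bit=4)` and `get_bnb_config(bit=16)` both return `BitsAndBytesConfig(load_in_4bit=True)`.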

```python
from unsloth import FastLanguageModel

model_name = "unsloth/Qwen2.5-Coder-32B-Instruct"
adapter_revision = "checkpoint-200"  # available revisions: checkpoint-100 through checkpoint-350, or main (checkpoint-384)

bnb_config = get_bnb_config(bit=8)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=model_name,
    dtype=None,  # None = auto-detect (bfloat16 on supported GPUs)
    quantization_config=bnb_config,
)
model.load_adapter("100suping/Qwen2.5-Coder-34B-Instruct-kosql-adapter", revision=adapter_revision)
```
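If you prefer plain `transformers` + `peft` over Unsloth, a minimal loading sketch (untested; it assumes the pre-quantized 4-bit base repo and loads the adapter from `main` unless a `revision` is passed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/qwen2.5-coder-32b-instruct-bnb-4bit"  # pre-quantized 4-bit base
adapter_id = "100suping/Qwen2.5-Coder-34B-Instruct-kosql-adapter"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# The repo's bundled quantization_config is picked up automatically
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # pass revision="checkpoint-200" for a specific checkpoint
```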

### Prompt

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

```python
# English gloss: "You are a team member in an organization that converts user input
# into MySQL queries. Your job is to use the (context) below, which contains the DB
# name and the metadata of the tables in that DB, to write a MySQL query that fits
# the given question (user_question)."
GENERAL_QUERY_PREFIX = """당신은 사용자의 입력을 MySQL 쿼리문으로 바꾸어주는 조직의 팀원입니다.
당신의 임무는 DB 이름 그리고 DB내 테이블의 메타 정보가 담긴 아래의 (context)를 이용해서 주어진 질문(user_question)에 걸맞는 MySQL 쿼리문을 작성하는 것입니다.

(context)
{context}
"""

# English gloss: "Please write a grammatically correct MySQL query for the given
# question (user_question)."
GENERATE_QUERY_INSTRUCTIONS = """
주어진 질문(user_question)에 대해서 문법적으로 올바른 MySQL 쿼리문을 작성해 주세요.
"""
```

### Example input

````
<|im_start|>system
당신은 사용자의 입력을 MySQL 쿼리문으로 바꾸어주는 조직의 팀원입니다.
당신의 임무는 DB 이름 그리고 DB내 테이블의 메타 정보가 담긴 아래의 (context)를 이용해서 주어진 질문(user_question)에 걸맞는 MySQL 쿼리문을 작성하는 것입니다.

(context)
DB: movie_platform
table DDL: CREATE TABLE `movies` ( `movie_id` INTEGER `movie_title` TEXT `movie_release_year` INTEGER `movie_url` TEXT `movie_title_language` TEXT `movie_popularity` INTEGER `movie_image_url` TEXT `director_id` TEXT `director_name` TEXT `director_url` TEXT PRIMARY KEY (movie_id) FOREIGN KEY (user_id) REFERENCES `lists_users`(user_id) FOREIGN KEY (user_id) REFERENCES `lists_users`(user_id) FOREIGN KEY (user_id) REFERENCES `lists`(user_id) FOREIGN KEY (list_id) REFERENCES `lists`(list_id) FOREIGN KEY (user_id) REFERENCES `ratings_users`(user_id) FOREIGN KEY (user_id) REFERENCES `lists_users`(user_id) FOREIGN KEY (movie_id) REFERENCES `movies`(movie_id) );


주어진 질문(user_question)에 대해서 문법적으로 올바른 MySQL 쿼리문을 작성해 주세요.
<|im_end|>
<|im_start|>user
가장 인기 있는 영화는 무엇인가요? 그 영화는 언제 개봉되었고 누가 감독인가요?<|im_end|>
<|im_start|>assistant
```sql
SELECT movie_title, movie_release_year, director_name FROM movies ORDER BY movie_popularity DESC LIMIT 1 ;
```<|im_end|>
````

(The user question asks: "What is the most popular movie? When was it released, and who directed it?")


### Inference

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

```python
# Example inputs (replace with your own schema context and question)
context = "DB: movie_platform\ntable DDL: CREATE TABLE `movies` ( ... );"
user_question = "가장 인기 있는 영화는 무엇인가요?"  # "What is the most popular movie?"
max_new_tokens = 1024

messages = [
    {"role": "system", "content": GENERAL_QUERY_PREFIX.format(context=context) + GENERATE_QUERY_INSTRUCTIONS},
    {"role": "user", "content": "user_question: " + user_question},
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=max_new_tokens,
)
# Keep only the newly generated tokens, dropping the prompt
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
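As the example above shows, the assistant wraps its answer in a fenced `sql` code block. A minimal, hypothetical helper (`extract_sql` is not part of this repo) to recover the bare query:

```python
import re

def extract_sql(response: str) -> str:
    """Extract the query from a fenced sql block, falling back to the raw text."""
    match = re.search(r"```sql\s*(.*?)\s*```", response, re.DOTALL)
    return match.group(1).strip() if match else response.strip()

sql_query = extract_sql(response)
```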

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.


## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

[More Information Needed]

### Preprocess Functions

```python
def get_conversation_data(examples):
    """Build chat-format conversations from (question, schema, SQL) triples."""
    questions = examples['question']
    schemas = examples['schema']
    sql_queries = examples['SQL']
    convos = []
    for question, schema, sql in zip(questions, schemas, sql_queries):
        conv = [
            {"role": "system", "content": GENERAL_QUERY_PREFIX.format(context=schema) + GENERATE_QUERY_INSTRUCTIONS},
            {"role": "user", "content": question},
            {"role": "assistant", "content": "```sql\n" + sql + ";\n```"},
        ]
        convos.append(conv)
    return {"conversation": convos}

def formatting_prompts_func(examples):
    """Render each conversation into a single training string via the chat template."""
    convos = examples["conversation"]
    texts = [tokenizer.apply_chat_template(convo, tokenize=False, add_generation_prompt=False) for convo in convos]
    return {"text": texts}
```
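For reference, a sketch of how these functions might be applied with the `datasets` library (the `question`, `schema`, and `SQL` column names are assumptions inferred from `get_conversation_data` above):

```python
from datasets import load_dataset

dataset = load_dataset("won75/text_to_sql_ko", split="train")
dataset = dataset.map(get_conversation_data, batched=True)
dataset = dataset.map(formatting_prompts_func, batched=True)
```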

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.13.2