---
base_model: unsloth/qwen2.5-coder-32b-instruct-bnb-4bit
library_name: peft
datasets:
- 100suping/ko-bird-sql-schema
- won75/text_to_sql_ko
language:
- ko
pipeline_tag: text-generation
tags:
- SQL
- lora
- adapter
- instruction-tuning
---
# 100suping/Qwen2.5-Coder-34B-Instruct-kosql-adapter
<!-- Provide a quick summary of what the model is/does. -->
This repo contains a **LoRA (Low-Rank Adaptation) adapter** for [unsloth/qwen2.5-coder-32b-instruct-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-coder-32b-instruct-bnb-4bit).
The adapter was created through **instruction tuning** for Korean text-to-SQL.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Base Model:** unsloth/Qwen2.5-Coder-32B-Instruct (4-bit, bitsandbytes)
- **Task:** Text-to-SQL instruction following (Korean)
- **Language(s):** Korean
- **Training Data:** [100suping/ko-bird-sql-schema](https://huggingface.co/datasets/100suping/ko-bird-sql-schema), [won75/text_to_sql_ko](https://huggingface.co/datasets/won75/text_to_sql_ko)
- **Model type:** Causal language model (LoRA adapter via PEFT)
## How to Get Started with the Model
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
To use this LoRA adapter, refer to the following code:
### Prompt
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
The exact prompt template used during tuning is not documented here; a schema-grounded text-to-SQL template along the following lines is a reasonable starting point:
```
You are a SQL expert. Given the database schema below, write a SQLite query that answers the question.

### Schema:
{schema}

### Question:
{question}

### SQL:
```
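A prompt of this shape can be filled in programmatically. `build_prompt` below is a hypothetical helper, not part of this repo:

```python
def build_prompt(schema: str, question: str) -> str:
    """Embed a database schema and a (Korean) question into a text-to-SQL prompt."""
    return (
        "You are a SQL expert. Given the database schema below, "
        "write a SQLite query that answers the question.\n\n"
        f"### Schema:\n{schema}\n\n"
        f"### Question:\n{question}\n\n"
        "### SQL:"
    )

# Example: a toy schema with a question in Korean ("How many users are there?")
prompt = build_prompt(
    "CREATE TABLE users (id INT, name TEXT);",
    "사용자 수는 몇 명인가요?",
)
```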
### Inference
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
A minimal loading-and-generation sketch using `transformers` + `peft` (assumes a CUDA GPU with enough memory for the 4-bit 32B base model):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "unsloth/qwen2.5-coder-32b-instruct-bnb-4bit"
ADAPTER_ID = "100suping/Qwen2.5-Coder-34B-Instruct-kosql-adapter"

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
model = AutoModelForCausalLM.from_pretrained(BASE_ID, device_map="auto")
model = PeftModel.from_pretrained(model, ADAPTER_ID)  # attach the LoRA adapter

messages = [
    {"role": "user", "content": "<schema + Korean question, formatted as in the Prompt section>"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
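Code models often wrap the generated query in a markdown fence; a small post-processing helper (hypothetical, not part of this repo) can extract the bare SQL:

```python
import re

def extract_sql(generated: str) -> str:
    """Return the SQL statement from a model response, stripping an optional ```sql fence."""
    match = re.search(r"```(?:sql)?\s*(.*?)```", generated, flags=re.DOTALL)
    sql = match.group(1) if match else generated
    return sql.strip()

print(extract_sql("```sql\nSELECT COUNT(*) FROM users;\n```"))  # SELECT COUNT(*) FROM users;
```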
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
The training script itself is not included in this repo; a representative PEFT LoRA setup (all hyperparameter values below are hypothetical, not the ones actually used) looks like:
```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=16,                      # hypothetical rank
    lora_alpha=32,             # hypothetical scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)  # base_model: the loaded Qwen2.5-Coder model
```
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2