---
base_model: unsloth/qwen2.5-coder-32b-instruct-bnb-4bit
library_name: peft
datasets:
- 100suping/ko-bird-sql-schema
- won75/text_to_sql_ko
language:
- ko
pipeline_tag: text-generation
tags:
- SQL
- lora
- adapter
- instruction-tuning
---
# 100suping/Qwen2.5-Coder-34B-Instruct-kosql-adapter
<!-- Provide a quick summary of what the model is/does. -->
This repo contains a **LoRA (Low-Rank Adaptation) adapter** for [unsloth/Qwen2.5-Coder-32B-Instruct-bnb-4bit](https://huggingface.co/unsloth/Qwen2.5-Coder-32B-Instruct-bnb-4bit).
The adapter was created through **instruction tuning** on Korean text-to-SQL data.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Base Model:** unsloth/Qwen2.5-Coder-32B-Instruct-bnb-4bit
- **Model type:** Causal language model with a LoRA adapter (PEFT)
- **Task:** Instruction following (Korean text-to-SQL)
- **Language(s) (NLP):** Korean (input questions); SQL (output)
- **Training Data:** [100suping/ko-bird-sql-schema](https://huggingface.co/datasets/100suping/ko-bird-sql-schema), [won75/text_to_sql_ko](https://huggingface.co/datasets/won75/text_to_sql_ko)
## How to Get Started with the Model
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
To use this LoRA adapter, refer to the following code:
### Prompt
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
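The prompt used during tuning was not published with this adapter. The template below is a hypothetical sketch of a schema-grounded text-to-SQL prompt consistent with the training datasets listed above; `{schema}` and `{question}` are placeholders to fill in.
```
You are a SQL expert. Using only the database schema below, write one SQL
query that answers the (Korean) question. Return only the SQL.

### Schema:
{schema}

### Question:
{question}

### SQL:
```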
### Inference
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
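A minimal loading sketch using `transformers` and `peft`: load the 4-bit base model, attach this adapter with `PeftModel.from_pretrained`, and generate. The example question is illustrative; adjust dtype and device settings to your hardware.
```
# Sketch, not the authors' published code. Assumes: pip install torch transformers peft bitsandbytes
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "unsloth/Qwen2.5-Coder-32B-Instruct-bnb-4bit"            # 4-bit base model
ADAPTER = "100suping/Qwen2.5-Coder-34B-Instruct-kosql-adapter"  # this repo

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto", torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, ADAPTER)  # attach the LoRA weights

messages = [
    {"role": "system", "content": "Write one SQL query that answers the question, given the schema."},
    {"role": "user", "content": "스키마:\nCREATE TABLE users (id INT, name TEXT);\n질문: 사용자 수를 구하세요."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids=input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```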
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The adapter was tuned on [100suping/ko-bird-sql-schema](https://huggingface.co/datasets/100suping/ko-bird-sql-schema) and [won75/text_to_sql_ko](https://huggingface.co/datasets/won75/text_to_sql_ko), two Korean text-to-SQL datasets; see their dataset cards for details on contents and preprocessing.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
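The training script was not released. The sketch below shows a typical QLoRA-style setup consistent with the 4-bit base model and PEFT; the rank, alpha, and target modules are assumptions, not the published values.
```
# Illustrative QLoRA-style setup; not the adapter's actual training code.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2.5-Coder-32B-Instruct-bnb-4bit", device_map="auto"
)
base = prepare_model_for_kbit_training(base)  # make the 4-bit model trainable

# Rank, alpha, dropout, and target modules below are assumptions.
config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the adapter weights train
```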
#### Preprocessing [optional]
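Preprocessing details were not released either. A plausible step, sketched below, flattens each dataset record into a single training text; the column names are assumptions, so check the dataset cards for the real fields.
```
# Hypothetical flattening of one dataset into training text.
# Column names ('schema', 'question', 'answer') are assumptions.
from datasets import load_dataset

ds = load_dataset("won75/text_to_sql_ko", split="train")

def to_text(row):
    prompt = (
        f"### Schema:\n{row['schema']}\n\n"
        f"### Question:\n{row['question']}\n\n### SQL:\n"
    )
    return {"text": prompt + row["answer"]}

ds = ds.map(to_text)
```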
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2