|
--- |
|
datasets: |
|
- beomi/KoAlpaca-v1.1a |
|
language: |
|
- ko |
|
- en |
|
base_model: |
|
- meta-llama/Llama-3.3-70B-Instruct |
|
base_model_relation: quantized |
|
library_name: transformers |
|
--- |
|
## Model Details |
|
|
|
**Llama-3.3-70B-Korean-4Bit-bnb** |
|
|
|
Llama-3.3-70B-Korean-4Bit-bnb is a continued-pretrained (4-bit quantization fine-tuned) language model based on Llama-3.3-70B-Instruct.
|
|
|
This model was trained entirely on publicly available resources from the Hugging Face dataset hub: preprocessed Korean texts.
|
|
|
Training was done on 4x A6000 48GB GPUs.
|
|
|
**Model developers** Dongwook Min (mindw96) |
|
|
|
**Dataset** beomi/KoAlpaca-v1.1a |
|
|
|
**Input** The model takes text input only.
|
|
|
**Output** The model generates text and code.
|
|
|
**Model Release Date** 04.01.2025. |
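
Since the weights are stored in bitsandbytes 4-bit format, the model can be loaded with the standard `transformers` API. The sketch below is a minimal, hedged example: the repo id is an assumption inferred from the developer handle and model name, so adjust it to the actual Hugging Face repository.

```python
# Minimal usage sketch. The repo id is an ASSUMPTION based on the
# developer handle (mindw96) and model name; replace it with the
# actual Hugging Face repo id if it differs.
model_id = "mindw96/Llama-3.3-70B-Korean-4Bit-bnb"  # assumed repo id


def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Load the quantized model and generate a reply to a Korean prompt."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # The checkpoint is already quantized with bitsandbytes (4-bit),
    # so no extra quantization config is needed at load time.
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    # e.g. "What is the capital of Korea?" in Korean
    print(generate("한국의 수도는 어디인가요?"))
```

Note that a 70B model, even in 4-bit, still needs roughly 40 GB of GPU memory; `device_map="auto"` lets Accelerate shard the layers across whatever GPUs are available.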