---
datasets:
- beomi/KoAlpaca-v1.1a
language:
- ko
- en
base_model:
- meta-llama/Llama-3.3-70B-Instruct
base_model_relation: quantized
library_name: transformers
---
## Model Details
**Llama-3.3-70B-Korean-4Bit-bnb**
Llama-3.3-70B-Korean-4Bit-bnb is a continually pretrained (4-bit quantized and fine-tuned) language model based on Llama-3.3-70B-Instruct.
The model was trained entirely on publicly available resources from the Hugging Face dataset hub: preprocessed Korean texts.
Training was carried out on 4x A6000 48GB GPUs.
**Model developers** Dongwook Min (mindw96)
**Dataset** beomi/KoAlpaca-v1.1a
**Input** The model accepts text input only.
**Output** The model generates text and code.
**Model Release Date** 04.01.2025
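
## How to Use

Since the model is distributed as a 4-bit bitsandbytes quantization of Llama-3.3-70B-Instruct and targets the `transformers` library, it can be loaded with a standard `BitsAndBytesConfig`. The sketch below is a minimal usage example, not an official recipe: the repository id `mindw96/Llama-3.3-70B-Korean-4Bit-bnb` and the 4-bit settings (NF4, bfloat16 compute) are assumptions and may differ from the exact configuration used to produce this model.

```python
# Minimal usage sketch. Assumes transformers, accelerate, and bitsandbytes are installed;
# the repo id and quantization settings below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mindw96/Llama-3.3-70B-Korean-4Bit-bnb"  # assumed repository id

# 4-bit NF4 quantization config (typical bitsandbytes defaults, not necessarily the exact ones used)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spreads the 70B model across available GPUs
)

# Simple Korean prompt using the Llama 3 chat template
messages = [{"role": "user", "content": "한국의 수도는 어디인가요?"}]  # "What is the capital of Korea?"
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that even in 4-bit, a 70B model requires roughly 40 GB of GPU memory, so multi-GPU loading via `device_map="auto"` (or CPU offload) is usually needed on consumer hardware.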