---
datasets:
  - beomi/KoAlpaca-v1.1a
language:
  - ko
  - en
base_model:
  - meta-llama/Llama-3.3-70B-Instruct
base_model_relation: quantized
library_name: transformers
---

# Model Details

## Llama-3.3-70B-Korean-4Bit-bnb

Llama-3.3-70B-Korean-4Bit-bnb is a continually pretrained language model (fine-tuned under 4-bit quantization) based on Llama-3.3-70B-Instruct.

The model was trained entirely on publicly available resources from the Hugging Face dataset hub: preprocessed Korean texts.

Training was done on 4× A6000 48GB GPUs.
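The card does not publish the training script. The snippet below is only a minimal sketch of how a base model can be loaded under 4-bit bitsandbytes quantization for this kind of fine-tuning; the NF4 quantization type, double quantization, and bfloat16 compute dtype are assumed defaults, not details taken from the card.

```python
# Hedged sketch only: the exact quantization/training configuration is not
# specified in the card. NF4, double quantization, and bfloat16 compute dtype
# are common defaults, assumed here for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "meta-llama/Llama-3.3-70B-Instruct"  # gated repo; requires access approval

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4 bit on load
    bnb_4bit_quant_type="nf4",              # assumed quantization type
    bnb_4bit_use_double_quant=True,         # assumed
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumed compute dtype
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",  # shard the 70B weights across available GPUs (e.g. 4x A6000 48GB)
)
```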

**Model developers** Dongwook Min (mindw96)

**Dataset** beomi/KoAlpaca-v1.1a

**Variations** Llama-3.3-70B-Korean-4Bit-bnb comes in one size: 70B.

**Input** Models input text only.

**Output** Models generate text and code.

**Model Release Date** 04.01.2025.
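
The card does not include a usage snippet; below is a minimal inference sketch with transformers. The repository id "mindw96/Llama-3.3-70B-Korean-4Bit-bnb" is assumed from the model name, and the prompt is only an example.

```python
# Minimal inference sketch (assumed repo id, not taken from the card).
# The checkpoint is already 4-bit bitsandbytes-quantized, so no extra
# quantization config is needed at load time.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mindw96/Llama-3.3-70B-Korean-4Bit-bnb"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # spread weights across available GPUs
    torch_dtype=torch.bfloat16, # compute dtype for non-quantized layers
)

messages = [{"role": "user", "content": "한국의 수도는 어디인가요?"}]  # "What is the capital of Korea?"
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```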