# kanana-nano-2.1b-instruct-Ko-Reasoning
A large-scale Korean reasoning model fine-tuned from kakaocorp/kanana-nano-2.1b-instruct, designed to excel in logical and multi-hop reasoning tasks in Korean.
## Overview
kanana-nano-2.1b-instruct-Ko-Reasoning is a fine-tuned version of kakaocorp/kanana-nano-2.1b-instruct, specifically optimized for logical reasoning in Korean. This model is part of a broader research initiative to explore:
- The transition from multilingual reasoning LLMs to Korean-specialized reasoning models
- The enhancement of non-reasoning Korean language models into reasoning-capable variants
- The development of open-access models that rival proprietary alternatives in complex reasoning tasks
This model was fine-tuned using a large-scale Korean-English instruction dataset containing diverse multi-hop questions, symbolic logic tasks, and human-crafted reasoning steps.
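For illustration only, a single record in such a chat-style instruction dataset could look like the sketch below; the dataset itself has not been released, so the field names and contents here are purely hypothetical.

```python
# Purely hypothetical record illustrating a chat-style multi-hop reasoning sample;
# the actual dataset schema and contents are not public.
example_record = {
    "messages": [
        {
            "role": "user",
            "content": "Which city hosted the 1988 Summer Olympics, and is it larger than Busan?",
        },
        {
            "role": "assistant",
            "content": (
                "Step 1: The 1988 Summer Olympics were hosted by Seoul.\n"
                "Step 2: Seoul (about 9.4M people) is larger than Busan (about 3.3M people).\n"
                "Answer: Seoul, and yes, it is larger than Busan."
            ),
        },
    ]
}
```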
## Benchmark Results
- All benchmarks were measured with the 0-shot CoT (Chain-of-Thought) method.
- Depending on the benchmark, the score is either answer accuracy (%) or a 1-10 rating from a judge model; a minimal scoring sketch follows the table below.
| Benchmark | Score |
|---|---|
| GPQA diamond | 44.3 |
| GSM8K | 54.1 |
| HAERAE | 50.3 |
| KSM | 35.2 |
| Math500 | 66.9 |
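For reference, the accuracy-style scores (e.g. GSM8K, Math500) could be computed roughly as below. This is a minimal sketch; the answer-extraction heuristic and function names are assumptions, not the evaluation harness actually used.

```python
import re

def extract_final_answer(generation: str) -> str | None:
    """Heuristic: take the last number appearing in a chain-of-thought generation."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", generation.replace(",", ""))
    return numbers[-1] if numbers else None

def accuracy(generations: list[str], references: list[str]) -> float:
    """Exact-match accuracy (%) of extracted answers against gold answers."""
    correct = sum(
        extract_final_answer(g) == r.strip()
        for g, r in zip(generations, references)
    )
    return 100.0 * correct / len(references)
```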
## Usage
Install `transformers` >= 4.50:

```bash
pip install -U transformers
```
Basic example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "DimensionSTP/kanana-nano-2.1b-instruct-Ko-Reasoning"

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "서울과 부산 중 어디가 더 커?"  # "Which is bigger, Seoul or Busan?"
messages = [
    {"role": "user", "content": prompt}
]

# Build the chat prompt and tokenize it
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens from the output
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=4096
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
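Alternatively, recent transformers releases let the text-generation pipeline accept chat messages directly, which shortens the boilerplate; a minimal sketch (the generation settings here are illustrative):

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="DimensionSTP/kanana-nano-2.1b-instruct-Ko-Reasoning",
    torch_dtype="auto",
    device_map="auto",
)

messages = [{"role": "user", "content": "서울과 부산 중 어디가 더 커?"}]  # "Which is bigger, Seoul or Busan?"
outputs = pipe(messages, max_new_tokens=4096)

# The pipeline returns the conversation with the assistant turn appended.
print(outputs[0]["generated_text"][-1]["content"])
```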
## Base Model: kakaocorp/kanana-nano-2.1b-instruct
The base model, kakaocorp/kanana-nano-2.1b-instruct, is an LLM developed by the Kakao Kanana team. For more technical details, refer to the Kanana Technical Report.
## Model Architecture
| Property | Value |
|---|---|
| Architecture | LlamaForCausalLM |
| Parameters | 2.1B |
| Context Length | 8,192 tokens |
| Tokenizer | LlamaTokenizer (BPE) |
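These values can be cross-checked against the published configuration; a minimal sketch using `AutoConfig` (the attribute names follow the standard Llama config and are assumed to apply here):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("DimensionSTP/kanana-nano-2.1b-instruct-Ko-Reasoning")

print(config.architectures)            # expected: ["LlamaForCausalLM"]
print(config.max_position_embeddings)  # context length (expected: 8192)
print(config.vocab_size)               # tokenizer vocabulary size
```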
## Release Date
Mar 2025
This model was released in March 2025 as part of the Ko-Reasoning Series, which focuses on pushing the boundaries of open-source reasoning in Korean using modern LLMs.
## Contact
For questions, collaborations, or deployment inquiries, please contact:
- Hugging Face: https://huggingface.co/DimensionSTP
- Email: [[email protected]]
## Available Checkpoints
- `main`: Final stable version from the `last` branch
- All training artifacts available (tokenizer, config, model weights)
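To pin the checkpoint explicitly, the branch name can be passed as the `revision` argument to `from_pretrained`; a minimal sketch:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "DimensionSTP/kanana-nano-2.1b-instruct-Ko-Reasoning"

# Explicitly load the final stable checkpoint from the main branch.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    revision="main",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name, revision="main")
```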