This is a version of the base HCX-SEED Vision 3B with the vision encoder removed, the architecture llamafied, and a Hermes-style function-calling prompt added. The weights themselves are not modified.
```bash
vllm serve minpeter/HCX-SEED-FC-3B \
  --enforce-eager --port 4000 --served-model-name base \
  --enable-auto-tool-choice --tool-call-parser llama_hermes --tool-parser-plugin lh_tool_parser.py
```
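With the server running, tool calls can be exercised through vLLM's OpenAI-compatible API. A minimal sketch using the `openai` Python client; the `get_weather` schema, the `EMPTY` API key, and the localhost URL are illustrative assumptions, not part of this repository:

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:4000/v1", api_key="EMPTY")

# Illustrative tool schema (not shipped with the model).
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string", "description": "City name"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="base",  # matches --served-model-name above
    messages=[{"role": "user", "content": "What's the weather in Seoul right now?"}],
    tools=tools,
    tool_choice="auto",
)

# With --enable-auto-tool-choice, vLLM parses the model output into structured tool calls.
print(response.choices[0].message.tool_calls)
```

The BFCL commands below run the function-calling benchmark against this same endpoint.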
```bash
bfcl generate --model base --test-category simple,parallel,multiple,parallel_multiple,irrelevance,multi_turn_base --num-threads 30 --allow-overwrite --exclude-state-log
bfcl evaluate --model base --test-category simple,parallel,multiple,parallel_multiple,irrelevance,multi_turn_base
```
Comparison Table of Test Results by Model (BFCL)

Model: base

| Test category     | Accuracy |
|-------------------|----------|
| simple            | 0.7575   |
| multiple          | 0.7650   |
| parallel          | 0.5200   |
| parallel_multiple | 0.4750   |
| irrelevance       | 0.4333   |
| multi_turn_base   | 0.0100   |
Overview
HyperCLOVA-X-SEED-Vision-Instruct-3B-Llamafied is based on a model developed by NAVER that can understand and generate text. It demonstrates competitive performance on major benchmarks related to Korean language and culture. In addition, it supports a context length of up to 16k tokens, enabling it to handle a wide range of tasks.
Basic Information
- Model Architecture: Transformer-based architecture (Dense Model)
- Number of Parameters: 3.26B
- Input/Output Format: Text / Text (both input and output are in text format)
- Context Length: 16k
- Knowledge Cutoff Date: The model was trained on data prior to August 2024.
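The parameter count and context length above can be sanity-checked directly from the checkpoint. A minimal sketch, assuming the llamafied checkpoint exposes standard Llama-style config fields; the `/path/to/ckpt` placeholder stands for a local download of this model:

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Context length comes straight from the config of the llamafied checkpoint.
config = AutoConfig.from_pretrained("/path/to/ckpt")
print(config.max_position_embeddings)  # expected: ~16k

# Counting parameters requires loading the weights (heavier, but exact).
model = AutoModelForCausalLM.from_pretrained("/path/to/ckpt")
print(f"{sum(p.numel() for p in model.parameters()) / 1e9:.2f}B parameters")  # expected: ~3.26B
```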
Training and Data
The training data for HyperCLOVA-X-SEED-Vision-Instruct-3B-Llamafied consists of diverse sources, including high-quality datasets. The training process was carried out in four main stages:

- Pretraining Stage 1: the model learns from a large volume of documents.
- Pretraining Stage 2: additional training focused on high-quality data.
- Rejection-sampling Fine-Tuning (RFT): enhances the model's knowledge across various domains and its complex reasoning abilities.
- Supervised Fine-Tuning (SFT): improves the model's instruction-following capabilities.

In addition, because smaller models tend to be vulnerable in long-context handling, long-context understanding was reinforced from the pretraining stages through to the SFT stage, enabling the model to stably support context lengths of up to 16k tokens.
Huggingface Usage Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("/path/to/ckpt")
tokenizer = AutoTokenizer.from_pretrained("/path/to/ckpt")

chat = [
    {"role": "system", "content": "- The name of the AI language model is \"CLOVA X\" and it was created by NAVER.\n- Today is April 24, 2025 (Thursday)."},
    {"role": "user", "content": "Explain the relationship between the Schrödinger equation and quantum mechanics in as much detail as possible."},
]

# Render the chat with the model's chat template, then generate a reply.
inputs = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_dict=True, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=1024, stop_strings=["<|endofturn|>", "<|stop|>"], tokenizer=tokenizer)
print(tokenizer.batch_decode(output_ids))
```
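Since this variant adds a Hermes-style function-calling prompt, tool schemas can also be exercised locally through the chat template. A minimal sketch, continuing from the example above (reusing `model` and `tokenizer`) and assuming the bundled chat template accepts the standard `tools=` argument of `apply_chat_template`; the `get_current_time` schema is illustrative:

```python
# Illustrative tool schema (not shipped with the checkpoint).
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "Return the current time in a given IANA timezone.",
            "parameters": {
                "type": "object",
                "properties": {"timezone": {"type": "string", "description": "e.g. Asia/Seoul"}},
                "required": ["timezone"],
            },
        },
    }
]

chat = [{"role": "user", "content": "What time is it in Seoul right now?"}]

# If the template supports tools, the schema is injected into the Hermes-style prompt.
inputs = tokenizer.apply_chat_template(
    chat, tools=tools, add_generation_prompt=True, return_dict=True, return_tensors="pt"
)
output_ids = model.generate(**inputs, max_new_tokens=256, stop_strings=["<|endofturn|>", "<|stop|>"], tokenizer=tokenizer)
# The reply should contain a Hermes-style tool call if the model decides to call the tool.
print(tokenizer.batch_decode(output_ids))
```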