---
library_name: transformers
license: apache-2.0
base_model:
  - NousResearch/Hermes-3-Llama-3.1-8B
datasets:
  - Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset
  - Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset
  - >-
    Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface
  - >-
    Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface
  - >-
    Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface
  - Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface
  - >-
    Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface
  - >-
    Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled
  - Saxo/ko-news-corpus-1
  - Saxo/ko-news-corpus-2
  - Saxo/ko-news-corpus-3
  - Saxo/ko-news-corpus-4
  - Saxo/ko-news-corpus-5
  - Saxo/ko-news-corpus-6
  - Saxo/ko-news-corpus-7
  - Saxo/ko-news-corpus-8
  - Saxo/ko-news-corpus-9
  - maywell/ko_Ultrafeedback_binarized
  - youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo
  - lilacai/glaive-function-calling-v2-sharegpt
  - kuotient/gsm8k-ko
language:
  - ko
  - en
  - ja
  - zh
pipeline_tag: text-generation
---

# Model Card

Developed by Dr. Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big data analytics.

This is a Korean base language model continually pre-trained (CPT) from the NousResearch/Hermes-3-Llama-3.1-8B base model on 8x H100-80G GPUs. Roughly 50% of the total parameters were re-tuned on a diverse Korean corpus, including 50 million Korean news articles, and the model can be further customized for specific applications via SFT and DPO.

- Tokenizer: uses the base model's tokenizer without vocabulary expansion
- Enhanced for high-dimensional analysis of math and decision making
- Improved Chain-of-Thought (CoT) performance
- 128k context window
- Supports Korean function calling and tool calling
- Trained with DeepSpeed Stage 3, rsLoRA, and BAdam layer mode
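Since the Hermes-3 base model uses the ChatML conversation format, prompts for this model can presumably be built the same way. A minimal sketch of the prompt construction (this assumes the CPT model keeps the base model's chat template; in practice, prefer `tokenizer.apply_chat_template` from the model's own tokenizer):

```python
# Minimal sketch of building a ChatML-style prompt, as used by the
# Hermes-3 base model. Assumes this CPT model keeps the same template.

def build_chatml_prompt(messages):
    """Format a list of {role, content} dicts as a ChatML prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Leave the assistant turn open so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    # "You are a helpful Korean assistant."
    {"role": "system", "content": "당신은 유용한 한국어 어시스턴트입니다."},
    # "What kind of company is Linkbricks?"
    {"role": "user", "content": "링크브릭스는 어떤 회사인가요?"},
]

prompt = build_chatml_prompt(messages)
# `prompt` would then be tokenized and passed to the model for generation.
```

When loading the model with `transformers`, the same structure is obtained directly from the bundled chat template, which avoids hard-coding the special tokens shown here.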


www.linkbricks.com, www.linkbricks.vc
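The Korean function/tool-calling support mentioned above presumably follows the Hermes tool-use convention, in which the model emits a JSON object wrapped in `<tool_call>` tags. A minimal sketch of extracting such calls from model output (the tag format and the `get_weather` function are assumptions for illustration; verify against this model's actual outputs):

```python
import json
import re

# Hermes-style models emit tool invocations as JSON inside <tool_call> tags.
# This assumes the CPT model keeps the base model's convention.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def extract_tool_calls(text):
    """Return a list of parsed tool-call dicts found in model output."""
    return [json.loads(m) for m in TOOL_CALL_RE.findall(text)]

# Hypothetical model output: Korean text followed by a tool call.
# ("I will check the weather in Seoul.")
output = (
    "서울의 날씨를 확인하겠습니다.\n"
    '<tool_call>{"name": "get_weather", "arguments": {"city": "Seoul"}}</tool_call>'
)

calls = extract_tool_calls(output)
# calls[0] → {"name": "get_weather", "arguments": {"city": "Seoul"}}
```

The tool results would then be fed back to the model in a follow-up turn so it can compose the final Korean answer.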