Saxo committed (verified)
Commit 1b90b2a · Parent(s): 807f21a

Update README.md

Files changed (1): README.md (+16 −0)

README.md CHANGED
@@ -33,6 +33,22 @@ AI 와 빅데이터 분석 전문 기업인 Linkbricks의 데이터사이언티
  -Deepspeed Stage=3, using rslora and BAdam Layer Mode<br>
  -ollama run benedict/linkbricks-gemma2-korean:27b
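The `ollama run` command above assumes the model has been pulled locally. Once the Ollama server is running, the same model can also be queried programmatically through Ollama's local HTTP API at `localhost:11434`. A minimal sketch (the helper names and prompt text are illustrative, not part of this repository):

```python
import json
import urllib.request

MODEL = "benedict/linkbricks-gemma2-korean:27b"  # model tag from the README

def build_request(prompt: str, model: str = MODEL) -> dict:
    """Assemble the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str) -> str:
    """Send a single non-streaming generation request to a local Ollama server."""
    body = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server with the model pulled):
#   print(generate("한국어로 자기소개를 해 주세요."))
```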
 
+ <br><br>
+ Benchmark (Open Ko-LLM Leaderboard Season 2 : No. 1)<br>
+ Model : Saxo/Linkbricks-Horizon-AI-Korean-Gemma-2-sft-dpo-27B<br>
+ Average : 51.37<br>
+ Ko-GPQA : 25.25<br>
+ Ko-Winogrande : 68.27<br>
+ Ko-GSM8k : 70.96<br>
+ Ko-EQ Bench : 50.25<br>
+ Ko-IFEval : 49.84<br>
+ KorNAT-CKA : 34.59<br>
+ KorNAT-SVA : 48.42<br>
+ Ko-Harmlessness : 65.66<br>
+ Ko-Helpfulness : 49.12<br>
+
+ <br><br>
+
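As a quick sanity check, the reported Average matches the unweighted mean of the nine benchmark scores above (assuming the leaderboard averages them equally, which is not stated here):

```python
# Benchmark scores as listed in the README; the average is assumed to be
# the plain mean of these nine values.
scores = {
    "Ko-GPQA": 25.25,
    "Ko-Winogrande": 68.27,
    "Ko-GSM8k": 70.96,
    "Ko-EQ Bench": 50.25,
    "Ko-IFEval": 49.84,
    "KorNAT-CKA": 34.59,
    "KorNAT-SVA": 48.42,
    "Ko-Harmlessness": 65.66,
    "Ko-Helpfulness": 49.12,
}
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # 51.37, matching the reported Average
```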
  Dr. Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big data analytics, fine-tuned the gemma-2-27b-it base model with SFT->DPO using four H100-80G GPUs on KT-CLOUD.
  It is a Korean language model trained on Korean-Chinese-English-Japanese cross-training data and logic data to handle complex Korean reasoning problems; the tokenizer is that of the base model, with no vocabulary expansion.
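The training notes above mention rslora. Rank-stabilized LoRA (rsLoRA) differs from classic LoRA only in how the low-rank update is scaled: `alpha / sqrt(r)` instead of `alpha / r`, so the update magnitude decays more slowly as the rank grows. A minimal sketch of the two scalings (the function names are illustrative):

```python
import math

def lora_scale(alpha: float, r: int) -> float:
    """Classic LoRA: the low-rank update B @ A is multiplied by alpha / r."""
    return alpha / r

def rslora_scale(alpha: float, r: int) -> float:
    """rsLoRA: alpha / sqrt(r), which stays larger at high ranks."""
    return alpha / math.sqrt(r)

# At rank 64 with alpha=16: classic scaling is 0.25, rsLoRA scaling is 2.0.
print(lora_scale(16, 64), rslora_scale(16, 64))
```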