Mungert committed on
Commit 2087325 · verified · 0 Parent(s):

Super-squash history to reclaim storage

Files changed (43)
  1. .gitattributes +89 -0
  2. EXAONE-Deep-32B-F16-00001-of-00002.gguf +3 -0
  3. EXAONE-Deep-32B-F16-00002-of-00002.gguf +3 -0
  4. EXAONE-Deep-32B-bf16-q4_k.gguf +3 -0
  5. EXAONE-Deep-32B-bf16-q6_k.gguf +3 -0
  6. EXAONE-Deep-32B-bf16-q8_0.gguf +3 -0
  7. EXAONE-Deep-32B-bf16_q8_0.gguf +3 -0
  8. EXAONE-Deep-32B-f16-q4_k.gguf +3 -0
  9. EXAONE-Deep-32B-f16-q6_k.gguf +3 -0
  10. EXAONE-Deep-32B-f16-q8_0.gguf +3 -0
  11. EXAONE-Deep-32B-f16_q8_0.gguf +3 -0
  12. EXAONE-Deep-32B-iq1_m.gguf +3 -0
  13. EXAONE-Deep-32B-iq1_s.gguf +3 -0
  14. EXAONE-Deep-32B-iq2_m.gguf +3 -0
  15. EXAONE-Deep-32B-iq2_s.gguf +3 -0
  16. EXAONE-Deep-32B-iq2_xs.gguf +3 -0
  17. EXAONE-Deep-32B-iq2_xxs.gguf +3 -0
  18. EXAONE-Deep-32B-iq3_m.gguf +3 -0
  19. EXAONE-Deep-32B-iq3_s.gguf +3 -0
  20. EXAONE-Deep-32B-iq3_xs.gguf +3 -0
  21. EXAONE-Deep-32B-iq3_xxs.gguf +3 -0
  22. EXAONE-Deep-32B-iq4_nl.gguf +3 -0
  23. EXAONE-Deep-32B-iq4_xs.gguf +3 -0
  24. EXAONE-Deep-32B-q2_k_m.gguf +3 -0
  25. EXAONE-Deep-32B-q2_k_s.gguf +3 -0
  26. EXAONE-Deep-32B-q3_k_m.gguf +3 -0
  27. EXAONE-Deep-32B-q3_k_s.gguf +3 -0
  28. EXAONE-Deep-32B-q4_0.gguf +3 -0
  29. EXAONE-Deep-32B-q4_1.gguf +3 -0
  30. EXAONE-Deep-32B-q4_k_m.gguf +3 -0
  31. EXAONE-Deep-32B-q4_k_s.gguf +3 -0
  32. EXAONE-Deep-32B-q5_0.gguf +3 -0
  33. EXAONE-Deep-32B-q5_1.gguf +3 -0
  34. EXAONE-Deep-32B-q5_k_m.gguf +3 -0
  35. EXAONE-Deep-32B-q5_k_s.gguf +3 -0
  36. EXAONE-Deep-32B-q6_k_m.gguf +3 -0
  37. EXAONE-Deep-32B-q8_0.gguf +3 -0
  38. EXAONE-Deep-32B.imatrix +3 -0
  39. README.md +402 -0
  40. bf16/EXAONE-Deep-32B-bf16-00001-of-00002.gguf +3 -0
  41. bf16/EXAONE-Deep-32B-bf16-00002-of-00002.gguf +3 -0
  42. f16/EXAONE-Deep-32B-f16-00001-of-00002.gguf +3 -0
  43. f16/EXAONE-Deep-32B-f16-00002-of-00002.gguf +3 -0
.gitattributes ADDED
@@ -0,0 +1,89 @@
1
+ *.7z filter=lfs diff=lfs merge=lfs -text
2
+ *.arrow filter=lfs diff=lfs merge=lfs -text
3
+ *.bin filter=lfs diff=lfs merge=lfs -text
4
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
5
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
6
+ *.ftz filter=lfs diff=lfs merge=lfs -text
7
+ *.gz filter=lfs diff=lfs merge=lfs -text
8
+ *.h5 filter=lfs diff=lfs merge=lfs -text
9
+ *.joblib filter=lfs diff=lfs merge=lfs -text
10
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
11
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
12
+ *.model filter=lfs diff=lfs merge=lfs -text
13
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
14
+ *.npy filter=lfs diff=lfs merge=lfs -text
15
+ *.npz filter=lfs diff=lfs merge=lfs -text
16
+ *.onnx filter=lfs diff=lfs merge=lfs -text
17
+ *.ot filter=lfs diff=lfs merge=lfs -text
18
+ *.parquet filter=lfs diff=lfs merge=lfs -text
19
+ *.pb filter=lfs diff=lfs merge=lfs -text
20
+ *.pickle filter=lfs diff=lfs merge=lfs -text
21
+ *.pkl filter=lfs diff=lfs merge=lfs -text
22
+ *.pt filter=lfs diff=lfs merge=lfs -text
23
+ *.pth filter=lfs diff=lfs merge=lfs -text
24
+ *.rar filter=lfs diff=lfs merge=lfs -text
25
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
26
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
27
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
28
+ *.tar filter=lfs diff=lfs merge=lfs -text
29
+ *.tflite filter=lfs diff=lfs merge=lfs -text
30
+ *.tgz filter=lfs diff=lfs merge=lfs -text
31
+ *.wasm filter=lfs diff=lfs merge=lfs -text
32
+ *.xz filter=lfs diff=lfs merge=lfs -text
33
+ *.zip filter=lfs diff=lfs merge=lfs -text
34
+ *.zst filter=lfs diff=lfs merge=lfs -text
35
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
36
+ EXAONE-Deep-32B-F16-00001-of-00002.gguf filter=lfs diff=lfs merge=lfs -text
37
+ EXAONE-Deep-32B-F16-00002-of-00002.gguf filter=lfs diff=lfs merge=lfs -text
38
+ EXAONE-Deep-32B-f16-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
39
+ EXAONE-Deep-32B-bf16-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
40
+ EXAONE-Deep-32B-f16-q6_k.gguf filter=lfs diff=lfs merge=lfs -text
41
+ EXAONE-Deep-32B-bf16-q6_k.gguf filter=lfs diff=lfs merge=lfs -text
42
+ EXAONE-Deep-32B-f16-q4_k.gguf filter=lfs diff=lfs merge=lfs -text
43
+ EXAONE-Deep-32B-bf16-q4_k.gguf filter=lfs diff=lfs merge=lfs -text
44
+ EXAONE-Deep-32B-q2_k_l.gguf filter=lfs diff=lfs merge=lfs -text
45
+ EXAONE-Deep-32B-q3_k_l.gguf filter=lfs diff=lfs merge=lfs -text
46
+ EXAONE-Deep-32B-q4_k_l.gguf filter=lfs diff=lfs merge=lfs -text
47
+ EXAONE-Deep-32B-q5_k_l.gguf filter=lfs diff=lfs merge=lfs -text
48
+ EXAONE-Deep-32B-q6_k_l.gguf filter=lfs diff=lfs merge=lfs -text
49
+ EXAONE-Deep-32B-q3_k_m.gguf filter=lfs diff=lfs merge=lfs -text
50
+ EXAONE-Deep-32B-q3_k_s.gguf filter=lfs diff=lfs merge=lfs -text
51
+ EXAONE-Deep-32B-q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text
52
+ EXAONE-Deep-32B-q4_k_s.gguf filter=lfs diff=lfs merge=lfs -text
53
+ EXAONE-Deep-32B-q5_k_s.gguf filter=lfs diff=lfs merge=lfs -text
54
+ EXAONE-Deep-32B-q5_k_m.gguf filter=lfs diff=lfs merge=lfs -text
55
+ EXAONE-Deep-32B-q6_k_m.gguf filter=lfs diff=lfs merge=lfs -text
56
+ EXAONE-Deep-32B-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
57
+ EXAONE-Deep-32B-iq4_xs.gguf filter=lfs diff=lfs merge=lfs -text
58
+ EXAONE-Deep-32B-iq3_xs.gguf filter=lfs diff=lfs merge=lfs -text
59
+ EXAONE-Deep-32B-iq4_nl.gguf filter=lfs diff=lfs merge=lfs -text
60
+ EXAONE-Deep-32B-q4_0.gguf filter=lfs diff=lfs merge=lfs -text
61
+ EXAONE-Deep-32B-q4_1.gguf filter=lfs diff=lfs merge=lfs -text
62
+ EXAONE-Deep-32B-q4_0_l.gguf filter=lfs diff=lfs merge=lfs -text
63
+ EXAONE-Deep-32B-q4_1_l.gguf filter=lfs diff=lfs merge=lfs -text
64
+ EXAONE-Deep-32B-q5_0.gguf filter=lfs diff=lfs merge=lfs -text
65
+ EXAONE-Deep-32B-q5_1.gguf filter=lfs diff=lfs merge=lfs -text
66
+ EXAONE-Deep-32B-q5_0_l.gguf filter=lfs diff=lfs merge=lfs -text
67
+ EXAONE-Deep-32B-q5_1_l.gguf filter=lfs diff=lfs merge=lfs -text
68
+ EXAONE-Deep-32B-iq2_xs.gguf filter=lfs diff=lfs merge=lfs -text
69
+ EXAONE-Deep-32B-iq2_xxs.gguf filter=lfs diff=lfs merge=lfs -text
70
+ EXAONE-Deep-32B-iq2_s.gguf filter=lfs diff=lfs merge=lfs -text
71
+ EXAONE-Deep-32B-iq2_m.gguf filter=lfs diff=lfs merge=lfs -text
72
+ EXAONE-Deep-32B-iq1_s.gguf filter=lfs diff=lfs merge=lfs -text
73
+ EXAONE-Deep-32B-iq1_m.gguf filter=lfs diff=lfs merge=lfs -text
74
+ EXAONE-Deep-32B-q2_k_s.gguf filter=lfs diff=lfs merge=lfs -text
75
+ EXAONE-Deep-32B-iq3_xxs.gguf filter=lfs diff=lfs merge=lfs -text
76
+ EXAONE-Deep-32B-iq3_s.gguf filter=lfs diff=lfs merge=lfs -text
77
+ EXAONE-Deep-32B-iq3_m.gguf filter=lfs diff=lfs merge=lfs -text
78
+ EXAONE-Deep-32B.imatrix filter=lfs diff=lfs merge=lfs -text
79
+ f16/EXAONE-Deep-32B-f16-00001-of-00002.gguf filter=lfs diff=lfs merge=lfs -text
80
+ f16/EXAONE-Deep-32B-f16-00002-of-00002.gguf filter=lfs diff=lfs merge=lfs -text
81
+ EXAONE-Deep-32B-f16_q8_0.gguf filter=lfs diff=lfs merge=lfs -text
82
+ EXAONE-Deep-32B-bf16_q8_0.gguf filter=lfs diff=lfs merge=lfs -text
83
+ EXAONE-Deep-32B-f16_q6_k.gguf filter=lfs diff=lfs merge=lfs -text
84
+ EXAONE-Deep-32B-bf16_q6_k.gguf filter=lfs diff=lfs merge=lfs -text
85
+ EXAONE-Deep-32B-f16_q4_k.gguf filter=lfs diff=lfs merge=lfs -text
86
+ EXAONE-Deep-32B-bf16_q4_k.gguf filter=lfs diff=lfs merge=lfs -text
87
+ EXAONE-Deep-32B-q2_k_m.gguf filter=lfs diff=lfs merge=lfs -text
88
+ bf16/EXAONE-Deep-32B-bf16-00001-of-00002.gguf filter=lfs diff=lfs merge=lfs -text
89
+ bf16/EXAONE-Deep-32B-bf16-00002-of-00002.gguf filter=lfs diff=lfs merge=lfs -text
EXAONE-Deep-32B-F16-00001-of-00002.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:efafb2970b4994b98fb2ef1e457bbf9b0399863f91516e8c7cf9a73d7c37f1f8
3
+ size 45902462976
EXAONE-Deep-32B-F16-00002-of-00002.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a13eef88c8edffb2112219b93be1b5565c64a375918d866a6ad80bd0dd4a5873
3
+ size 18109475904
EXAONE-Deep-32B-bf16-q4_k.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a8b139698afe45facc5743efae20ed1c0a46350e77f3247cfc3ab60a8b30aab6
3
+ size 20715908448
EXAONE-Deep-32B-bf16-q6_k.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:58c14683c6f4840072320c65d7df71b79b50ddfc6fcbadfdba833b5dcc8c3476
3
+ size 27495935328
EXAONE-Deep-32B-bf16-q8_0.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:532efd547c1ac68b5f22e06e35d4dd88178497f93da0d359378feb4f74bae245
3
+ size 34992598080
EXAONE-Deep-32B-bf16_q8_0.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:74886fce5e0c87440fdc228f04a6cb71feaeb9fac6c69b4480229bb32960a391
3
+ size 45294857472
EXAONE-Deep-32B-f16-q4_k.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e4d9f84884898f822f6cf6662e0813a0ecbf2e1a719cafeedcc823972a2b04d2
3
+ size 20715908448
EXAONE-Deep-32B-f16-q6_k.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:41a1441b477907784590edb3899f4def268c76f6f296a15a3d8e35bd55b15be2
3
+ size 27495935328
EXAONE-Deep-32B-f16-q8_0.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:05ff2d420e5ffa094b630223a1a5b235489a9fd734d929139080d8b0f6b2c5a7
3
+ size 34992598080
EXAONE-Deep-32B-f16_q8_0.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:542d73f19b318c90a784828f55dbdf3048b28f005bde297b16de567b05516881
3
+ size 45294857472
EXAONE-Deep-32B-iq1_m.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d38fbde6a9cf2e13076019c27011bbb6ce77d45a08c7a635570c9e859c652d51
3
+ size 9321389568
EXAONE-Deep-32B-iq1_s.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:576e6b786f2d7961c25f62e30bd559562200ee235a89cd5f42f68201a96e4bfb
3
+ size 8577494528
EXAONE-Deep-32B-iq2_m.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b19648eecb9864a4cc4cb849c1291e8b88bee2b076c5b28f96eecf5d5443aca8
3
+ size 11545169408
EXAONE-Deep-32B-iq2_s.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:35e1607f3ca4b620df22b755bda12685af036fdcaa43aad2d1ea2349da5ad577
3
+ size 10946825728
EXAONE-Deep-32B-iq2_xs.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bc78331fbcf6473c42a6f0dcbd089ee69dd54138d8206ab03ad68761cb058e6c
3
+ size 10583612928
EXAONE-Deep-32B-iq2_xxs.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5405ce894e9811f0e9135c147b4e04c2a73173a6bd3a6fa5059a4cdc60649988
3
+ size 9699470848
EXAONE-Deep-32B-iq3_m.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f79933011219ff4d4cb6c0eb5bfd4082d6a64a0adb54f8ac7f3e877464de7a74
3
+ size 15074897408
EXAONE-Deep-32B-iq3_s.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f46f6ba7e9740ca8b175f207313e2428d7a5d9b039de96b99aa358af3049d54d
3
+ size 15074897408
EXAONE-Deep-32B-iq3_xs.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3b4de69607c533708a1571e0169c9777ab7d037dd01c4b6d5465161557cf254f
3
+ size 13509672448
EXAONE-Deep-32B-iq3_xxs.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:37e26e94c982d666ff85fda1dcbaa21d8d6796d3ea0f28d4f2086bce42cc1136
3
+ size 13173472768
EXAONE-Deep-32B-iq4_nl.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bf83e46951f9ef73fb1525cd7daeb610336bfedbfce453d9ab22ba9f53035cf2
3
+ size 18185399808
EXAONE-Deep-32B-iq4_xs.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:61f700a7b93d635380ee9d20e32ad134f5137cba9759007ab0bd0ea78be16510
3
+ size 17212190208
EXAONE-Deep-32B-q2_k_m.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:59188b24b53db8f2904b62c69c07880001225f7624362c3686910056d050d5c2
3
+ size 12078427648
EXAONE-Deep-32B-q2_k_s.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e29e3e72bca831557d9a795249fd3e2eee961713ffcb971bbf598259fe3ff2f7
3
+ size 11939163648
EXAONE-Deep-32B-q3_k_m.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:341bc83d42a57c719b3f2188c23de15c63d4481303ba3cba3d7a1b44b6ab1e36
3
+ size 15586078208
EXAONE-Deep-32B-q3_k_s.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1550208e4fd9aa990a958a0989b2cee7a49c366b2958ecf215a35d032986dc67
3
+ size 15446814208
EXAONE-Deep-32B-q4_0.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:640d4cbead8707d00ea10c11929a283e9b1370d40c29e3e4c5ab276357550493
3
+ size 18008288768
EXAONE-Deep-32B-q4_1.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e6cff4f1663cc88abd1c5bbd90811936d561a231dbc8d60e77a9f3cb1a760a68
3
+ size 20008447488
EXAONE-Deep-32B-q4_k_m.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:efc6c3870c7222158095fefbd88ab2776f3df7b13de1ee73c7abdd6d84311860
3
+ size 19254271488
EXAONE-Deep-32B-q4_k_s.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:86df443c2cca960841f5cf4fe82dfd6b760895e45fb03e4ba5aff6027213f8e1
3
+ size 18596638208
EXAONE-Deep-32B-q5_0.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e7820b73a67481300b2d824c3439ce3daf308be67684adf28ed479a8ba53ec63
3
+ size 22008606208
EXAONE-Deep-32B-q5_1.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a03f5688b10ab35496d2dc107f096458483533a98968fe005c616d8fe0ccf047
3
+ size 24008764928
EXAONE-Deep-32B-q5_k_m.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2de4361d9dc3c0b7063c0202408e0d489a4a16bf71566c83525828a75371fe3c
3
+ size 22814944768
EXAONE-Deep-32B-q5_k_s.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f3d0bddb6a772462adf0d5f4b8942b70a09ea926d840f357f7e97599290c27bc
3
+ size 22467132928
EXAONE-Deep-32B-q6_k_m.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:77b436ec85d82b3d16dc3e7b40f7126bd9093f9a8afc82c5aea5bf7c6d779b09
3
+ size 26258943488
EXAONE-Deep-32B-q8_0.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:096d4957f1807c00f2b3d7ebbe3ae726ad5f3885b1fc54d6adc6b265638c901f
3
+ size 34009558272
EXAONE-Deep-32B.imatrix ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f996c9aabf42e958d5b71b981011be5e8bf217bb2f5e0977727c8b15d4e862b4
3
+ size 14891562
README.md ADDED
@@ -0,0 +1,402 @@
1
+ ---
2
+ base_model: LGAI-EXAONE/EXAONE-3.5-32B-Instruct
3
+ base_model_relation: finetune
4
+ license: other
5
+ license_name: exaone
6
+ license_link: LICENSE
7
+ language:
8
+ - en
9
+ - ko
10
+ tags:
11
+ - lg-ai
12
+ - exaone
13
+ - exaone-deep
14
+ pipeline_tag: text-generation
15
+ library_name: transformers
16
+ ---
17
+
18
+ # <span style="color: #7FFF7F;">EXAONE-Deep-32B GGUF Models</span>
19
+
20
+
21
+ ## <span style="color: #7F7FFF;">Model Generation Details</span>
22
+
23
+ This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`73e53dc8`](https://github.com/ggerganov/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
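+ 
+ As a rough illustration of that pipeline (not the exact commands used for these files), converting the original checkpoint to a full-precision GGUF with llama.cpp's conversion script might look like the sketch below. The local paths, output name, and dtype are my assumptions.
+ 
+ ```python
+ # Illustrative sketch only -- paths, filenames and dtype are assumptions.
+ import subprocess
+ 
+ LLAMA_CPP = "/path/to/llama.cpp"   # cloned at (or near) the commit noted above
+ MODEL_DIR = "./EXAONE-Deep-32B"    # local snapshot of LGAI-EXAONE/EXAONE-Deep-32B
+ 
+ subprocess.run([
+     "python", f"{LLAMA_CPP}/convert_hf_to_gguf.py", MODEL_DIR,
+     "--outtype", "bf16",
+     "--outfile", "EXAONE-Deep-32B-bf16.gguf",
+ ], check=True)
+ ```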
24
+
25
+
26
+
27
+
28
+
29
+ ---
30
+
31
+ ## <span style="color: #7FFF7F;">Quantization Beyond the IMatrix</span>
32
+
33
+ I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.
34
+
35
+ In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the `--tensor-type` option in `llama.cpp` to manually "bump" important layers to higher precision. You can see the implementation here:
36
+ 👉 [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py)
37
+
38
+ While this does increase model file size, it significantly improves precision for a given quantization level.
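+ 
+ For readers who want to try this themselves, here is a minimal sketch of what such a "bump" looks like when driving `llama-quantize` from Python. The tensor patterns, target types, and file names below are illustrative assumptions, not the exact recipe used for these files; see the linked `tensor_list_builder.py` for the real selection logic.
+ 
+ ```python
+ # Illustrative only: tensor patterns, types and paths are assumptions.
+ import subprocess
+ 
+ bumps = [
+     "output.weight=q8_0",      # e.g. keep the output head at higher precision
+     "token_embd.weight=q8_0",  # and the token embeddings
+ ]
+ cmd = ["./llama-quantize", "--imatrix", "EXAONE-Deep-32B.imatrix"]
+ for b in bumps:
+     cmd += ["--tensor-type", b]
+ cmd += ["EXAONE-Deep-32B-bf16.gguf", "EXAONE-Deep-32B-q4_k_m.gguf", "Q4_K_M"]
+ subprocess.run(cmd, check=True)
+ ```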
39
+
40
+ ### **I'd love your feedback—have you tried this? How does it perform for you?**
41
+
42
+
43
+
44
+
45
+ ---
46
+
47
+ <a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;">
48
+ Click here to get info on choosing the right GGUF model format
49
+ </a>
50
+
51
+ ---
52
+
53
+
54
+
55
+ <!--Begin Original Model Card-->
56
+
57
+
58
+ <p align="center">
59
+ <img src="assets/EXAONE_Symbol+BI_3d.png" width="300" style="margin: 40px auto;">
60
+ <br>
61
+
62
+ # EXAONE-Deep-32B
63
+
64
+ ## Introduction
65
+
66
+ We introduce EXAONE Deep, a family of models ranging from 2.4B to 32B parameters developed and released by LG AI Research, which exhibits superior capabilities in various reasoning tasks including math and coding benchmarks. Evaluation results show that 1) EXAONE Deep **2.4B** outperforms other models of comparable size, 2) EXAONE Deep **7.8B** outperforms not only open-weight models of comparable scale but also the proprietary reasoning model OpenAI o1-mini, and 3) EXAONE Deep **32B** demonstrates competitive performance against leading open-weight models.
67
+
68
+ For more details, please refer to our [documentation](https://arxiv.org/abs/2503.12524), [blog](https://www.lgresearch.ai/news/view?seq=543) and [GitHub](https://github.com/LG-AI-EXAONE/EXAONE-Deep).
69
+
70
+ <p align="center">
71
+ <img src="assets/exaone_deep_overall_performance.png" width="100%" style="margin: 40px auto;">
72
+
73
+ This repository contains the reasoning 32B language model with the following features:
74
+
75
+ - Number of Parameters (without embeddings): 30.95B
76
+ - Number of Layers: 64
77
+ - Number of Attention Heads: GQA with 40 Q-heads and 8 KV-heads
78
+ - Vocab Size: 102,400
79
+ - Context Length: 32,768 tokens
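+ 
+ To put the context length and GQA configuration above in perspective, here is a rough KV-cache estimate. The head dimension of 128 is an assumption on my part (it is not listed above), and an fp16 cache is assumed:
+ 
+ ```python
+ # Rough KV-cache estimate; head_dim=128 and fp16 cache are assumptions.
+ layers, kv_heads, head_dim, ctx, bytes_per_val = 64, 8, 128, 32768, 2
+ kv_cache_bytes = 2 * layers * kv_heads * head_dim * ctx * bytes_per_val  # factor 2 = K and V
+ print(f"{kv_cache_bytes / 2**30:.1f} GiB")  # ~8.0 GiB at the full 32,768-token context
+ ```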
80
+
81
+ ## Quickstart
82
+
83
+ We recommend using `transformers` v4.43.1 or later.
84
+
85
+ Here is the code snippet to run conversational inference with the model:
86
+
87
+ ```python
88
+ import torch
89
+ from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
90
+ from threading import Thread
91
+
92
+ model_name = "LGAI-EXAONE/EXAONE-Deep-32B"
93
+ streaming = True # choose the streaming option
94
+
95
+ model = AutoModelForCausalLM.from_pretrained(
96
+ model_name,
97
+ torch_dtype=torch.bfloat16,
98
+ trust_remote_code=True,
99
+ device_map="auto"
100
+ )
101
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
102
+
103
+ # Choose your prompt:
104
+ # Math example (AIME 2024)
105
+ prompt = r"""Let $x,y$ and $z$ be positive real numbers that satisfy the following system of equations:
106
+ \[\log_2\left({x \over yz}\right) = {1 \over 2}\]\[\log_2\left({y \over xz}\right) = {1 \over 3}\]\[\log_2\left({z \over xy}\right) = {1 \over 4}\]
107
+ Then the value of $\left|\log_2(x^4y^3z^2)\right|$ is $\tfrac{m}{n}$ where $m$ and $n$ are relatively prime positive integers. Find $m+n$.
108
+
109
+ Please reason step by step, and put your final answer within \boxed{}."""
110
+ # Korean MCQA example (CSAT Math 2025)
111
+ prompt = r"""Question : $a_1 = 2$인 수열 $\{a_n\}$과 $b_1 = 2$인 등차수열 $\{b_n\}$이 모든 자연수 $n$에 대하여\[\sum_{k=1}^{n} \frac{a_k}{b_{k+1}} = \frac{1}{2} n^2\]을 만족시킬 때, $\sum_{k=1}^{5} a_k$의 값을 구하여라.
112
+
113
+ Options :
114
+ A) 120
115
+ B) 125
116
+ C) 130
117
+ D) 135
118
+ E) 140
119
+
120
+ Please reason step by step, and you should write the correct option alphabet (A, B, C, D or E) within \\boxed{}."""
121
+
122
+ messages = [
123
+ {"role": "user", "content": prompt}
124
+ ]
125
+ input_ids = tokenizer.apply_chat_template(
126
+ messages,
127
+ tokenize=True,
128
+ add_generation_prompt=True,
129
+ return_tensors="pt"
130
+ )
131
+
132
+ if streaming:
133
+ streamer = TextIteratorStreamer(tokenizer)
134
+ thread = Thread(target=model.generate, kwargs=dict(
135
+ input_ids=input_ids.to("cuda"),
136
+ eos_token_id=tokenizer.eos_token_id,
137
+ max_new_tokens=32768,
138
+ do_sample=True,
139
+ temperature=0.6,
140
+ top_p=0.95,
141
+ streamer=streamer
142
+ ))
143
+ thread.start()
144
+
145
+ for text in streamer:
146
+ print(text, end="", flush=True)
147
+ else:
148
+ output = model.generate(
149
+ input_ids.to("cuda"),
150
+ eos_token_id=tokenizer.eos_token_id,
151
+ max_new_tokens=32768,
152
+ do_sample=True,
153
+ temperature=0.6,
154
+ top_p=0.95,
155
+ )
156
+ print(tokenizer.decode(output[0]))
157
+ ```
158
+
159
+ > ### Note
160
+ > The EXAONE Deep models are trained with an optimized configuration,
161
+ > so we recommend following the [Usage Guideline](#usage-guideline) section to achieve optimal performance.
162
+
163
+ ## Evaluation
164
+
165
+ The following table shows the evaluation results of reasoning tasks such as math and coding. The full evaluation results can be found in the [documentation](https://arxiv.org/abs/2503.12524).
166
+
167
+ <table>
168
+ <tr>
169
+ <th>Models</th>
170
+ <th>MATH-500 (pass@1)</th>
171
+ <th>AIME 2024 (pass@1 / cons@64)</th>
172
+ <th>AIME 2025 (pass@1 / cons@64)</th>
173
+ <th>CSAT Math 2025 (pass@1)</th>
174
+ <th>GPQA Diamond (pass@1)</th>
175
+ <th>Live Code Bench (pass@1)</th>
176
+ </tr>
177
+ <tr>
178
+ <td>EXAONE Deep 32B</td>
179
+ <td>95.7</td>
180
+ <td>72.1 / <strong>90.0</strong></td>
181
+ <td>65.8 / <strong>80.0</strong></td>
182
+ <td><strong>94.5</strong></td>
183
+ <td>66.1</td>
184
+ <td>59.5</td>
185
+ </tr>
186
+ <tr>
187
+ <td>DeepSeek-R1-Distill-Qwen-32B</td>
188
+ <td>94.3</td>
189
+ <td>72.6 / 83.3</td>
190
+ <td>55.2 / 73.3</td>
191
+ <td>84.1</td>
192
+ <td>62.1</td>
193
+ <td>57.2</td>
194
+ </tr>
195
+ <tr>
196
+ <td>QwQ-32B</td>
197
+ <td>95.5</td>
198
+ <td>79.5 / 86.7</td>
199
+ <td><strong>67.1</strong> / 76.7</td>
200
+ <td>94.4</td>
201
+ <td>63.3</td>
202
+ <td>63.4</td>
203
+ </tr>
204
+ <tr>
205
+ <td>DeepSeek-R1-Distill-Llama-70B</td>
206
+ <td>94.5</td>
207
+ <td>70.0 / 86.7</td>
208
+ <td>53.9 / 66.7</td>
209
+ <td>88.8</td>
210
+ <td>65.2</td>
211
+ <td>57.5</td>
212
+ </tr>
213
+ <tr>
214
+ <td>DeepSeek-R1 (671B)</td>
215
+ <td><strong>97.3</strong></td>
216
+ <td><strong>79.8</strong> / 86.7</td>
217
+ <td>66.8 / <strong>80.0</strong></td>
218
+ <td>89.9</td>
219
+ <td><strong>71.5</strong></td>
220
+ <td><strong>65.9</strong></td>
221
+ </tr>
222
+ <tr>
223
+ <th colspan="7" height="30px"></th>
224
+ </tr>
225
+ <tr>
226
+ <td>EXAONE Deep 7.8B</td>
227
+ <td><strong>94.8</strong></td>
228
+ <td><strong>70.0</strong> / <strong>83.3</strong></td>
229
+ <td><strong>59.6</strong> / <strong>76.7</strong></td>
230
+ <td><strong>89.9</strong></td>
231
+ <td><strong>62.6</strong></td>
232
+ <td><strong>55.2</strong></td>
233
+ </tr>
234
+ <tr>
235
+ <td>DeepSeek-R1-Distill-Qwen-7B</td>
236
+ <td>92.8</td>
237
+ <td>55.5 / <strong>83.3</strong></td>
238
+ <td>38.5 / 56.7</td>
239
+ <td>79.7</td>
240
+ <td>49.1</td>
241
+ <td>37.6</td>
242
+ </tr>
243
+ <tr>
244
+ <td>DeepSeek-R1-Distill-Llama-8B</td>
245
+ <td>89.1</td>
246
+ <td>50.4 / 80.0</td>
247
+ <td>33.6 / 53.3</td>
248
+ <td>74.1</td>
249
+ <td>49.0</td>
250
+ <td>39.6</td>
251
+ </tr>
252
+ <tr>
253
+ <td>OpenAI o1-mini</td>
254
+ <td>90.0</td>
255
+ <td>63.6 / 80.0</td>
256
+ <td>54.8 / 66.7</td>
257
+ <td>84.4</td>
258
+ <td>60.0</td>
259
+ <td>53.8</td>
260
+ </tr>
261
+ <tr>
262
+ <th colspan="7" height="30px"></th>
263
+ </tr>
264
+ <tr>
265
+ <td>EXAONE Deep 2.4B</td>
266
+ <td><strong>92.3</strong></td>
267
+ <td><strong>52.5</strong> / <strong>76.7</strong></td>
268
+ <td><strong>47.9</strong> / <strong>73.3</strong></td>
269
+ <td><strong>79.2</strong></td>
270
+ <td><strong>54.3</strong></td>
271
+ <td><strong>46.6</strong></td>
272
+ </tr>
273
+ <tr>
274
+ <td>DeepSeek-R1-Distill-Qwen-1.5B</td>
275
+ <td>83.9</td>
276
+ <td>28.9 / 52.7</td>
277
+ <td>23.9 / 36.7</td>
278
+ <td>65.6</td>
279
+ <td>33.8</td>
280
+ <td>16.9</td>
281
+ </tr>
282
+ </table>
283
+
284
+ ## Deployment
285
+
286
+ EXAONE Deep models can be run with various inference frameworks, such as:
287
+ - `TensorRT-LLM`
288
+ - `vLLM`
289
+ - `SGLang`
290
+ - `llama.cpp`
291
+ - `Ollama`
292
+ - `LM-Studio`
293
+
294
+ Please refer to our [EXAONE Deep GitHub](https://github.com/LG-AI-EXAONE/EXAONE-Deep) for more details about the inference frameworks.
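+ 
+ As a concrete illustration for the GGUF files in this repository, here is a minimal sketch using the `llama-cpp-python` bindings (one of the several options above). The file name and sampling settings are my assumptions; adjust them to the quant you actually download.
+ 
+ ```python
+ from llama_cpp import Llama  # pip install llama-cpp-python
+ 
+ llm = Llama(
+     model_path="EXAONE-Deep-32B-q4_k_m.gguf",  # any quant from this repo (assumed local path)
+     n_ctx=32768,      # full context; reduce if you run out of memory
+     n_gpu_layers=-1,  # offload all layers if a GPU is available
+ )
+ out = llm.create_chat_completion(
+     messages=[{"role": "user", "content": "Please reason step by step: what is 17 * 23?"}],
+     temperature=0.6, top_p=0.95, max_tokens=2048,
+ )
+ print(out["choices"][0]["message"]["content"])
+ ```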
295
+
296
+ ## Quantization
297
+
298
+ We provide the pre-quantized EXAONE Deep models with **AWQ** and several quantization types in **GGUF** format. Please refer to our [EXAONE Deep collection](https://huggingface.co/collections/LGAI-EXAONE/exaone-deep-67d119918816ec6efa79a4aa) to find corresponding quantized models.
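+ 
+ For example, a single quantized file can be fetched with `huggingface_hub`; the repo id and file name below are assumptions, so substitute the repository and quant you actually want:
+ 
+ ```python
+ from huggingface_hub import hf_hub_download
+ 
+ path = hf_hub_download(
+     repo_id="Mungert/EXAONE-Deep-32B-GGUF",   # assumed repo id for this card
+     filename="EXAONE-Deep-32B-q4_k_m.gguf",   # pick the quant you need
+ )
+ print(path)
+ ```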
299
+
300
+ ## Usage Guideline
301
+
302
+ To achieve the expected performance, we recommend using the following configurations:
303
+
304
+ 1. Ensure the model starts its response with `<thought>\n` for reasoning steps. The model's output quality may degrade if you omit it. You can easily apply this by using `tokenizer.apply_chat_template()` with `add_generation_prompt=True`. Please check the example code in the [Quickstart](#quickstart) section.
305
+ 2. The reasoning steps of EXAONE Deep models, enclosed in `<thought>\n...\n</thought>`, usually contain many tokens, so previous reasoning steps may need to be removed in multi-turn situations (see the sketch after this list). The provided tokenizer handles this automatically.
306
+ 3. Avoid using a system prompt; build the instruction into the user prompt.
307
+ 4. Additional instructions help the models reason more deeply, so that the models generate better output.
308
+ - For math problems, the instructions **"Please reason step by step, and put your final answer within \boxed{}."** are helpful.
309
+ - For more information on our evaluation setting including prompts, please refer to our [Documentation](https://arxiv.org/abs/2503.12524).
310
+ 5. In our evaluation, we use `temperature=0.6` and `top_p=0.95` for generation.
311
+ 6. When evaluating the models, it is recommended to test multiple times to assess the expected performance accurately.
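+ 
+ As a sketch of point 2 above: the provided tokenizer already removes earlier reasoning for you, but if you manage the chat history yourself, stripping the `<thought>` block from previous assistant turns might look like this (illustrative only):
+ 
+ ```python
+ import re
+ 
+ def strip_thought(assistant_text: str) -> str:
+     # Drop the <thought>...</thought> block so earlier reasoning
+     # does not consume context in later turns.
+     return re.sub(r"<thought>.*?</thought>\s*", "", assistant_text, flags=re.DOTALL)
+ 
+ reply = "<thought>\nlong chain of reasoning...\n</thought>\nThe answer is 42."
+ print(strip_thought(reply))  # -> "The answer is 42."
+ ```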
312
+
313
+ ## Limitation
314
+
315
+ The EXAONE language model has certain limitations and may occasionally generate inappropriate responses. The language model generates responses based on the output probabilities of tokens, which are determined during training from the training data. While we have made every effort to exclude personal, harmful, and biased information from the training data, some problematic content may still be included, potentially leading to undesirable responses. Please note that text generated by the EXAONE language model does not reflect the views of LG AI Research.
316
+
317
+ - Inappropriate answers may be generated, which contain personal, harmful or other inappropriate information.
318
+ - Biased responses may be generated, which are associated with age, gender, race, and so on.
319
+ - The generated responses rely heavily on statistics from the training data, which can result in the generation of
320
+ semantically or syntactically incorrect sentences.
321
+ - Since the model does not reflect the latest information, the responses may be false or contradictory.
322
+
323
+ LG AI Research strives to reduce potential risks that may arise from EXAONE language models. Users are not allowed
324
+ to engage in any malicious activities (e.g., keying in illegal information) that may induce the creation of inappropriate
325
+ outputs violating LG AI’s ethical principles when using EXAONE language models.
326
+
327
+ ## License
328
+
329
+ The model is licensed under [EXAONE AI Model License Agreement 1.1 - NC](./LICENSE)
330
+
331
+ ## Citation
332
+
333
+ ```
334
+ @article{exaone-deep,
335
+ title={EXAONE Deep: Reasoning Enhanced Language Models},
336
+ author={{LG AI Research}},
337
+ journal={arXiv preprint arXiv:2503.12524},
338
+ year={2025}
339
+ }
340
+ ```
341
+
342
+ ## Contact
343
+ LG AI Research Technical Support: [email protected]
344
+
345
+ <!--End Original Model Card-->
346
+
347
+ ---
348
+
349
+ # <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
350
+
351
+ Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:
352
+
353
+ 👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
354
+
355
+
356
+ The full open-source code for the Quantum Network Monitor Service is available in my GitHub repos (repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models, if you want to do it yourself, in [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder).
357
+
358
+ 💬 **How to test**:
359
+ Choose an **AI assistant type**:
360
+ - `TurboLLM` (GPT-4.1-mini)
361
+ - `HugLLM` (Hugging Face open-source models)
362
+ - `TestLLM` (Experimental CPU-only)
363
+
364
+ ### **What I’m Testing**
365
+ I’m pushing the limits of **small open-source models for AI network monitoring**, specifically:
366
+ - **Function calling** against live network services
367
+ - **How small can a model go** while still handling:
368
+ - Automated **Nmap security scans**
369
+ - **Quantum-readiness checks**
370
+ - **Network Monitoring tasks**
371
+
372
+ 🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads in a Hugging Face Docker space):
373
+ - ✅ **Zero-configuration setup**
374
+ - ⏳ 30s load time (slow inference but **no API costs**). No token limit, as the cost is low.
375
+ - 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!
376
+
377
+ ### **Other Assistants**
378
+ 🟢 **TurboLLM** – Uses **gpt-4.1-mini** :
379
+ - It performs very well, but unfortunately OpenAI charges per token, so token usage is limited.
380
+ - **Create custom cmd processors to run .net code on Quantum Network Monitor Agents**
381
+ - **Real-time network diagnostics and monitoring**
382
+ - **Security Audits**
383
+ - **Penetration testing** (Nmap/Metasploit)
384
+
385
+ 🔵 **HugLLM** – Latest Open-source models:
386
+ - 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.
387
+
388
+ ### 💡 **Example commands you could test**:
389
+ 1. `"Give me info on my website's SSL certificate"`
390
+ 2. `"Check if my server is using quantum-safe encryption for communication"`
391
+ 3. `"Run a comprehensive security audit on my server"`
392
+ 4. `"Create a cmd processor to .. (whatever you want)"` Note: you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .NET code on. This is a very flexible and powerful feature. Use with caution!
393
+
394
+ ### Final Word
395
+
396
+ I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.
397
+
398
+ If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.
399
+
400
+ I'm also open to job opportunities or sponsorship.
401
+
402
+ Thank you! 😊
bf16/EXAONE-Deep-32B-bf16-00001-of-00002.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f5952a9c6c8b993ca236d50bae1068d129fa76a27d221c164fbc7c9ddaf29b4d
3
+ size 45902462976
bf16/EXAONE-Deep-32B-bf16-00002-of-00002.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5830ede65540bd4c28c3e1e3f3c3c6175f805f9b62951b56e1f1abcc8c7c72b6
3
+ size 18109476096
f16/EXAONE-Deep-32B-f16-00001-of-00002.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:180f981cf6ad22233fc199e7c0b4b8f91a7001bebbcd7caaae0dbf2e1d7c26d9
3
+ size 45902462976
f16/EXAONE-Deep-32B-f16-00002-of-00002.gguf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:da70e4ea793e42919021b364134682ef3e2b89a934d2f7f077c89b7a21773e8f
3
+ size 18109476096