Update InternVL3_5 1B Q8 vs InternVL3_5 2B Q4.md
InternVL3_5 1B Q8 vs InternVL3_5 2B Q4.md
CHANGED
@@ -30,6 +30,6 @@ llama-server --host 0.0.0.0 --port 8000 --no-mmap -c 32768 -ub 4096 --temp 0.6 -
 | Psychology | 36.22% | 46.24% |
 | Other | 22.62% | 32.68% |
 
-### Scripts used:
+### Scripts used (I used the default settings, only changing the temperature to 0.6 and the top_p to 0.95):
 - https://github.com/chigkim/Ollama-MMLU-Pro
 - https://github.com/chigkim/openai-api-gpqa
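
For reference, both benchmark scripts send requests to the OpenAI-compatible API exposed by the llama-server command shown in the hunk header. Below is a minimal sketch of an equivalent single request using the sampling settings from the updated heading (temperature 0.6, top_p 0.95), assuming the server is listening on port 8000 as in that command; the model name and prompt are placeholders, not taken from this file:

```python
import requests

# One chat completion request against llama-server's OpenAI-compatible endpoint,
# using the same sampling settings as the benchmark runs (temperature 0.6, top_p 0.95).
payload = {
    # Placeholder name: a single-model llama-server answers with whatever model it was launched with.
    "model": "InternVL3_5-2B-Q4",
    "messages": [
        {"role": "user", "content": "What is the capital of France? Answer with the city name only."}
    ],
    "temperature": 0.6,
    "top_p": 0.95,
}

resp = requests.post("http://localhost:8000/v1/chat/completions", json=payload, timeout=300)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```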