request for ollama

#17
by ainz - opened

Can you please add it to Ollama? Thanks!

ollama run deepseek-ai/DeepSeek-V3-0324
pulling manifest
Error: pull model manifest: file does not exist

There are already community GGUF quants you can pull straight from Hugging Face:

https://huggingface.co/models?other=base_model:quantized:deepseek-ai/DeepSeek-V3-0324

ollama run hf.co/unsloth/DeepSeek-V3-0324-GGUF:Q2_K
ollama run hf.co/unsloth/DeepSeek-V3-0324-GGUF:Q2_K_XL
ollama run hf.co/unsloth/DeepSeek-V3-0324-GGUF:Q3_K_M
ollama run hf.co/unsloth/DeepSeek-V3-0324-GGUF:Q4_K_M
ollama run hf.co/unsloth/DeepSeek-V3-0324-GGUF:Q5_K_M
ollama run hf.co/unsloth/DeepSeek-V3-0324-GGUF:Q6_K
ollama run hf.co/unsloth/DeepSeek-V3-0324-GGUF:Q8_0


ollama run hf.co/MaziyarPanahi/DeepSeek-V3-0324-GGUF:Q2_K

ollama run hf.co/lmstudio-community/DeepSeek-V3-0324-GGUF:Q3_K_L
ollama run hf.co/lmstudio-community/DeepSeek-V3-0324-GGUF:Q4_K_M
ollama run hf.co/lmstudio-community/DeepSeek-V3-0324-GGUF:Q6_K
ollama run hf.co/lmstudio-community/DeepSeek-V3-0324-GGUF:Q8_0

Pulling that one directly fails at the moment because the repo contains sharded GGUF:

ollama run hf.co/lmstudio-community/DeepSeek-V3-0324-GGUF:Q4_K_M

pulling manifest
Error: pull model manifest: 400: The specified repository contains sharded GGUF. Ollama does not support this yet. Follow this issue for more info: https://github.com/ollama/ollama/issues/5245
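If you want to run one of those repos before Ollama adds sharded-GGUF support, a possible workaround (untested sketch; the shard file names and the gguf-split binary name are assumptions, check your llama.cpp build and the repo's file list) is to download the shards, merge them with llama.cpp's gguf-split tool, and create a local model from the merged file:

# 1. Download just one quant's shards (the include pattern is an example)
huggingface-cli download unsloth/DeepSeek-V3-0324-GGUF --include "*Q4_K_M*" --local-dir ./DeepSeek-V3-0324-Q4_K_M

# 2. Merge the shards into a single GGUF (binary may be gguf-split or llama-gguf-split)
llama-gguf-split --merge ./DeepSeek-V3-0324-Q4_K_M/DeepSeek-V3-0324-Q4_K_M-00001-of-00009.gguf ./DeepSeek-V3-0324-Q4_K_M.gguf

# 3. Point a local Modelfile at the merged file and create the model
printf 'FROM ./DeepSeek-V3-0324-Q4_K_M.gguf\n' > Modelfile
ollama create deepseek-v3-0324:q4_k_m -f ./Modelfile
ollama run deepseek-v3-0324:q4_k_m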

There is also a community upload of this model on the Ollama registry:

https://ollama.com/lordoliver/DeepSeek-V3-0324:671b-q8_0

ollama run lordoliver/DeepSeek-V3-0324:671b-q8_0

EDIT: This runs on my machine, but it keeps talking about YouTube scripts unprompted. Might be a personalized fine-tune? An official version would be nice.
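A quick way to check whether a custom system prompt is baked into the upload (that is only a guess at the cause) is to grep the Modelfile Ollama has for it:

ollama show lordoliver/DeepSeek-V3-0324:671b-q8_0 --modelfile | grep -A5 -i '^SYSTEM'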
EDIT2: It works if you take the Modelfile from the original deepseek-v3 and use that; I think lordoliver is using a custom prompt. Here's how to do it (this only works if you also have deepseek-v3:671b-q8_0 downloaded):

ollama stop lordoliver/DeepSeek-V3-0324:671b-q8_0
mkdir /data/DeepSeek-V3-0324
cd /data/DeepSeek-V3-0324
ollama show deepseek-v3:671b-q8_0 --modelfile > Modelfile.deepseek
ollama show lordoliver/DeepSeek-V3-0324:671b-q8_0 --modelfile > Modelfile.old
cp Modelfile.deepseek Modelfile
vim Modelfile.old
# inside vim: yank the FROM line, open Modelfile in a new tab with :tabe Modelfile,
# replace its FROM line with the one from Modelfile.old, then save both files
ollama create DeepSeek-V3-0324:671b-q8_0 -f ./Modelfile
ollama run DeepSeek-V3-0324:671b-q8_0
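If you'd rather not do the vim step by hand, the same edit can be scripted (untested sketch; it assumes the FROM line in Modelfile.old is a plain blob path with no characters that are special to sed):

# take the FROM line from lordoliver's Modelfile and splice it into the official one
FROM_LINE=$(grep -m1 '^FROM ' Modelfile.old)
sed "s|^FROM .*|$FROM_LINE|" Modelfile.deepseek > Modelfile
ollama create DeepSeek-V3-0324:671b-q8_0 -f ./Modelfile
ollama run DeepSeek-V3-0324:671b-q8_0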

EDIT3: It was just pointed out that 0324 is 685B while lordoliver's model is tagged 671B, so is this really 0324? I asked the unsloth community here: https://huggingface.co/unsloth/DeepSeek-V3-0324-GGUF/discussions/8
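As a local sanity check, ollama show prints the parameter count it reads from the GGUF metadata, which you can compare against the figure on the Hugging Face model card:

ollama show lordoliver/DeepSeek-V3-0324:671b-q8_0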
