Big Deeper (BigDeeper)
AI & ML interests
Differentiable hashing, orthonormal polynomial language modeling, image compression into language representations.
Recent Activity
New activity 12 days ago in unsloth/medgemma-27b-text-it: "Says image-text to text"
New activity 24 days ago in nvidia/parakeet-tdt-0.6b-v2: "Does this model identify speakers?"
New activity 24 days ago in nvidia/parakeet-tdt-0.6b-v2: "Is the model capable of splitting different speakers?"
Organizations
None yet
BigDeeper's activity
Says image-text to text · #2 opened 12 days ago by BigDeeper
Does this model identify speakers? · 1 reaction · 8 replies · #16 opened about 1 month ago by SouravAhmed
Is the model capable of splitting different speakers? · 1 reaction · 1 reply · #29 opened 24 days ago by BigDeeper
Very large RAM footprint. · 4 replies · #1 opened 5 months ago by BigDeeper
The q8_0 version appears to generate indefinitely. · 6 replies · #1 opened 4 months ago by BigDeeper
Having a problem: Unable to find a suitable output format for 'video_out.mp4'. · #1 opened 5 months ago by BigDeeper
Any ideas how to mitigate this problem? · #3 opened 5 months ago by BigDeeper
Longer video? · 6 replies · #25 opened 6 months ago by BigDeeper
What is the minimum VRAM it requires? · 12 replies · #18 opened 7 months ago by DrNicefellow
VS Code + Cline + Ollama + Qwen2.5-Coder-32B-Instruct.Q8_0 · 3 replies · #20 opened 7 months ago by BigDeeper
ComfyUI does not recognize model files in sft format · 4 reactions · 5 replies · #18 opened 10 months ago by peidong
Are there advantages or disadvantages to changing the format for translation? · 3 replies · #10 opened 11 months ago by BigDeeper
What does 120B really mean? · 3 replies · #1 opened about 1 year ago by BigDeeper
Does anyone know which specific Python library contains the tokenizer that was used to train Llama-3-70b? · 1 reaction · 2 replies · #11 opened about 1 year ago by BigDeeper
15 TeraTokens = 190 Million books · 2 replies · #4 opened about 1 year ago by Languido
Has anyone tried this GGUF with an agentic framework? · 3 replies · #6 opened about 1 year ago by BigDeeper
gguf · 30 replies · #24 opened about 1 year ago by LaferriereJC
How did you manage to produce GGUF files when llama.cpp/convert.py gives an error about the RoPE encoding? · 4 replies · #1 opened about 1 year ago by BigDeeper
Do they work with Ollama? How was the conversion done for 128K? llama.cpp/convert.py complains about RoPE. · 8 replies · #2 opened about 1 year ago by BigDeeper