Question about the chat template which ignores add_generation_prompt
#12 opened 17 days ago by xukp20

How much VRAM/RAM is required to load this model? (1 reply)
#11 opened 3 months ago by dpkirchner

Training time
#10 opened 3 months ago by iHaag

4-bit LLaDA model
#9 opened 4 months ago by chentianqi

Impressive work
#8 opened 4 months ago by Daemontatox

What part of the code is diffusion? (1 reply)
#6 opened 4 months ago by fblgit

Model performance (2 replies)
#5 opened 4 months ago by icoicqico

That is awesome! (2 replies)
#4 opened 4 months ago by owao

Has anybody been able to run their chat.py model on a Mac? (8 replies)
#3 opened 4 months ago by neodymion

GGUF? (7 replies)
#2 opened 4 months ago by AlgorithmicKing

Add library_name and pipeline_tag to model card (1 reply)
#1 opened 4 months ago by nielsr
