Tell me how you feel about this model without telling me how you feel about this model

#5 by MrDevolver

I'll start...

[image]

111B is quite a good size, actually. It's possible to run a good quant with a decent context length on just four 24 GB consumer-grade GPUs. And it's even a bit smaller than Mistral Large 123B and Pixtral Large 124B, which I use daily.
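For anyone wondering about the arithmetic behind the "four 24 GB GPUs" claim, here's a rough back-of-envelope sketch. The bits-per-weight and KV-cache numbers are assumptions for illustration, not measurements:

```python
# Back-of-envelope VRAM estimate for a quantized 111B-parameter model
# on four 24 GB GPUs. Figures below are illustrative assumptions.

PARAMS_B = 111          # model size in billions of parameters
BITS_PER_WEIGHT = 4.5   # a ~4-5 bpw quant (assumption)
KV_CACHE_GB = 12        # KV cache + activation budget for a long context (assumption)
GPUS = 4
VRAM_PER_GPU_GB = 24

weights_gb = PARAMS_B * 1e9 * BITS_PER_WEIGHT / 8 / 1e9
total_needed_gb = weights_gb + KV_CACHE_GB
total_available_gb = GPUS * VRAM_PER_GPU_GB

print(f"quantized weights: ~{weights_gb:.0f} GB")
print(f"with KV cache:     ~{total_needed_gb:.0f} GB")
print(f"4x24 GB gives:      {total_available_gb} GB -> fits: {total_needed_gb < total_available_gb}")
```

At those assumed numbers the weights come to roughly 62 GB and the total to about 74 GB, comfortably under the 96 GB the four cards provide.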

Haha, yeah, let me just grab the four spare 24 GB consumer-grade GPUs from my garage; they'd just collect dust otherwise... 🤣

"C4AI Command A is an open weights research release"

...is cc-by-nc.

Calling yet another cc-by-nc (except for the for-profit parent org) release "open weights" is a pretty good joke by co$here.

"open weights research" whats your issue
