usbphone's activity
Thanks @lysandre. Apologies if I sounded overly negative; it's a very good goal.
Since I doubt llama.cpp would implement Transformers directly as vLLM did (although that would be nice), perhaps an in-between project is possible using the low-level C API llama.cpp provides. I believe that's how llama-cpp-python and similar projects work.
Then again, I'm not sure how much benefit there would be, but it's food for thought.
"if a model architecture is supported by transformers, you can expect it to be supported in the rest of the ecosystem."
It's interesting, but it seems that regardless of the TL;DR, llama.cpp maintains its own implementations and so won't benefit from any "day-0 support" for new architectures in Transformers. I was briefly hoping llama.cpp would get direct Transformers compatibility.
It's not clear what, if anything, is changing there.
Bartowski! 0.0!!!! You are on double-secret probation for this jinja error!
