Not working.
Well, obviously it won't work if llama.cpp never added support for the T5 architecture.
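A quick way to confirm this kind of mismatch is to look at the model's `config.json` and check the declared architecture against what your llama.cpp build supports. A minimal sketch (the `sample_config` string here is an illustrative stand-in for a real downloaded `config.json`, not taken from this model):

```python
import json

# Illustrative stand-in for a Hugging Face model's config.json.
sample_config = '{"architectures": ["T5ForConditionalGeneration"], "model_type": "t5"}'

cfg = json.loads(sample_config)
# The "architectures"/"model_type" fields tell you what the converter
# would need to handle; if llama.cpp has no mapping for it, conversion
# and loading will fail.
print(cfg["model_type"])         # "t5"
print(cfg["architectures"][0])   # "T5ForConditionalGeneration"
```

If the reported architecture isn't in your llama.cpp version's supported list, no amount of re-downloading the weights will help.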