Is There a Case for Conversation Optimized Tokenizers in Large Language Models? (arXiv:2506.18674)