Thanks for not grossly overfitting this model.

#4
by phil111 - opened

Other recent models, including Qwen3 34b, focused their training on benchmark-boosting domains like coding, math, and STEM, and are otherwise PROFOUNDLY ignorant when it comes to all other popular domains of knowledge, such as top movies, music, games, and TV shows.

For example, Qwen3 got the entire cast of the most watched Canadian TV show in history (Corner Gas) wrong, as well as most of the casts of some of the most watched TV shows globally (e.g. Two and a Half Men). The same goes for the most popular music, movies, games...

This model, by contrast, did notably better than Qwen3 34b when it comes to broad knowledge: it got about half the cast of Corner Gas correct, the entire cast of Two and a Half Men correct, and even nearly the entire casts of less popular shows like Home Improvement.

This is based on a spot check at https://huggingface.co/spaces/tiiuae/Falcon-H1-playground, but this model's performance across popular domains of English knowledge is clearly far higher than Qwen3 34b's, so thanks again. Keep up the good work.

Technology Innovation Institute org

Thank you very much for your vibe checks! πŸš€
