Quick Results: Testing Multilingual Capabilities via Code Generation from Ukrainian Competitive Programming Tasks

#87
by anon-researcher-ua

Just ran gpt-oss-20b and gpt-oss-120b (medium reasoning effort) on the UA-Code-Bench benchmark: 500 competitive-programming tasks in Ukrainian across five difficulty tiers. For a Python solution to count, it must compile and pass every hidden test on the Eolymp platform, exactly as in a real contest.
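
For anyone who wants to poke at the setup, here's roughly how I sanity-check a generated solution locally before submitting — a minimal sketch with a hypothetical `tests/` layout (the real verdict still comes from Eolymp's hidden tests, which this script can't see):

```python
import subprocess
import sys
from pathlib import Path

# Hypothetical layout: tests/01.in, tests/01.out, ... (sample tests only;
# the real verdict comes from Eolymp's hidden tests, not this script).
SOLUTION = Path("solution.py")
TESTS_DIR = Path("tests")
TIME_LIMIT = 2.0  # seconds, mirroring a typical contest limit

def run_case(in_file: Path, out_file: Path) -> bool:
    """Feed one input file to the solution and compare stdout to expected."""
    expected = out_file.read_text().strip()
    try:
        result = subprocess.run(
            [sys.executable, str(SOLUTION)],
            stdin=in_file.open(),
            capture_output=True,
            text=True,
            timeout=TIME_LIMIT,
        )
    except subprocess.TimeoutExpired:
        return False  # a TLE counts as a failure, same as on the judge
    return result.returncode == 0 and result.stdout.strip() == expected

if __name__ == "__main__":
    cases = sorted(TESTS_DIR.glob("*.in"))
    passed = sum(run_case(c, c.with_suffix(".out")) for c in cases)
    print(f"{passed}/{len(cases)} sample tests passed")
    # A task only counts toward the score if EVERY test passes.
    sys.exit(0 if passed == len(cases) else 1)
```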

  • gpt-oss-20b: 203 / 500
  • gpt-oss-120b: 229 / 500

That puts the 20B model just ahead of DeepSeek-R1 (196) and the 120B model a bit below o4-mini (234).

The code itself looks solid, but neither model ever “thinks” in Ukrainian, so a small Ukrainian reasoning fine-tune is next on my list. Still, getting >200 accepted solutions straight out of the box from an open-weight model is awesome - big thanks to the OpenAI team for making these models available!


OpenAI org

Amazing result, thanks for sharing! Curious if you'll have a chance to run low and high reasoning effort as well?

Yep, I’ll give it a try, though it may take some time to generate and run the tests.
Low reasoning effort shouldn't be an issue at all, but at high effort I've noticed the generation sometimes becomes unstable past a certain point (mostly repeating thoughts, or reasoning on without ever producing a final answer). I've seen this with other reasoning models as well, so it probably just needs some sampling-parameter tweaks.
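
For reference, this is the kind of tweak I mean — a minimal sketch with vLLM, where the parameter values are untested guesses rather than recommendations:

```python
from vllm import LLM, SamplingParams

# Hypothetical settings to damp the repetition loops at high reasoning
# effort; the values are untested guesses, not validated recommendations.
llm = LLM(model="openai/gpt-oss-120b")
params = SamplingParams(
    temperature=0.7,         # a bit of randomness to break repeated thoughts
    top_p=0.9,               # nucleus sampling to trim degenerate tails
    repetition_penalty=1.1,  # mildly discourage re-emitting the same spans
    max_tokens=16384,        # hard cap so runaway reasoning still terminates
)
outputs = llm.generate(["<task statement in Ukrainian>"], params)
print(outputs[0].outputs[0].text)
```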
