InternLM

Recent Activity

clefourrier posted an update 1 day ago
Saying Claude 4 is "the best coding model in the world" based on its SWE-bench scores is super misleading, and here is why:

If you look at the announcement table, their model has the best scores, but... if you look at the very bottom, in tiny font, you'll see that the metric they report is actually not the same as the one used for the other models!


Comparing "pass@1 averaged 10 times" to "normal pass@1" is like grading one student by letting them take the test 10 times and averaging their scores, while the other students only get a single attempt.

The first way to grade (avg@10) is actually quite good statistically, much better than what model creators usually report, because models tend to be quite inconsistent - sometimes good, sometimes bad...
But then you need to do it for all models, and report with error bars.
The issue is that, if you do, it becomes much harder to say your model is the best, because the error bars between models will overlap, by a lot.
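
To make this concrete, here is a minimal sketch of avg@k with an error bar (the function name and data layout are mine, purely illustrative, not from any announcement or eval harness):

```python
import random
import statistics

def pass_at_1_avg_k(per_problem_results: list[list[bool]]) -> tuple[float, float]:
    """avg@k: score each of the k runs as a normal pass@1, then report
    the mean and the standard error across runs (the error bar)."""
    k = len(per_problem_results[0])          # runs per problem (k = 10 above)
    n_problems = len(per_problem_results)
    # One pass@1 score per run: the fraction of problems that run solved.
    run_scores = [
        sum(results[run] for results in per_problem_results) / n_problems
        for run in range(k)
    ]
    mean = statistics.mean(run_scores)
    stderr = statistics.stdev(run_scores) / k ** 0.5
    return mean, stderr

# Toy example: 50 problems, 10 runs each (True = that run solved the problem).
random.seed(0)
results = [[random.random() < 0.7 for _ in range(10)] for _ in range(50)]
print(pass_at_1_avg_k(results))
```

If two models' intervals (mean ± ~2 stderr) overlap, the leaderboard ordering between them is basically noise.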

Also, you'll see that 2 numbers are reported: the first one is using avg@10 (what I explained above), and the second, highest one is using this plus many other tricks:
- test time compute (so having the model generate a tree of answers and selecting the best as you go, more or less)
- removing the times when the model breaks the tests
- and using another model to select the most promising solution!
You can't really call it better than the rest on that basis, mostly because it's **way less efficient** at reaching a similar result.
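
Roughly, the pipeline behind that second number looks like a best-of-n loop; a hedged sketch (all three helpers are stand-ins I made up, not Anthropic's actual setup):

```python
import random

# Stand-ins for real model calls -- assumptions for illustration only.
def generate(problem: str) -> str:
    return f"candidate {random.random():.3f} for: {problem}"

def breaks_tests(candidate: str) -> bool:
    return random.random() < 0.2   # pretend ~20% of samples break the tests

def reranker_score(candidate: str) -> float:
    return random.random()         # a second model rating "most promising"

def best_of_n(problem: str, n: int = 32) -> str | None:
    candidates = [generate(problem) for _ in range(n)]          # test-time compute
    survivors = [c for c in candidates if not breaks_tests(c)]  # drop test-breaking runs
    if not survivors:
        return None
    return max(survivors, key=reranker_score)                   # model picks the winner

print(best_of_n("fix the failing CI job"))
```

Note the cost: n generations plus up to n reranker calls per problem, versus a single generation for plain pass@1 - which is exactly the efficiency gap in question.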

It's honestly a bit sad because, from user reports, the model sounds good - but this announcement is overblown numbers-wise, and I'm quite sure it's more a problem of "too much marketing" than of "bad science".

Another thing that makes the comparison invalid is the complete absence of open source models from the report - are they not aware of DeepSeek, Qwen, the new Mistral for code, and all the cool specialised models found on the Hub?
  • 1 reply
clefourrier posted an update 5 days ago
Always surprised that so few people actually read the FineTasks blog, on
✨how to select training evals with the highest signal✨

If you're serious about training models without wasting compute on shitty runs, you absolutely should read it!!

A high-signal eval tells you precisely, during training, how well & what your model is learning, allowing you to discard the bad runs/bad samplings/...!

The blog covers in depth prompt choice, metrics, and datasets, across languages/capabilities, and my fave section is "which properties should evals have" 👌
(so you know how to select the best evals for your use case)
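
For a taste of what "high signal" can mean in practice, here is a toy check of two such properties - monotonicity and seed noise - on checkpoint scores (my own illustrative heuristics, not the FineTasks code):

```python
import statistics

def eval_signal(scores_per_seed: list[list[float]]) -> dict[str, float]:
    """Two quick signal checks on an eval, given checkpoint scores from
    several training seeds. Illustrative heuristics, not the blog's code."""
    # Monotonicity: fraction of checkpoint-to-checkpoint steps that improve,
    # averaged over seeds. Near 1.0 means the eval actually tracks learning.
    monotonicity = statistics.mean(
        sum(b > a for a, b in zip(run, run[1:])) / (len(run) - 1)
        for run in scores_per_seed
    )
    # Noise: spread across seeds at the final checkpoint. If it is large
    # relative to the gains you expect to measure, the eval is low signal.
    noise = statistics.stdev(run[-1] for run in scores_per_seed)
    return {"monotonicity": monotonicity, "noise": noise}

print(eval_signal([[0.25, 0.31, 0.36, 0.40], [0.24, 0.30, 0.37, 0.41]]))
```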

Blog: HuggingFaceFW/blogpost-fine-tasks
  • 2 replies
vansin posted an update 2 months ago
🔥 MedAgentBench: Amazing Work 🚀

Just explored #MedAgentBench from @Yale researchers and it's mind-blowing! They've created a cutting-edge benchmark that finally exposes the true capabilities of LLMs in complex medical reasoning.

⚡ Key discoveries:
- DeepSeek R1 & OpenAI O3 dominate clinical reasoning tasks
- Agent-based frameworks deliver exceptional performance-cost balance
- Open-source alternatives are closing the gap at a fraction of the cost

This work shatters previous benchmarks that failed to challenge today's advanced models.
The future of medical AI is here: https://github.com/gersteinlab/medagents-benchmark
#MedicalAI #MachineLearning #AIinHealthcare 🔥