Llama 3.1 405B Instruct beats GPT-4o on MixEval-Hard
Just ran MixEval-Hard for 405B, Sonnet-3.5, and 4o, with 405B landing right between the other two at 66.19.
The GPT-4o result of 64.7 replicated locally, but Sonnet-3.5 actually scored 70.25/69.45 in my replications. Still well ahead of the other two, though.
Autonomous interaction with the computer has been a longstanding challenge with great potential, and the recent proliferation of large language models (LLMs) has markedly accelerated progress in building digital agents. However, most of these agents are designed to interact with a narrow domain, such as a specific piece of software or a single website. This narrow focus constrains their applicability to general computer tasks. To this end, we introduce OS-Copilot, a framework for building generalist agents capable of interfacing with the full range of elements in an operating system (OS), including the web, code terminals, files, multimedia, and various third-party applications. We use OS-Copilot to create FRIDAY, a self-improving embodied agent for automating general computer tasks. On GAIA, a general AI assistants benchmark, FRIDAY outperforms previous methods by 35%, showcasing strong generalization to unseen applications via skills accumulated from previous tasks. We also present quantitative evidence that FRIDAY learns to control and self-improve on Excel and PowerPoint with minimal supervision. Our OS-Copilot framework and empirical findings provide infrastructure and insights for future research toward more capable and general-purpose computer agents.
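The abstract stays at a high level, but the core pattern behind a generalist OS agent (a planner that dispatches to tools for the shell, files, and so on) can be sketched in a few lines. Everything below, including the tool names and the `plan_next_step` stub, is a hypothetical illustration of that pattern, not OS-Copilot's actual implementation or API.

```python
# Minimal sketch of a generalist "OS agent" loop in the spirit of OS-Copilot.
# The tool names, plan_next_step, and the stubbed planner are hypothetical
# placeholders, NOT the paper's actual implementation or API.
import json
import subprocess
from pathlib import Path

def run_shell(command: str) -> str:
    """Run a shell command and return its combined output."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

def read_file(path: str) -> str:
    """Read a text file from disk."""
    return Path(path).read_text()

def write_file(path: str, content: str) -> str:
    """Write text to a file and confirm what was written."""
    Path(path).write_text(content)
    return f"wrote {len(content)} chars to {path}"

TOOLS = {"run_shell": run_shell, "read_file": read_file, "write_file": write_file}

def plan_next_step(task: str, history: list[dict]) -> dict:
    """Placeholder planner. A real agent would call an LLM here and parse its
    JSON reply; this stub just lists the working directory once, then stops."""
    if not history:
        return {"tool": "run_shell", "args": {"command": "ls"}}
    return {"tool": "finish", "args": {"answer": "done"}}

def run_agent(task: str, max_steps: int = 5) -> list[dict]:
    """Plan -> act -> observe loop over the available tools."""
    history: list[dict] = []
    for _ in range(max_steps):
        step = plan_next_step(task, history)
        if step["tool"] == "finish":
            break
        observation = TOOLS[step["tool"]](**step["args"])
        history.append({"action": step, "observation": observation})
    return history

if __name__ == "__main__":
    print(json.dumps(run_agent("summarise the current directory"), indent=2))
```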
Fine-tuning LLMs is rad, but how do you manage all your checkpoints and evals in a production setting?
We partnered with @hamel to ship an Enterprise Model Management course packed with practical lessons for those training, evaluating, and deploying models at work.
Topics include:
- What webhooks are & how to use them to create integrations with different tools (see the sketch after this list)
- How to automate train -> eval runs
- Improving model governance and documentation
- Comparing candidate and baseline models
- Design patterns & recipes
- Lots more...
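To make the webhook idea from the list concrete, here's a minimal sketch of a receiver that kicks off an eval run whenever a new checkpoint is registered. The /model-webhook path, the payload fields, and run_evals() are illustrative assumptions, not the course's actual code.

```python
# Minimal sketch of a webhook that triggers an eval run when a new model
# checkpoint is registered. The route, payload fields, and run_evals() are
# assumptions for illustration only.
import subprocess
from flask import Flask, request, jsonify

app = Flask(__name__)

def run_evals(model_id: str) -> None:
    """Kick off an evaluation job for the given checkpoint.
    Here it just shells out to a hypothetical eval script."""
    subprocess.Popen(["python", "eval.py", "--model-id", model_id])

@app.route("/model-webhook", methods=["POST"])
def model_webhook():
    payload = request.get_json(force=True)
    # Only react to "checkpoint created" events; ignore everything else.
    if payload.get("event") == "checkpoint.created":
        run_evals(payload["model_id"])
        return jsonify({"status": "eval started"}), 202
    return jsonify({"status": "ignored"}), 200

if __name__ == "__main__":
    app.run(port=8000)
```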
We released it last Thursday (free). At just 30 minutes of total content, it's very information-dense: it covers important concepts around LLM validation, shows how to make your approach to LLM prompting more Pythonic, and quickly walks through a basic RAG application at the end.
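For anyone new to that last topic, here is a toy retrieve-then-prompt sketch of what "a basic RAG application" boils down to. The bag-of-words "embedding" and the stubbed generate() call are stand-ins for a real embedding model and LLM, not the course's code.

```python
# Toy retrieve-then-prompt sketch of a basic RAG flow: embed documents,
# retrieve the most similar ones, stuff them into the prompt, generate.
import math
from collections import Counter

DOCS = [
    "Webhooks let external services notify your pipeline when a model changes.",
    "Candidate models should be compared against a baseline before promotion.",
    "Evaluation runs can be automated to fire after every training job.",
]

def embed(text: str) -> Counter:
    """Crude bag-of-words vector; swap in a real embedding model in practice."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call (e.g. an API request or local model)."""
    return f"[LLM would answer here, given a prompt of {len(prompt)} chars]"

def answer(question: str) -> str:
    """Build a context-grounded prompt from retrieved docs and generate."""
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

if __name__ == "__main__":
    print(answer("How do I automate eval runs after training?"))
```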