Morgan McGuire

morgan

morgan's activity

posted an update 4 months ago
Llama 3.1 405B Instruct beats GPT-4o on MixEval-Hard

Just ran MixEval for 405B, Sonnet-3.5 and 4o, with 405B landing right between the other two at 66.19

The GPT-4o result of 64.7 replicated locally, but Sonnet-3.5 actually scored 70.25/69.45 in my replications 🤔 Still well ahead of the other two, though.

Sample of one of the eval calls here: https://wandb.ai/morgan/MixEval/weave/calls/07b05ae2-2ef5-4525-98a6-c59963b76fe1

Quick auto-logging tracing for openai-compatible clients and many more here: https://wandb.github.io/weave/quickstart/
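The quickstart linked above patches OpenAI-compatible clients so every call is logged as a trace. As a rough illustration of what such auto-logging captures (this is a self-contained sketch of the idea, not Weave's actual implementation — the `traced` decorator and `TRACES` store are invented for this example):

```python
import functools
import time

TRACES = []  # a real tracer would ship these records to a backend


def traced(fn):
    """Record the inputs, output and latency of every call to fn."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACES.append({
            "op": fn.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": result,
            "latency_s": time.perf_counter() - start,
        })
        return result
    return wrapper


@traced
def chat_completion(prompt: str) -> str:
    # Stand-in for an OpenAI-compatible client call.
    return f"echo: {prompt}"


chat_completion("hello")
print(TRACES[0]["op"])
```

An auto-logging library does essentially this, but monkey-patches the client's methods for you so no decorator is needed at the call site.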

reacted to akhaliq's post with ❤️ 9 months ago
OS-Copilot: Towards Generalist Computer Agents with Self-Improvement (2402.07456)

Autonomous interaction with the computer has been a longstanding challenge with great potential, and the recent proliferation of large language models (LLMs) has markedly accelerated progress in building digital agents. However, most of these agents are designed to interact with a narrow domain, such as a specific software or website. This narrow focus constrains their applicability for general computer tasks. To this end, we introduce OS-Copilot, a framework to build generalist agents capable of interfacing with comprehensive elements in an operating system (OS), including the web, code terminals, files, multimedia, and various third-party applications. We use OS-Copilot to create FRIDAY, a self-improving embodied agent for automating general computer tasks. On GAIA, a general AI assistants benchmark, FRIDAY outperforms previous methods by 35%, showcasing strong generalization to unseen applications via accumulated skills from previous tasks. We also present numerical and quantitative evidence that FRIDAY learns to control and self-improve on Excel and Powerpoint with minimal supervision. Our OS-Copilot framework and empirical findings provide infrastructure and insights for future research toward more capable and general-purpose computer agents.
posted an update 9 months ago
Fine-tuning LLMs is rad, but how do you manage all your checkpoints and evals in a production setting?

We partnered with @hamel to ship an Enterprise Model Management course packed full of learnings for those training, evaluating and deploying models at work.

Topics include:
- What webhooks are & how to use them to create integrations with different tools
- How to automate train -> eval runs
- Improving model governance and documentation
- Comparing candidate and baseline models
- Design patterns & recipes
- Lots more...
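To make the webhook → automation idea concrete: a minimal sketch of a handler that receives a model-registry webhook and decides whether to kick off an eval run. The payload fields and the `handle_webhook` helper here are hypothetical, invented for illustration — they are not W&B's actual webhook schema or anything from the course:

```python
import json
from typing import Optional

# Hypothetical payload: a model-registry webhook firing when a new
# model version is tagged. Field names are illustrative only.


def handle_webhook(raw_body: str) -> Optional[str]:
    """Return the eval command to launch, or None if the event is irrelevant."""
    event = json.loads(raw_body)
    if event.get("event_type") != "model_version_created":
        return None
    if "candidate" not in event.get("tags", []):
        return None
    artifact = event["artifact_path"]
    # A real pipeline would enqueue this as a job (e.g. via a CI trigger)
    # rather than returning a command string.
    return f"eval --model {artifact} --suite regression"


payload = json.dumps({
    "event_type": "model_version_created",
    "tags": ["candidate"],
    "artifact_path": "org/registry/llm:v7",
})
print(handle_webhook(payload))
```

The same filter-then-dispatch shape works for comparing a candidate against a baseline: the handler resolves both artifact paths and enqueues a single comparison job.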

Would love to hear what you think!

πŸ‘‰ https://www.wandb.courses/courses/enterprise-model-management
posted an update 10 months ago
Delighted to share a course I've learned a ton from about getting better outputs from LLMs

https://www.wandb.courses/courses/steering-language-models

We released it (free) last Thursday. At just 30 minutes of content total, it's very information-dense: it covers important concepts around LLM validation, makes your approach to LLM prompting more Pythonic, and finishes with a quick walkthrough of a basic RAG application.

Would love to hear what you think!