🚀 New smolagents update: Safer Local Python Execution! 🦾🐍
With the latest release, we've added security checks to the local Python interpreter: every evaluation is now analyzed for dangerous builtins, modules, and functions. 🔒
Here's why this matters & what you need to know! 🧵👇
1️⃣ Why is local execution risky? ⚠️ AI agents that run arbitrary Python code can unintentionally (or maliciously) access system files, run unsafe commands, or exfiltrate data.
2️⃣ New Safety Layer in smolagents 🛡️ We now inspect every return value during execution:
✅ Allowed: safe built-in types (e.g., numbers, strings, lists)
❌ Blocked: dangerous functions/modules (e.g., os.system, subprocess, exec, shutil)
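To illustrate the idea (a simplified block-list sketch, not smolagents' actual implementation, which is more thorough):

```python
import ast

# Illustrative block lists; smolagents maintains its own, more complete ones.
DANGEROUS_MODULES = {"os", "subprocess", "shutil", "sys"}
DANGEROUS_BUILTINS = {"exec", "eval", "compile", "__import__", "open"}

def check_code_safety(code: str) -> None:
    """Raise ValueError if the code references blocked modules or builtins."""
    tree = ast.parse(code)
    for node in ast.walk(tree):
        # Block `import os`, `import subprocess`, etc.
        if isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name.split(".")[0] in DANGEROUS_MODULES:
                    raise ValueError(f"Import of '{alias.name}' is not allowed")
        # Block `from os import system`, etc.
        elif isinstance(node, ast.ImportFrom):
            if node.module and node.module.split(".")[0] in DANGEROUS_MODULES:
                raise ValueError(f"Import from '{node.module}' is not allowed")
        # Block direct use of dangerous builtins such as exec(...).
        elif isinstance(node, ast.Name) and node.id in DANGEROUS_BUILTINS:
            raise ValueError(f"Use of builtin '{node.id}' is not allowed")

check_code_safety("sum(range(10))")  # passes silently
try:
    check_code_safety("import os; os.system('ls')")
except ValueError as err:
    print(err)  # -> Import of 'os' is not allowed
```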
3️⃣ Security Disclaimer ⚠️ 🚨 Despite these improvements, local Python execution is NEVER 100% safe. 🚨 If you need true isolation, use a remote sandboxed executor like Docker or E2B.
4️⃣ The Best Practice: Use Sandboxed Execution 🔒 For production-grade AI agents, we strongly recommend running code in a Docker or E2B sandbox to ensure complete isolation.
5️⃣ Upgrade Now & Stay Safe! 🚀 Check out the latest smolagents release and start building safer AI agents today.
🚀 Big news for AI agents! With the latest release of smolagents, you can now securely execute Python code in sandboxed Docker or E2B environments. 🦾🔒
Here's why this is a game-changer for agent-based systems: 🧵👇
1️⃣ Security First 🔒 Running AI agents in unrestricted Python environments is risky! With sandboxing, your agents are isolated, preventing unintended file access, network abuse, or system modifications.
2️⃣ Deterministic & Reproducible Runs 📦 By running agents in containerized environments, you ensure that every execution happens in a controlled and predictable setting: no more environment mismatches or dependency issues!
3️⃣ Resource Control & Limits 🚦 Docker and E2B allow you to enforce CPU, memory, and execution time limits, so rogue or inefficient agents don't spiral out of control.
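As one way to apply such limits, here's a sketch using the Docker Python SDK (docker-py, `pip install docker`) rather than smolagents itself; the image and command are placeholders:

```python
import docker  # docker-py: pip install docker

client = docker.from_env()

# Run a snippet with hard caps: 1 CPU, 512 MB RAM, no network access,
# and automatic removal of the container when it exits.
output = client.containers.run(
    "python:3.12-slim",
    ["python", "-c", "print(sum(range(10**6)))"],
    nano_cpus=1_000_000_000,  # 1e9 nano-CPUs = 1 full CPU
    mem_limit="512m",
    network_disabled=True,
    remove=True,
)
print(output.decode())
```

Execution-time limits aren't a single Docker flag; in practice you enforce them from the caller, e.g. by stopping the container after a deadline.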
4️⃣ Safer Code Execution in Production 🏭 Deploy AI agents confidently, knowing that any generated code runs in an ephemeral, isolated environment, protecting your host machine and infrastructure.
5️⃣ Easy to Integrate 🛠️ With smolagents, you can simply configure your agent to use Docker or E2B as its execution backend; no need for complex security setups!
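A minimal sketch, assuming the `executor_type` parameter from recent smolagents releases (model class names have changed across versions, so check the docs for your install):

```python
from smolagents import CodeAgent, InferenceClientModel

model = InferenceClientModel()  # older releases call this HfApiModel

# Execute all generated code in an E2B cloud sandbox instead of locally.
# Assumes an E2B account, with the API key in the E2B_API_KEY env var.
agent = CodeAgent(tools=[], model=model, executor_type="e2b")

# Or, with Docker installed, isolate execution in a local container:
# agent = CodeAgent(tools=[], model=model, executor_type="docker")

agent.run("Compute the 20th Fibonacci number.")
```

Either way, each generated snippet runs in an isolated environment, so a misbehaving snippet can't touch your host.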
6️⃣ Perfect for Autonomous AI Agents 🤖 If your AI agents generate and execute code dynamically, this is a must-have to avoid security pitfalls while enabling advanced automation.
🚀 Introducing @huggingface Open Deep-Research 🔥
In just 24 hours, we built an open-source agent that:
✅ Autonomously browses the web
✅ Searches, scrolls & extracts info
✅ Downloads & manipulates files
✅ Runs calculations on data
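Open Deep-Research ships as a full example in the smolagents repo; as a loosely related, minimal sketch, a web-capable CodeAgent can be put together like this (tool and model names assume a recent smolagents version):

```python
from smolagents import CodeAgent, DuckDuckGoSearchTool, InferenceClientModel

# A small web-browsing agent: it searches the web, then writes and runs
# Python to post-process what it finds.
agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],  # may require the duckduckgo-search package
    model=InferenceClientModel(),
    additional_authorized_imports=["pandas"],  # let it run calculations on data
)

agent.run("Find the 2024 population of France and express it in millions.")
```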
🚨 How green is your model? 🌱 Introducing a new feature in the Comparator tool: Environmental Impact for responsible #LLM research! 👉 open-llm-leaderboard/comparator Now you can compare models not only by performance, but also by their environmental footprint!
📊 The Comparator calculates CO₂ emissions during evaluation and shows key model characteristics: evaluation score, number of parameters, architecture, precision, type... 🛠️ Make informed decisions about your model's impact on the planet and join the movement towards greener AI!
🚀 New feature in the 🤗 Open LLM Leaderboard Comparator: now compare models with their base versions & derivatives (finetunes, adapters, etc.). Perfect for tracking how adjustments affect performance & seeing innovations in action. Dive deeper into the leaderboard!
🛠️ Here's how to use it:
1. Select your model from the leaderboard.
2. Load its model tree.
3. Choose any base & derived models (adapters, finetunes, merges, quantizations) for comparison.
4. Press Load.
See side-by-side performance metrics instantly!
Ready to dive in? 👉 Try the 🤗 Open LLM Leaderboard Comparator now! See how models stack up against their base versions and derivatives to understand fine-tuning and other adjustments. Easier model analysis for better insights! Check it out here: open-llm-leaderboard/comparator 🚀
Dive into multi-model evaluations, pinpoint the best model for your needs, and explore insights across top open LLMs, all in one place. Ready to level up your model comparison game?
🚨 Instruct-tuning impacts models differently across families! Qwen2.5-72B-Instruct excels on IFEval but struggles with MATH-Hard, while Llama-3.1-70B-Instruct avoids the MATH performance loss! Why? Can they follow the format in the examples? 👉 Compare models: open-llm-leaderboard/comparator