AI & ML interests

None defined yet.

Recent Activity

burtenshaw 
posted an update about 17 hours ago
Inference for generative AI models looks like a minefield, but there’s a simple protocol for picking the best inference setup:

🌍 95% of users >> If you’re using open (large) models and need fast online inference, use Inference Providers in auto mode and let it choose the best provider for the model (see the sketch after this list). https://huggingface.co/docs/inference-providers/index

👷 Fine-tuners / bespoke >> If you’ve got custom setups, use Inference Endpoints to define a configuration on AWS, Azure, or GCP. https://endpoints.huggingface.co/

🦫 Locals >> If you’re trying to stretch everything you can out of a server or local machine, use Llama.cpp, Jan, LM Studio, or vLLM. https://huggingface.co/settings/local-apps#local-apps

🪟 Browsers >> If you need open models running right in the browser, use transformers.js. https://github.com/huggingface/transformers.js
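
For the auto-mode route, the call is a one-liner. A minimal sketch, assuming huggingface_hub >= 0.28 and an HF token in your environment (the model id below is just an example of an open model):

from huggingface_hub import InferenceClient

client = InferenceClient(provider="auto")  # let Hugging Face route to the best provider
completion = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",  # example open model, swap in your own
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)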

Let me know what you’re using, and if you think it’s more complex than this.
burtenshaw 
posted an update 15 days ago
You don't need remote APIs for a coding copilot, or for the MCP Course! Set up a fully local IDE with MCP integration using Continue. This tutorial guides you through setting it up.

This is what you need to do to take control of your copilot:

1. Get the Continue extension from the [VS Code marketplace](https://marketplace.visualstudio.com/items?itemName=Continue.continue) to serve as the AI coding assistant.

2. Serve the model with an OpenAI-compatible server using Llama.cpp, LM Studio, etc. (a quick sanity check follows the command below).

llama-server -hf unsloth/Devstral-Small-2505-GGUF:Q4_K_M
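
To confirm the server is up before wiring in Continue, you can hit its OpenAI-compatible endpoint directly. A minimal check, assuming the openai Python package and llama-server's default port 8080:

from openai import OpenAI

# llama-server doesn't require a real API key, but the client wants a non-empty one
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
reply = client.chat.completions.create(
    model="local",  # llama-server hosts a single model, so the name is mostly ignored
    messages=[{"role": "user", "content": "Say hello in one line of Python."}],
)
print(reply.choices[0].message.content)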

3. Create a .continue/models/llama-max.yaml file in your project to tell Continue how to use the local model served by Llama.cpp.
    name: Llama.cpp model
    version: 0.0.1
    schema: v1
    models:
      - provider: llama.cpp
        model: unsloth/Devstral-Small-2505-GGUF
        apiBase: http://localhost:8080
        defaultCompletionOptions:
          contextLength: 8192 # Adjust based on the model
        name: Llama.cpp Devstral-Small
        roles:
          - chat
          - edit


4. Create a .continue/mcpServers/playwright-mcp.yaml file to integrate a tool, like the Playwright browser automation tool, with your assistant (an optional standalone check follows the config).

    name: Playwright mcpServer
    version: 0.0.1
    schema: v1
    mcpServers:
      - name: Browser search
        command: npx
        args:
          - "@playwright/mcp@latest"


Check out the full tutorial in [the MCP course](https://huggingface.co/learn/mcp-course/unit2/continue-client).
burtenshaw 
posted an update 19 days ago
Brand new MCP Course units are out, and now it's getting REAL! We've collaborated with Anthropic to dive deep into production-ready and autonomous agents using MCP.

🔗 mcp-course

This is what the new material covers and includes:

- Use Claude Code to build an autonomous PR agent
- Integrate your agent with Slack and GitHub to bring it to your team
- Get certified on your use case and share with the community
- Build an autonomous PR cleanup agent on the Hugging Face Hub and deploy it with Spaces

The material goes deep into these problems and helps you to build applications that work. We’re super excited to see what you build with it.
burtenshaw 
posted an update 20 days ago
Super excited to release AutoTrain MCP. This is an MCP server for training AI models, so you can use your AI tools to train your AI models 🤯.

🔗 burtenshaw/autotrain-mcp

Use this MCP server with tools like Claude Desktop, Cursor, VSCode, or Continue to do this:

- Define an ML problem like Image Classification, LLM fine-tuning, Text Classification, etc.
- The AI can retrieve models and datasets from the hub using the hub MCP.
- Training happens on a Hugging Face Space, so no worries about hardware constraints.
- Models are pushed to the Hub to be used with inference tools like Llama.cpp, vLLM, MLX, etc. (see the sketch after this list).
- Built on top of the AutoTrain library, so it has full integration with transformers and other libraries.
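
Once a run finishes and the model lands on the Hub, pulling the weights down for any of those local inference tools is a single call. A minimal sketch with huggingface_hub (the repo id is a placeholder):

from huggingface_hub import snapshot_download

# Placeholder repo id: replace with the repo AutoTrain pushed your model to
local_dir = snapshot_download("your-username/your-finetuned-model")
print(f"Model files downloaded to {local_dir}")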

Everything is still under active development, but I’m super excited to hear what people build, and I’m open to contributions!
cbensimon 
posted an update 21 days ago
🚀 ZeroGPU now supports PyTorch native quantization via torchao

While it hasn’t been battle-tested yet, Int8WeightOnlyConfig is already working flawlessly in our tests.

Let us know if you run into any issues — and we’re excited to see what the community will build!

import spaces
from diffusers import FluxPipeline
from torchao.quantization.quant_api import Int8WeightOnlyConfig, quantize_

pipeline = FluxPipeline.from_pretrained(...).to('cuda')
quantize_(pipeline.transformer, Int8WeightOnlyConfig()) # Or any other component(s)

@spaces.GPU
def generate(prompt: str):
    return pipeline(prompt).images[0]
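
For context: Int8WeightOnlyConfig quantizes only the weights to int8 and leaves activations in their original dtype, so the quantized component's memory footprint drops by roughly half versus bf16, typically with minimal quality impact.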
victor 
posted an update 22 days ago
Open Source Avengers, Assemble! Ask an expert AI agent team to solve complex problems together 🔥

Consilium brings together multiple agents that debate and use live research (web, arXiv, SEC) to reach a consensus. You set the strategy, they find the answer.

Credit to @azettl for this awesome demo: Agents-MCP-Hackathon/consilium_mcp
shivance 
posted an update 29 days ago
The AI Memory Layer Will Change Everything ‼️

Why do even the smartest AIs, like OpenAI's o3 and GPT-4o, Google's Gemini, and Anthropic's Claude, forget?

In this blog post, we unpack this challenge and explore how building real memory into AI will redefine personalization and agent capabilities!

https://fullstackagents.substack.com/p/forget-me-not-the-ai-memory-layer
hesamation 
posted an update about 1 month ago
I really like how this seven-stage pipeline was laid out in the Ultimate Guide to Fine-Tuning book.

It gives an overview, then goes into detail for each stage, even providing best practices.

It’s 115 pages on arXiv, definitely worth a read.

Check it out: https://arxiv.org/abs/2408.13296
burtenshaw 
posted an update about 1 month ago
MCP course is now LIVE! We just dropped quizzes, videos, and live streams to make it a fully interactive course:

🔗 join in now: mcp-course

- It’s still free!
- Video 1 walks you through onboarding to the course
- The first live session is next week!
- You can now get a certificate via the exam app
- We improved the written material with interactive quizzes

If you’re studying MCP and want a live, interactive, visual, certified course, then join us on the hub!
Felguk 
posted an update about 1 month ago
Where did Streamlit go on Hugging Face?
dhuynh95 
posted an update about 1 month ago
🚀 Built an MVP this weekend: Screenshot to HTML, a tool to quickly turn screenshots of mocks, competitors, or inspiration into a website using Gemini Flash!

🤗 Try it on Hugging Face Space for free here: dhuynh95/screenshot_to_html

🧠 You will need to get a Gemini API key, but little-known fact: it’s free! Google has really shipped with Gemini 2.5, and the Flash model can be used for free. Great for experimentation.

In this demo, you can see how we can use AI to turn a screenshot of a website into a fully interactive static HTML page using Gemini.
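
This isn't the Space's actual code, but the core call looks roughly like this with the google-genai SDK (the file name and prompt are placeholders; bring your own free API key):

from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_GEMINI_API_KEY")  # free key from Google AI Studio
screenshot = Image.open("mock.png")  # placeholder screenshot of the page to recreate
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[screenshot, "Recreate this page as a single static HTML file."],
)
print(response.text)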

🏴‍☠️ It was fun building it and getting back to weekend hacking. I tried many things for fun, such as using Gemini Flash to locate assets and recreate them, but it was not very successful. I tried other models, but the fact that Gemini Flash is both smart AND free is a game changer. It’s great for builders!
cbensimon 
posted an update about 2 months ago
🚀 ZeroGPU medium size is now available as a power-user feature

Nothing too fancy for now—ZeroGPU Spaces still default to large (70GB VRAM)—but this paves the way for:
- 💰 size-based quotas / pricing (medium will offer significantly more usage than large)
- 🦣 the upcoming xlarge size (141GB VRAM)

As of now, you can control the GPU size via a Space variable. Accepted values:
- auto (future default)
- medium
- large (current default)

The auto mode checks total CUDA tensor size during startup (a rough illustration follows the list):
- More than 30GB → large
- Otherwise → medium
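
Purely illustrative (not ZeroGPU's actual code), using parameter sizes as a rough proxy for total CUDA tensor size:

import torch

def cuda_param_gb(module: torch.nn.Module) -> float:
    # Total size in GB of the module's parameters that live on CUDA
    total_bytes = sum(
        p.numel() * p.element_size() for p in module.parameters() if p.is_cuda
    )
    return total_bytes / 1024**3

# Stand-in for your real model; requires a CUDA device
model = torch.nn.Linear(4096, 4096).to("cuda")
size = "large" if cuda_param_gb(model) > 30 else "medium"
print(size)  # -> "medium" for this tiny stand-in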
burtenshaw 
posted an update about 2 months ago
We're thrilled to announce the launch of our comprehensive Model Context Protocol (MCP) Course! This free program is designed to take learners from foundational understanding to practical application of MCP in AI.

Follow the course on the hub: mcp-course

In this course, you will:
📖 Study Model Context Protocol in theory, design, and practice.
🧑‍💻 Learn to use established MCP SDKs and frameworks.
💾 Share your projects and explore applications created by the community.
🏆 Participate in challenges and evaluate your MCP implementations.
🎓 Earn a certificate of completion.

At the end of this course, you'll understand how MCP works and how to build your own AI applications that leverage external data and tools using the latest MCP standards.
hesamation 
posted an update about 2 months ago