AI & ML interests

None defined yet.

Recent Activity

jeffboudier
posted an update 13 days ago
Quick 30-second demo of the new Hub > Azure AI integration to deploy HF models in your own Azure account. Now with Python and CLI support!

GG @alvarobartt @kramp @pagezyhf
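
For context, a minimal sketch of what a Python-side deployment can look like with the azure-ai-ml SDK; the subscription, workspace, instance type, and model URI below are illustrative placeholders, not details from the demo.

from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment

# Connect to your Azure ML workspace (placeholder values)
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Create a managed online endpoint, then deploy a model onto it.
# The model URI format for the shared HuggingFace registry on Azure ML
# is an assumption here; check the registry for exact model names.
endpoint = ManagedOnlineEndpoint(name="hf-demo-endpoint", auth_mode="key")
ml_client.begin_create_or_update(endpoint).result()

deployment = ManagedOnlineDeployment(
    name="default",
    endpoint_name="hf-demo-endpoint",
    model="azureml://registries/HuggingFace/models/<model-name>/labels/latest",
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
ml_client.begin_create_or_update(deployment).result()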
IlyasMoutawwakil
posted an update about 1 month ago
🚀 Optimum: The Last v1 Release 🚀
Optimum v1.27 marks the final major release in the v1 series. As we close this chapter, we're laying the groundwork for a more modular and community-driven future:
- Optimum v2: A lightweight core package for porting Transformers, Diffusers, or Sentence-Transformers to specialized AI hardware/software/accelerators.
- Optimum-ONNX: A dedicated package where the ONNX/ONNX Runtime ecosystem lives and evolves, faster-moving and decoupled from the Optimum core.

🎯 Why this matters:
- A clearer governance path for ONNX, fostering stronger community collaboration and improved developer experience.
- Faster innovation in a more modular, open-source environment.

💡 What this means:
- More transparency, broader participation, and faster development driven by the community and key actors in the ONNX ecosystem (PyTorch, Microsoft, Joshua Lochner 👀, ...)
- A cleaner, more maintainable core Optimum, focused on extending HF libraries to specialized AI hardware/software/accelerator tooling, used by our partners (Intel Corporation, Amazon Web Services (AWS), AMD, NVIDIA, FuriosaAI, ...)

๐Ÿ› ๏ธ Major updates I worked on in this release:
โœ… Added support for Transformers v4.53 and SmolLM3 in ONNX/ONNXRuntime.
โœ… Solved batched inference/generation for all supported decoder model architectures (LLMs).
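
For illustration, a minimal sketch of that batched generation path through Optimum's ONNX Runtime integration (the SmolLM3 checkpoint name is an assumption; any supported decoder model works the same way):

from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForCausalLM

model_id = "HuggingFaceTB/SmolLM3-3B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.padding_side = "left"  # decoder-only models are left-padded for generation
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# export=True converts the checkpoint to ONNX on the fly
model = ORTModelForCausalLM.from_pretrained(model_id, export=True)

# A padded batch is exactly what the batched inference/generation fix exercises
inputs = tokenizer(
    ["The capital of France is", "ONNX Runtime is"],
    return_tensors="pt",
    padding=True,
)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))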

✨ Big shoutout to @echarlaix for leading the refactoring work that cleanly separated ONNX exporter logic and enabled the creation of Optimum-ONNX.

๐Ÿ“ Release Notes: https://lnkd.in/gXtE_qji
๐Ÿ“ฆ Optimum : https://lnkd.in/ecAezNT6
๐ŸŽ Optimum-ONNX: https://lnkd.in/gzjyAjSi
#Optimum #ONNX #OpenSource #HuggingFace #Transformers #Diffusers
jeffboudier
posted an update 3 months ago
AMD summer hackathons are here!
A chance to get hands-on with MI300X GPUs and accelerate models.
🇫🇷 Paris - Station F - July 5-6
🇮🇳 Mumbai - July 12-13
🇮🇳 Bengaluru - July 19-20

Hugging Face and GPU Mode will be on site, and on July 6 in Paris @ror will share lessons learned while building new kernels to accelerate Llama 3.1 405B on ROCm.

Register for the Paris event: https://lu.ma/fmvdjmur?tk=KeAbiP
All dates: https://lu.ma/calendar/cal-3sxhD5FdxWsMDIz
jeffboudier
posted an update 3 months ago
Today we launched Training Cluster as a Service to make the new DGX Cloud Lepton supercloud easily accessible to AI researchers.

Hugging Face will collaborate with NVIDIA to provision and set up GPU training clusters, making them available for the duration of training runs.

Hugging Face organizations can sign up here: https://huggingface.co/training-cluster
jeffboudier
posted an update 3 months ago
jeffboudier
posted an update 4 months ago
Wrapping up a week of shipping and announcements with Dell Enterprise Hub now featuring AI Applications, on-device models for AI PCs, a new CLI and Python SDK... all you need for building AI on premises!

Blog post has all the details: https://huggingface.co/blog/dell-ai-applications
regisss
posted an update 4 months ago
jeffboudier
posted an update 4 months ago
Transcribing 1 hour of audio for less than $0.01 🤯

@mfuntowicz cooked with 8x faster Whisper speech recognition - whisper-large-v3-turbo transcribes at 100x real time on a $0.80/hr L4 GPU!

How they did it: https://huggingface.co/blog/fast-whisper-endpoints

1-click deploy with HF Inference Endpoints: https://endpoints.huggingface.co/new?repository=openai%2Fwhisper-large-v3-turbo&vendor=aws&region=us-east&accelerator=gpu&instance_id=aws-us-east-1-nvidia-l4-x1&task=automatic-speech-recognition&no_suggested_compute=true
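
Once the endpoint is up, calling it takes a few lines with huggingface_hub's InferenceClient; the endpoint URL and audio file below are placeholders:

from huggingface_hub import InferenceClient

# Point the client at your dedicated endpoint URL (placeholder)
client = InferenceClient(model="https://<your-endpoint>.endpoints.huggingface.cloud")

# Transcribe a local audio file
result = client.automatic_speech_recognition("meeting.flac")
print(result.text)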
jeffboudier
posted an update 4 months ago
jeffboudier
posted an update 5 months ago
Llama 4 is out, and Scout is already on the Dell Enterprise Hub to deploy on Dell systems 👉 dell.huggingface.co
jeffboudier
posted an update 5 months ago
Enterprise orgs now enable serverless Inference Providers for all members
- includes $2 of free usage per org member (e.g. an Enterprise org with 1,000 members shares $2,000 in free credits each month)
- admins can set a monthly spend limit for the entire org
- works today with Together, fal, Novita, Cerebras and HF Inference.

Here's the doc to bill Inference Providers usage to your org: https://huggingface.co/docs/inference-providers/pricing#organization-billing
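
For illustration, a minimal sketch of routing a provider call to an org's billing with a recent huggingface_hub release; the provider, model, and org name are arbitrary examples:

from huggingface_hub import InferenceClient

# bill_to routes the usage to the org instead of your personal account
client = InferenceClient(provider="together", bill_to="my-enterprise-org")

response = client.chat_completion(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=64,
)
print(response.choices[0].message.content)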
regisss
posted an update 7 months ago
Nice paper comparing the FP8 inference efficiency of NVIDIA H100 and Intel Gaudi 2: An Investigation of FP8 Across Accelerators for LLM Inference (2502.01070)

The conclusion is interesting: "Our findings highlight that the Gaudi 2, by leveraging FP8, achieves higher throughput-to-power efficiency during LLM inference"

One often overlooked aspect of AI hardware accelerators is how much less energy they can consume than GPUs. It's nice to see researchers starting to carry out experiments to measure this!
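
The metric behind that claim is simple to state; a toy computation with made-up numbers, just to show the units:

# Hypothetical numbers, only to illustrate throughput-to-power efficiency
throughput_tok_per_s = 1200.0  # generated tokens per second
avg_power_w = 600.0            # average board power in watts
efficiency = throughput_tok_per_s / avg_power_w
print(f"{efficiency:.2f} tokens/s per watt")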

Gaudi 3 results soon...
jeffboudier
posted an update 8 months ago
NVIDIA just announced the Cosmos World Foundation Models, available on the Hub: nvidia/cosmos-6751e884dc10e013a0a0d8e6

Cosmos is a family of pre-trained models purpose-built for generating physics-aware videos and world states to advance physical AI development.
The release also includes tokenizers: nvidia/cosmos-tokenizer-672b93023add81b66a8ff8e6

Learn more in this great community article by @mingyuliutw and @PranjaliJoshi https://huggingface.co/blog/mingyuliutw/nvidia-cosmos
regisss
posted an update 9 months ago
jeffboudier
posted an update 10 months ago
regisss
posted an update 11 months ago
Interested in performing inference with an ONNX model? ⚡️

The Optimum docs about model inference with ONNX Runtime are now much clearer and simpler!

Want to deploy your favorite model from the Hub but don't know how to export it to the ONNX format? You can do it in one line of code:
from optimum.onnxruntime import ORTModelForSequenceClassification

# Load the model from the hub and export it to the ONNX format
model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
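
And the exported model is a drop-in replacement in a Transformers pipeline, continuing the snippet above:

from transformers import AutoTokenizer, pipeline

# Pair the exported ONNX model with its tokenizer in a regular pipeline
tokenizer = AutoTokenizer.from_pretrained(model_id)
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Optimum makes ONNX inference easy!"))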

Check out the whole guide ๐Ÿ‘‰ https://huggingface.co/docs/optimum/onnxruntime/usage_guides/models
jeffboudier
posted an update 11 months ago
jeffboudier
posted an update 12 months ago
Inference Endpoints got a bunch of cool updates yesterday; here are my top 3.
jeffboudier
posted an update 12 months ago
Pro Tip - if you're a Firefox user, you can set up Hugging Chat as an integrated AI assistant, with contextual links to summarize or simplify any text - handy!

In this short video, I show how to set it up.