Enterprise Explorers


AI & ML interests

Our team builds AI with open models and open source, collaborating privately with security and advanced access controls.

Recent Activity

Sylvestre updated a collection 6 days ago
test

enterprise-explorers's activity

victor posted an update 2 days ago
Finally, an open-source AI that turns your lyrics into full songs is here: meet YuE! Unlike other tools that only create short clips, YuE can make entire songs (up to 5 minutes) with vocals, melody, and instruments all working together. Letsss go!

m-a-p/YuE-s1-7B-anneal-en-cot
MoritzLaurer posted an update 15 days ago
Microsoft's rStar-Math paper claims that ~7B models can match the math skills of o1 using clever train- and test-time techniques. You can now download their prompt templates from Hugging Face!

๐Ÿ“ The paper introduces rStar-Math, which claims to rival OpenAI o1's math reasoning capabilities by integrating Monte Carlo Tree Search (MCTS) with step-by-step verified reasoning trajectories.
๐Ÿค– A Process Preference Model (PPM) enables fine-grained evaluation of intermediate steps, improving training data quality.
๐Ÿงช The system underwent four rounds of self-evolution, progressively refining both the policy and reward models to tackle Olympiad-level math problemsโ€”without GPT-4-based data distillation.
๐Ÿ’พ While we wait for the release of code and datasets, you can already download the prompts they used from the HF Hub!
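The MCTS ingredient can be sketched in a few lines: candidate reasoning steps are ranked by a UCB-style score that trades off a reward model's feedback against exploration. This is a rough schematic, not the paper's implementation; `stats` stands in for visit statistics accumulated during search, and the rewards would come from a process reward model like the PPM.

```python
import math

def ucb1(avg_reward, visits, parent_visits, c=1.4):
    # UCB1: exploit high average reward, but keep exploring rarely tried steps.
    if visits == 0:
        return float("inf")  # always try unvisited steps first
    return avg_reward + c * math.sqrt(math.log(parent_visits) / visits)

def select_step(candidates, stats, parent_visits):
    # Pick the candidate reasoning step with the highest UCB1 score.
    # `stats` maps a step to (total_reward, visits).
    def score(step):
        total, visits = stats.get(step, (0.0, 0))
        avg = total / visits if visits else 0.0
        return ucb1(avg, visits, parent_visits)
    return max(candidates, key=score)
```

Repeated over many simulations, selection like this concentrates search on step sequences the reward model verifies, which is the intuition behind the self-evolution rounds.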

Details and links here 👇
Prompt-templates docs: https://moritzlaurer.github.io/prompt_templates/
Templates on the hub: MoritzLaurer/rstar-math-prompts
Prompt-templates collection: MoritzLaurer/prompt-templates-6776aa0b0b8a923957920bb4
Paper: https://arxiv.org/pdf/2501.04519
pagezyhf posted an update 17 days ago
MoritzLaurer posted an update 19 days ago
FACTS is a great paper from @GoogleDeepMind on measuring the factuality of LLM outputs. You can now download their prompt templates from @huggingface to improve LLM-based fact-checking yourself!

๐Ÿ“ The paper introduces the FACTS Grounding benchmark for evaluating the factuality of LLM outputs.

๐Ÿค– Fact-checking is automated by an ensemble of LLM judges that verify if a response is fully grounded in a factual reference document.

๐Ÿงช The authors tested different prompt templates on held-out data to ensure their generalization.

๐Ÿ“š It's highly educational to read these templates to learn how frontier labs design prompts and understand their limitations.

๐Ÿ’พ You can now download and reuse these prompt templates via the prompt-templates library!

๐Ÿ”„ The library simplifies sharing prompt templates on the HF hub or locally via standardized YAML files. Letโ€™s make LLM work more transparent and reproducible by sharing more templates like this!
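In spirit, each judge in such an ensemble receives a prompt built from a template with slots for the reference document and the response. The wording below is an illustrative placeholder, not one of the actual FACTS templates (those are in the linked repo):

```python
# Hypothetical judge-prompt template in the style of grounding checks;
# the wording is invented for illustration, not taken from the FACTS paper.
JUDGE_TEMPLATE = (
    "You are given a reference document and a model response.\n"
    "Reference:\n{context}\n\n"
    "Response:\n{response}\n\n"
    "Is every claim in the response fully supported by the reference? "
    "Answer 'grounded' or 'ungrounded'."
)

def render_judge_prompt(context: str, response: str) -> str:
    # Fill the slots; each LLM judge in the ensemble would score this prompt.
    return JUDGE_TEMPLATE.format(context=context, response=response)
```

Standardizing templates like this is what makes the judges' verdicts comparable and reusable across models.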

Links 👇
- prompt-templates docs: https://moritzlaurer.github.io/prompt_templates/
- all templates on the HF Hub: MoritzLaurer/facts-grounding-prompts
- FACTS paper: https://storage.googleapis.com/deepmind-media/FACTS/FACTS_grounding_paper.pdf
MoritzLaurer posted an update 21 days ago
The TRL v0.13 release is 🔥! My highlights are the new process reward trainer to train models similar to o1 and tool call support:

🧠 Process reward trainer: Enables training of Process-supervised Reward Models (PRMs), which reward the quality of intermediate steps, promoting structured reasoning. Perfect for tasks like stepwise reasoning.

🔀 Model merging: A new callback leverages mergekit to merge models during training, improving performance by blending reference and policy models, optionally pushing merged models to the Hugging Face Hub.

🛠️ Tool call support: TRL preprocessing now supports tool integration, laying the groundwork for agent fine-tuning with examples like dynamic temperature fetching in prompts.

⚖️ Mixture of judges: The new AllTrueJudge combines decisions from multiple binary judges for more nuanced evaluation.

Read the release notes and other resources here 👇
Release: https://github.com/huggingface/trl/releases/tag/v0.13.0
Mergekit: https://github.com/arcee-ai/mergekit
Mixture of judges paper: The Perfect Blend: Redefining RLHF with Mixture of Judges (2409.20370)
andrewrreed posted an update 23 days ago
🚀 Supercharge your LLM apps with Langfuse on Hugging Face Spaces!

Langfuse brings end-to-end observability and tooling to accelerate your dev workflow, from experiments through production.

Now available as a Docker Space directly on the HF Hub! 🤗

๐Ÿ” Trace everything: monitor LLM calls, retrieval, and agent actions with popular frameworks
1โƒฃ One-click deployment: on Spaces with persistent storage and integrated OAuth
๐Ÿ›  Simple Prompt Management: Version, edit, and update without redeployment
โœ… Intuitive Evals: Collect user feedback, run model/prompt evaluations, and improve quality
๐Ÿ“Š Dataset Creation: Build datasets directly from production data to enhance future performance

Kudos to the Langfuse team for this collab and the awesome, open-first product they're building! 👏 @marcklingen @Clemo @MJannik

🔗 Space: langfuse/langfuse-template-space
🔗 Docs: https://huggingface.co/docs/hub/spaces-sdks-docker-langfuse
MoritzLaurer posted an update 23 days ago
OpenAI is losing money on the $200/month subscription 🤯. It's crazy how expensive it is to run the largest LLMs:

- ChatGPT Pro costs $200/month ($2,400/year) and is still unprofitable for OpenAI due to higher-than-expected usage.
- OpenAI reportedly expected losses of about $5 billion on revenue of $3.7 billion last year, with ChatGPT alone once costing an estimated $700,000 per day to operate. 💸🔥
- They build strong models and do great research. Whether this business model will work in the long run is one of the biggest questions in the AI economy today.

Source with the numbers 👇
https://techcrunch.com/2025/01/05/openai-is-losing-money-on-its-pricey-chatgpt-pro-plan-ceo-sam-altman-says/
jeffboudier posted an update 23 days ago
NVIDIA just announced the Cosmos World Foundation Models, available on the Hub: nvidia/cosmos-6751e884dc10e013a0a0d8e6

Cosmos is a family of pre-trained models purpose-built for generating physics-aware videos and world states to advance physical AI development.
The release also includes tokenizers: nvidia/cosmos-tokenizer-672b93023add81b66a8ff8e6

Learn more in this great community article by @mingyuliutw and @PranjaliJoshi https://huggingface.co/blog/mingyuliutw/nvidia-cosmos
MoritzLaurer posted an update 24 days ago
🚀 Releasing a new zeroshot-classifier based on ModernBERT! Some key takeaways:

- ⚡ Speed & efficiency: It's multiple times faster and uses significantly less memory than DeBERTav3. You can use larger batch sizes, and enabling bf16 (instead of fp16) gave me a ~2x speed boost as well.
- 📉 Performance tradeoff: It performs slightly worse than DeBERTav3 on average across my zeroshot classification task collection.
- 🧠 Use cases: I recommend using it for scenarios requiring speed and a larger context window (8k).
- 💡 What's next? I'm preparing a newer version trained on better and longer synthetic data to fully leverage the 8k context window and improve upon the training mix of my older zeroshot-v2.0 models. I also hope that there will be a multilingual variant in the future.
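Under the hood, zeroshot classifiers like this recast classification as natural language inference: the input text is the premise, and each candidate label is turned into a hypothesis whose entailment score ranks the labels. A minimal sketch of that reformulation (the hypothesis template wording is an assumed example, not necessarily the model's exact one):

```python
def build_nli_pairs(text, labels, template="This example is about {}."):
    # One (premise, hypothesis) pair per candidate label; an NLI model
    # then scores entailment for each pair, and the label with the
    # highest entailment probability wins.
    return [(text, template.format(label)) for label in labels]
```

This is why such models can classify into arbitrary label sets with no task-specific training: only the hypothesis text changes.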

Great work by https://huggingface.co/answerdotai!

If you're looking for a high-speed zeroshot classifier, give it a try!

📄 Resources below 👇
Base model: MoritzLaurer/ModernBERT-base-zeroshot-v2.0
Large model: MoritzLaurer/ModernBERT-large-zeroshot-v2.0
Updated zeroshot collection: MoritzLaurer/zeroshot-classifiers-6548b4ff407bb19ff5c3ad6f
ModernBERT collection with paper: answerdotai/modernbert-67627ad707a4acbf33c41deb
MoritzLaurer posted an update about 1 month ago
Quite excited by the ModernBERT release! Small at 0.15B/0.4B parameters, 2T tokens of modern pre-training data, a modern tokenizer, released with code, and an 8k context window: a great, efficient model for embeddings & classification!

This will probably be the basis for many future SOTA encoders! And I can finally stop using DeBERTav3 from 2021 :D

Congrats @answerdotai , @LightOnIO and collaborators like @tomaarsen !

Paper and models here 👇 https://huggingface.co/collections/answerdotai/modernbert-67627ad707a4acbf33c41deb
MoritzLaurer posted an update about 1 month ago
MoritzLaurer posted an update about 2 months ago
I've been building a small library for working with prompt templates on the HF hub: pip install prompt-templates. Motivation:

The community currently shares prompt templates in a wide variety of formats: in datasets, in model cards, as strings in .py files, as .txt/.yaml/.json/.jinja2 files etc. This makes sharing and working with prompt templates unnecessarily complicated.

Prompt templates are currently the main hyperparameter that people tune when building complex LLM systems or agents. If we don't have a common standard for sharing them, we cannot systematically test and improve our systems. After comparing different community approaches, I think that working with modular .yaml or .json files is the best approach.
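As a sketch of why a standard format helps, here is what a modular template stored as JSON might look like and how a consumer could render it. The schema is hypothetical, invented for illustration; the prompt-templates library defines its own format (see the docs linked below).

```python
import json

# Hypothetical on-disk schema for a shared prompt template;
# illustration only, not the prompt-templates library's actual format.
TEMPLATE_JSON = json.dumps({
    "template": "Summarize the following text in {max_words} words:\n{text}",
    "input_variables": ["max_words", "text"],
})

def render(template_json, **variables):
    # Load the template spec and substitute its declared variables.
    spec = json.loads(template_json)
    missing = set(spec["input_variables"]) - set(variables)
    if missing:
        raise ValueError(f"missing variables: {sorted(missing)}")
    return spec["template"].format(**variables)
```

Because the file declares its own input variables, any tool can validate and fill a shared template without knowing where it came from.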

The prompt-templates library:
- proposes a standard for sharing prompts (entirely locally or on the HF hub)
- provides some utilities that are interoperable with the broader ecosystem

Try it:
# !pip install prompt-templates
from prompt_templates import PromptTemplateLoader 
prompt_template = PromptTemplateLoader.from_hub(repo_id="MoritzLaurer/closed_system_prompts", filename="claude-3-5-artifacts-leak-210624.yaml")


The library is in early stages, feedback is welcome!

More details in the docs: https://github.com/MoritzLaurer/prompt_templates/
julien-c posted an update about 2 months ago
After some heated discussion 🔥, we clarify our intent re. storage limits on the Hub

TL;DR:
- public storage is free, and (unless blatant abuse) unlimited. We do ask that you consider upgrading to PRO and/or Enterprise Hub if possible
- private storage is paid above a significant free tier (1TB if you have a paid account, 100GB otherwise)

docs: https://huggingface.co/docs/hub/storage-limits

We optimize our infrastructure continuously to scale our storage for the coming years of growth in machine learning, to the benefit of the community 🔥

cc: @reach-vb @pierric @victor and the HF team
pagezyhf posted an update about 2 months ago
It's December 2nd; here's your Cyber Monday present 🎁!

We're cutting prices on Hugging Face Inference Endpoints and Spaces!

Our folks at Google Cloud are treating us to a 40% price cut on GCP NVIDIA A100 GPUs for the next 3️⃣ months. We have other reductions ranging from 20% to 50% on all instances.

Sounds like the time to give Inference Endpoints a try! Get started today and find the full pricing details in our documentation.
https://ui.endpoints.huggingface.co/
https://huggingface.co/pricing
abhishek posted an update 2 months ago
🎉 SUPER BLACK FRIDAY DEAL 🎉

Train almost any model on a variety of tasks, such as LLM finetuning, text classification/regression, summarization, question answering, image classification/regression, object detection, tabular data, etc., for FREE using AutoTrain locally. 🔥
https://github.com/huggingface/autotrain-advanced
julien-c posted an update 2 months ago
wow 😮

INTELLECT-1 is the first collaboratively trained 10-billion-parameter language model, trained from scratch on 1 trillion tokens of English text and code.

PrimeIntellect/INTELLECT-1-Instruct
victor posted an update 2 months ago
Qwen/QwQ-32B-Preview shows us the future (and it's going to be exciting)...

I tested it against some really challenging reasoning prompts and the results are amazing 🤯.

Check this dataset for the results: victor/qwq-misguided-attention