
Louis Brulé Naudet PRO

louisbrulenaudet

AI & ML interests

Research in business taxation and development, University Dauphine-PSL 📖 | Backed by the Microsoft for Startups Hub program and Google Cloud Platform for startups program | Hugging Face for Legal 🤗

Recent Activity

updated a dataset about 10 hours ago
louisbrulenaudet/bofip
updated a dataset about 10 hours ago
louisbrulenaudet/code-voirie-routiere
updated a dataset about 10 hours ago
louisbrulenaudet/code-travail

Organizations

MISATO-dataset, OpenVINO Toolkit, ONNXConfig for all, Gradio-Themes-Party, scikit-learn, Open-Source AI Meetup, Université Dauphine-PSL, Stable Diffusion Dreambooth Concepts Library, Blog-explorers, OpenOrca, OpenLLM France, huggingPartyParis, Qwen, That Time I got Reincarnated as a Hugging Face Organization, ZeroGPU Explorers, Journalists on Hugging Face, Major TOM, MLX Community, Lemone, Social Post Explorers, Cognitive Computations, C4AI Community, Haiku, Hugging Face for Legal, Hugging Face Discord Community, Dataset Tools, Data Is Better Together Contributor

louisbrulenaudet's activity

reacted to m-ric's post with 🚀 2 days ago
๐—ช๐—ฒ'๐˜ƒ๐—ฒ ๐—ท๐˜‚๐˜€๐˜ ๐—ฟ๐—ฒ๐—น๐—ฒ๐—ฎ๐˜€๐—ฒ๐—ฑ ๐˜€๐—บ๐—ผ๐—น๐—ฎ๐—ด๐—ฒ๐—ป๐˜๐˜€ ๐˜ƒ๐Ÿญ.๐Ÿฏ.๐Ÿฌ ๐Ÿš€, and it comes with a major feature: you can now log agent runs using OpenTelemetry to inspect them afterwards! ๐Ÿ“Š

This interactive format is IMO much easier for inspecting big multi-step runs than endless console logs.

The setup is very easy: just a few lines of code.

Find a tutorial here 👉 https://huggingface.co/docs/smolagents/tutorials/inspect_runs
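
For reference, the setup looks roughly like this (a minimal sketch based on the linked tutorial; the openinference-instrumentation-smolagents package and an OTLP-compatible collector are assumptions):

# pip install smolagents opentelemetry-sdk opentelemetry-exporter-otlp openinference-instrumentation-smolagents
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from openinference.instrumentation.smolagents import SmolagentsInstrumentor

# Export spans to any OTLP-compatible backend (Phoenix, Langfuse, ...)
trace_provider = TracerProvider()
trace_provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))

# From here on, every agent run is traced automatically
SmolagentsInstrumentor().instrument(tracer_provider=trace_provider)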
reacted to MonsterMMORPG's post with 🔥 4 days ago
It is now possible to generate 16 Megapixel (4096x4096) raw images with the SANA 4K model using under 8GB VRAM, 4 Megapixel (2048x2048) images using under 6GB VRAM, and 1 Megapixel (1024x1024) images using under 4GB VRAM, thanks to new optimizations.

13 January 2025 Update

Installers : https://www.patreon.com/posts/from-nvidia-labs-116474081

New 4K Tutorial Video : https://youtu.be/GjENQfHF4W8

The app now uses the Diffusers pipeline, and it has huge VRAM optimizations.

You need to reinstall.

The models will be downloaded into your Hugging Face cache folder the first time you generate something.

How to Get Installation Logs and How to Change Hugging Face Cache Folder:
https://www.patreon.com/posts/108419878

Please make a fresh install.

When you enable all four optimizations, VRAM usage is as follows.

Make sure shared VRAM is enabled, because the initial loading of the model needs more VRAM.

Enable VAE Tiling + Enable VAE Slicing + Enable Model CPU Offload + Enable Sequential CPU Offload:

1K (1024x1024) : 4 GB GPUs
2K (2048x2048) : 6 GB GPUs
4K (4096x4096) : 8 GB GPUs

It may still work on your GPU in any case, so test it.

Enabling just VAE Tiling + Model CPU Offload works great in many cases.

All images attached below were generated with the SANA 4K model; they are raw outputs at 5376x3072 resolution.

Official repo page : https://github.com/NVlabs/Sana
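
For readers using diffusers directly, the four optimizations map onto standard memory-saving calls. A minimal sketch, assuming the SanaPipeline class in diffusers and an illustrative checkpoint id (check the repo for the exact model name):

import torch
from diffusers import SanaPipeline

# Illustrative checkpoint id -- substitute the one referenced by the installer
pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_diffusers",
    torch_dtype=torch.bfloat16,
)

# The four VRAM optimizations listed above
pipe.vae.enable_tiling()         # decode latents in tiles
pipe.vae.enable_slicing()        # decode one image slice at a time
pipe.enable_model_cpu_offload()  # keep idle submodules on the CPU
# pipe.enable_sequential_cpu_offload()  # most aggressive offload; use instead of the line above

image = pipe(prompt="a lighthouse at dawn, cinematic", height=1024, width=1024).images[0]
image.save("sana_1k.png")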
reacted to anakin87's post with ❤️ about 1 month ago
Tulu 3 SFT Mixture by AllenAI is a massive, high-quality, multilingual dataset for fine-tuning language models.

Unfortunately, it was missing the "language" column.

I added it using good old fastText.

Check out the dataset here 👉 anakin87/tulu-3-sft-mixture-with-language
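
The approach is easy to reproduce. A rough sketch of adding such a column with fastText's language-ID model (the lid.176.bin file must be downloaded from the fastText site first; the "messages" column layout is an assumption about the dataset schema):

import fasttext
from datasets import load_dataset

model = fasttext.load_model("lid.176.bin")  # fastText language-ID model, 176 languages

def detect_language(example):
    # fastText expects single-line input; labels come back as "__label__en"
    text = example["messages"][0]["content"].replace("\n", " ")
    (label,), _ = model.predict(text)
    example["language"] = label.replace("__label__", "")
    return example

ds = load_dataset("allenai/tulu-3-sft-mixture", split="train")
ds = ds.map(detect_language)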

reacted to Jaward's post with 🧠 about 2 months ago
Implements the compute-efficient DeepPCR algorithm, which parallelizes sequential operations, speeding up inference and training of neural networks. DeepPCR can significantly reduce the time complexity of operations such as denoising in latent diffusion space from O(L) to O(log2 L).

Code: https://github.com/Jaykef/ai-algorithms/blob/main/deep_pcr.ipynb
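
The notebook has the real thing; as a toy illustration of the core idea (an associative doubling scan that collapses a length-L recurrence into O(log2 L) parallel steps), here is the linear-recurrence special case x_t = a_t * x_{t-1} + b_t — a simplification for intuition, not DeepPCR itself:

import numpy as np

def parallel_linear_recurrence(a, b):
    # Composing affine maps is associative: (a2, b2) o (a1, b1) = (a2*a1, a2*b1 + b2),
    # so all prefixes can be computed in log2(L) doubling steps (Hillis-Steele scan).
    a, b = a.astype(float).copy(), b.astype(float).copy()
    shift = 1
    while shift < len(a):
        b[shift:] = a[shift:] * b[:-shift] + b[shift:]
        a[shift:] = a[shift:] * a[:-shift]
        shift *= 2
    return b  # b[t] now equals x_t (with x_0 = b[0])

# Check against the naive O(L) sequential loop
a, b = np.random.rand(8), np.random.rand(8)
x = [b[0]]
for t in range(1, 8):
    x.append(a[t] * x[-1] + b[t])
assert np.allclose(parallel_linear_recurrence(a, b), x)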
reacted to prithivMLmods's post with 🔥 about 2 months ago
HF Posts Receipts 🏆🚀

[ HF POSTS RECEIPT ] : prithivMLmods/HF-POSTS-RECEIPT

• The one thing that needs to be remembered is the 'username'.

• And yeah, thank you, @maxiw , for creating the awesome dataset and sharing it here! 🙌

• [ Dataset ] : maxiw/hf-posts

@prithivMLmods
reacted to clem's post with 🚀 about 2 months ago
I've been in Brazil for 10 days now 🇧🇷🇧🇷🇧🇷

I've been surprised by the gap between the massive number of people interested in AI (ChatGPT adoption is crazy here) and the relatively low number of real AI builders - aka people and companies building their own AI models, datasets and apps.

Lots of effort is needed across the world for everyone to participate in, control, and benefit from this foundational technology, starting with open-source & multilingual AI, more access to GPUs, and AI-builder training for all!
posted an update 2 months ago
I've published a new dataset to simplify model merging 🤗

This dataset facilitates the search for compatible architectures for model merging with @arcee_ai's mergekit, streamlining the automation of high-performance merge searches 📖

Dataset : louisbrulenaudet/mergekit-configs
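
A quick way to explore it (a short sketch; inspect ds.features for the exact column names, which I'm not assuming here):

from datasets import load_dataset

ds = load_dataset("louisbrulenaudet/mergekit-configs", split="train")
print(ds.features)  # inspect the available columns (architectures, configs, ...)
print(ds[0])        # look at one entry before filtering for compatible pairs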
reacted to m-ric's post with 🔥 2 months ago
๐—ค๐˜„๐—ฒ๐—ป๐Ÿฎ.๐Ÿฑ-๐—–๐—ผ๐—ฑ๐—ฒ๐—ฟ-๐Ÿฏ๐Ÿฎ๐—•: ๐—ป๐—ฒ๐˜„ ๐—ฏ๐—ฒ๐˜€๐˜-๐—ถ๐—ป-๐—ฐ๐—น๐—ฎ๐˜€๐˜€ ๐—ผ๐—ฝ๐—ฒ๐—ป ๐—ฐ๐—ผ๐—ฑ๐—ถ๐—ป๐—ด ๐—บ๐—ผ๐—ฑ๐—ฒ๐—น, ๐—ฏ๐—ฒ๐—ฎ๐˜๐˜€ ๐—š๐—ฃ๐—ง-๐Ÿฐ๐—ผ ๐—ผ๐—ป ๐—บ๐—ผ๐˜€๐˜ ๐—ฐ๐—ผ๐—ฑ๐—ถ๐—ป๐—ด ๐—ฏ๐—ฒ๐—ป๐—ฐ๐—ต๐—บ๐—ฎ๐—ฟ๐—ธ๐˜€!๐Ÿ’ฅ

💪 It's the first time an open-source coding model of this size class has clearly matched GPT-4o's coding capabilities!

✨ Complements the previous two Qwen 2.5 Coder releases with four new sizes: 0.5B, 3B, 14B, 32B
📚 Supports long context up to 128K (for the 14B and 32B models)
✅ Drop-in replacement for GPT-4o as a coding assistant on Cursor or for Artifacts!
🤗 Models available right now on the Hub, under the Apache 2.0 license!

They have set up a crazy Artifacts demo; you should go have a look!
👉 Qwen/Qwen2.5-Coder-Artifacts
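
Trying the 32B Instruct variant locally is a few lines of standard transformers usage (a sketch; quantize or shard to fit your hardware):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-32B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))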
reacted to m-ric's post with 👀 2 months ago
A non-Instruct LLM assistant is mostly useless. 🧐

Since it's mostly a model trained to complete text, when you ask it a question like "What to do during a stopover in Paris?", it can just go on and on adding more details to your question instead of answering, which would be a valid way to complete text from its training corpus, but not to answer questions.

โžก๏ธ So the post-training stage includes an important Instruction tuning step where you teach your model how to be useful : answer questions, be concise, be polite... RLHF is a well known technique for this.

For people interested in understanding how this step works, the folks at Adaptive ML have made a great guide!

Read it here 👉 https://www.adaptive-ml.com/post/from-zero-to-ppo
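
As a concrete (and hedged) illustration of the supervised half of that step, TRL's SFTTrainer can instruction-tune a small base model on any chat-formatted dataset — a minimal sketch with assumed model and dataset choices, not the guide's exact recipe:

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Any dataset with a chat-style "messages" column works here
dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",  # a small non-Instruct base model
    train_dataset=dataset,
    args=SFTConfig(output_dir="sft-demo"),
)
trainer.train()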
reacted to prithivMLmods's post with 🤝 2 months ago
New Style, New Mix, New Drop 🧤

🧨 Flux LoRA DLC: prithivMLmods/FLUX-LoRA-DLC

🎆 Glowing-Body: prithivMLmods/Glowing-Body-Flux-LoRA
🎆 Electric-Blue: prithivMLmods/Electric-Blue-Flux-LoRA
🎆 Intense-Red: prithivMLmods/Intense-Red-Flux-LoRA
🎆 Clouds-Illusion: prithivMLmods/Clouds-Illusion-Flux-LoRA
🎆 Digital-Yellow: prithivMLmods/Digital-Yellow-Flux-LoRA

🧨 Flux LoRA Collection: prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be

@prithivMLmods
reacted to m-ric's post with 🚀 2 months ago
๐—”๐—ป๐—ฑ๐—ฟ๐—ผ๐—ถ๐—ฑ๐—Ÿ๐—ฎ๐—ฏ: ๐—™๐—ถ๐—ฟ๐˜€๐˜ ๐—ฒ๐˜ƒ๐—ฒ๐—ฟ ๐˜€๐˜†๐˜€๐˜๐—ฒ๐—บ๐—ฎ๐˜๐—ถ๐—ฐ ๐—ฏ๐—ฒ๐—ป๐—ฐ๐—ต๐—บ๐—ฎ๐—ฟ๐—ธ ๐—ณ๐—ผ๐—ฟ ๐—”๐—ป๐—ฑ๐—ฟ๐—ผ๐—ถ๐—ฑ ๐—บ๐—ผ๐—ฏ๐—ถ๐—น๐—ฒ ๐—ฎ๐—ด๐—ฒ๐—ป๐˜๐˜€ ๐˜€๐—ต๐—ผ๐˜„๐˜€ ๐˜๐—ต๐—ฎ๐˜ ๐˜€๐—บ๐—ฎ๐—น๐—น, ๐—ณ๐—ถ๐—ป๐—ฒ-๐˜๐˜‚๐—ป๐—ฒ๐—ฑ ๐—ผ๐—ฝ๐—ฒ๐—ป ๐—บ๐—ผ๐—ฑ๐—ฒ๐—น๐˜€ ๐—ฐ๐—ฎ๐—ป ๐—ฝ๐—ผ๐˜„๐—ฒ๐—ฟ ๐—ฎ ๐—๐—”๐—ฅ๐—ฉ๐—œ๐—ฆ ๐˜€๐˜†๐˜€๐˜๐—ฒ๐—บ ๐—ผ๐—ป ๐˜†๐—ผ๐˜‚๐—ฟ ๐˜€๐—บ๐—ฎ๐—ฟ๐˜๐—ฝ๐—ต๐—ผ๐—ป๐—ฒ ๐Ÿ“ฑ๐Ÿ”ฅ

A team from Tsinghua University just released AndroidLab, the first systematic framework to evaluate and train Android mobile agents that works with both text-only and multimodal models.

They show that fine-tuning small open-source models can significantly boost performance, matching that of much bigger closed models like GPT-4o.

The team built:

📊 A reproducible benchmark with 138 tasks across 9 apps to evaluate mobile agents systematically

๐Ÿ“๐Ÿ“ฑย A framework supporting both text-only (via XML) and visual (via marked screenshots) interfaces

✅ An instruction dataset of 10.5k operation traces for training mobile agents

Key insights:

- 📈 Fine-tuning improves performance BY A LOT: the open-source model Llama-3.1-8B improves from a 2% to a 24% success rate after training, nearly reaching GPT-4o performance although it's much smaller
- โš™๏ธ Text-only agents match multimodal ones: XML-based agents achieve similar performance to screenshot-based multimodal agents.

Read their paper here 👉 AndroidLab: Training and Systematic Benchmarking of Android Autonomous Agents (2410.24024)
reacted to abhishek's post with 🔥 2 months ago
INTRODUCING Hugging Face AutoTrain Client 🔥
Fine-tuning models got even easier!!!!
Now you can fine-tune SOTA models on all compatible dataset-model pairs on the Hugging Face Hub using Python, on Hugging Face servers. Choose from a number of GPU flavors, millions of models and dataset pairs, and 10+ tasks 🤗

To try, install autotrain-advanced using pip. You can skip dependencies and install with --no-deps, but then you'd need to install some dependencies by hand.

"pip install autotrain-advanced"

Github repo: https://github.com/huggingface/autotrain-advanced
reacted to prithivMLmods's post with ❤️ 2 months ago
Style flo : : 🎉🤗

{ Try Now on Flux LoRA DLC ⛵ } : prithivMLmods/FLUX-LoRA-DLC

-- Undersea
{ Red Fluid } : prithivMLmods/Red-Undersea-Flux-LoRA

-- 3D Realmix
{ 3D Portrait Render } : prithivMLmods/3D-Render-Flux-LoRA

-- Pop
{ Yellow Pop } : prithivMLmods/Yellow-Pop-Flux-Dev-LoRA

-- Grid
{ Purple Grid } : prithivMLmods/Purple-Grid-Flux-LoRA

{ collections : : }

🚀 Flux LoRA :
prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be

🚀 Collection zero: prithivMLmods/collection-zero-and-demo-recently-updated-65e48a7dd8212873836ceca2


@prithivMLmods 🧨
reacted to yagilb's post with 👀 2 months ago
reacted to singhsidhukuldeep's post with 👀 2 months ago
Exciting Research Alert: Revolutionizing Dense Passage Retrieval with Entailment Tuning!

The good folks at HKUST have developed a novel approach that significantly improves information retrieval by leveraging natural language inference.

The entailment tuning approach consists of several key steps to enhance dense passage retrieval performance.

Data Preparation
- Convert questions into existence claims using rule-based transformations.
- Combine retrieval data with NLI data from SNLI and MNLI datasets.
- Unify the format of both data types using a consistent prompting framework.

Entailment Tuning Process
- Initialize the model using pre-trained language models like BERT or RoBERTa.
- Apply aggressive masking (β=0.8) specifically to the hypothesis components while preserving premise information.
- Train the model to predict the masked hypothesis tokens from the premise content.
- Run the training for 10 epochs using 8 GPUs, taking approximately 1.5-3.5 hours.

Training Arguments for Entailment Tuning (Yes! They Shared Them)
- Use a learning rate of 2e-5 with 100 warmup steps.
- Set batch size to 128.
- Apply weight decay of 0.01.
- Utilize the Adam optimizer with beta values (0.9, 0.999).
- Maintain maximum gradient norm at 1.0.

Deployment
- Index passages using FAISS for efficient retrieval.
- Shard vector store across multiple GPUs.
- Enable sub-millisecond retrieval of the top-100 passages per query.

Integration with Existing Systems
- Insert entailment tuning between pre-training and fine-tuning stages.
- Maintain compatibility with current dense retrieval methods.
- Preserve existing contrastive learning approaches during fine-tuning.

Simple, intuitive, and effective!

This advancement significantly improves the quality of retrieved passages for question-answering systems and retrieval-augmented generation tasks.
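
As a rough sketch of the core objective (my reading of the recipe above, not the authors' code): encode premise and hypothesis as a BERT pair, mask ~80% of the hypothesis tokens only, and compute the MLM loss on those positions.

import random
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

premise = "The Eiffel Tower, completed in 1889, stands in Paris."
hypothesis = "There exists a tower in Paris."  # a question rewritten as an existence claim

enc = tokenizer(premise, hypothesis, return_tensors="pt")
labels = enc["input_ids"].clone()

# token_type_ids == 1 marks the hypothesis segment in BERT-style pair encodings
hyp_positions = (enc["token_type_ids"][0] == 1).nonzero().flatten().tolist()[:-1]  # keep final [SEP]

beta = 0.8  # aggressive masking rate from the paper
masked = [p for p in hyp_positions if random.random() < beta]
keep = [i for i in range(labels.shape[1]) if i not in masked]
enc["input_ids"][0, masked] = tokenizer.mask_token_id
labels[0, keep] = -100  # compute loss only on the masked hypothesis tokens

loss = model(**enc, labels=labels).loss  # standard MLM loss, hypothesis-only
loss.backward()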
reacted to reach-vb's post with 🚀 3 months ago
Smol models ftw! AMD released AMD OLMo 1B - beats OpenELM, TinyLlama on MT Bench, Alpaca Eval - Apache 2.0 licensed 🔥

> Trained with 1.3 trillion (dolma 1.7) tokens on 16 nodes, each with 4 MI250 GPUs

> Three checkpoints:

- AMD OLMo 1B: Pre-trained model
- AMD OLMo 1B SFT: Supervised fine-tuned on Tulu V2, OpenHermes-2.5, WebInstructSub, and Code-Feedback datasets
- AMD OLMo 1B SFT DPO: Aligned with human preferences using Direct Preference Optimization (DPO) on UltraFeedback dataset

Key Insights:
> Pre-trained with less than half the tokens of OLMo-1B
> Post-training steps include two-phase SFT and DPO alignment
> Data for SFT:
- Phase 1: Tulu V2
- Phase 2: OpenHermes-2.5, WebInstructSub, and Code-Feedback

> Model checkpoints on the Hub & integrated with Transformers ⚡️

Congratulations & kudos to AMD on a brilliant smol model release! 🤗

amd/amd-olmo-6723e7d04a49116d8ec95070
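
Loading the DPO-aligned checkpoint is standard transformers usage (a sketch; the repo id is my assumption from the collection above, double-check the exact casing):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amd/AMD-OLMo-1B-SFT-DPO"  # assumed repo name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("What is direct preference optimization?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))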
replied to their post 3 months ago

Hello,

Thank you for reaching out. I'm interested in learning more about its potential applications and dataset specifics. To ensure we're aligned on objectives and timelines, would you mind providing a bit more detail on the following in the Tally form? (https://tally.so/r/w2xe0A)

  • Project Goals: What are the primary objectives for your model, and how do you envision deploying it?
  • Data and Compute Requirements: Could you outline the volume and nature of data you'd like to process and any specific requirements for H100 access?
  • Finetuning Method: I'd be interested to hear more about your finetuning approach. Do you have a plan for iterations or specific benchmarks in mind?

Please submit your responses via the form to streamline our discussion. Once we have the foundational details clarified, we can determine the next steps and see how best to leverage the Azure credits together.

Looking forward to exploring the possibilities.

Best regards, Louis

replied to their post 3 months ago

Hello @Siddartha10 ,

Thank you for reaching out! I'm excited to hear about your work and the potential for collaboration.

To help assess how best to support your project, could you please share a bit more detail? Specifically:

  • Project Overview: A brief description of your project and its objectives.
  • Data Preparedness: Whether your data is ready for immediate use and the nature of this data.
  • Expected Outcomes: The goals or deliverables you anticipate achieving with this additional compute power.

Feel free to submit your details via this form Tally form (https://tally.so/r/w2xe0A) so we can proceed efficiently.

Looking forward to learning more about your project and potentially collaborating!

Best regards,
Louis

replied to their post 3 months ago

Hi @Pankaj8922 ,

Thank you for reaching out and sharing your project concept! For this collaboration, I'm specifically seeking projects that already have data prepared and ready for immediate use, as the Azure credits are limited and focused on applications that can be initiated without additional data generation steps.

If you have any projects with data fully prepared, feel free to submit details through the form here: https://tally.so/r/w2xe0A.

Best of luck with your synthetic dataset project!

posted an update 3 months ago
Introducing Lemone-router, a series of classification models designed to produce an optimal multi-agent system for different branches of tax law.

Trained on a base of 49k lines comprising synthetic questions generated by GPT-4 Turbo and Llama 3.1 70B, further refined through evol-instruction tuning, manual curation, and authority documents, these models are based on an 8-category decomposition of the classification scheme derived from the Bulletin officiel des finances publiques - impôts:

label2id = {
    "Bénéfices professionnels": 0,
    "Contrôle et contentieux": 1,
    "Dispositifs transversaux": 2,
    "Fiscalité des entreprises": 3,
    "Patrimoine et enregistrement": 4,
    "Revenus particuliers": 5,
    "Revenus patrimoniaux": 6,
    "Taxes sur la consommation": 7
}

id2label = {
    0: "Bénéfices professionnels",
    1: "Contrôle et contentieux",
    2: "Dispositifs transversaux",
    3: "Fiscalité des entreprises",
    4: "Patrimoine et enregistrement",
    5: "Revenus particuliers",
    6: "Revenus patrimoniaux",
    7: "Taxes sur la consommation"
}

It achieves the following results on the evaluation set:
- Loss: 0.4734
- Accuracy: 0.9191

Link to the collection: louisbrulenaudet/lemone-router-671cce21d6410f3570514762
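
With the labels above, routing a query is a one-liner through the text-classification pipeline (a sketch; the model id below is illustrative, pick an actual checkpoint from the linked collection):

from transformers import pipeline

# Illustrative checkpoint id -- substitute one from the collection
router = pipeline("text-classification", model="louisbrulenaudet/lemone-router-l")

query = "Quelles sont les modalités de déclaration de la TVA pour une PME ?"
print(router(query))  # e.g. [{'label': 'Taxes sur la consommation', 'score': 0.98}]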