Victor Mustar
victor's activity
• Free storage with generous limits
• Dataset Viewer (sorting, filtering, full-text search)
• Third-party library support
• SQL Console
• Security
• Community, reach, and visibility
It's a no-brainer!
Check out our post on what you get instantly out of the box when you create a dataset.
https://huggingface.co/blog/researcher-dataset-sharing
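For anyone who hasn't tried it yet, here is a minimal sketch of publishing a dataset with the `datasets` library (the repo id below is a placeholder); everything in the list above then works out of the box:

```python
from datasets import Dataset

# Build a small dataset from in-memory columns (a pandas DataFrame works too).
ds = Dataset.from_dict({
    "text": ["hello", "world"],
    "label": [0, 1],
})

# Push it to the Hub. Requires being logged in (`huggingface-cli login` or HF_TOKEN).
# "your-username/demo-dataset" is a placeholder repo id.
ds.push_to_hub("your-username/demo-dataset")
```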
Awesome, I really like comparing TTS voices! Contributing right now.
Pendrokar/TTS-Spaces-Arena
Svngoku/maskgct-audio-lab
hexgrad/Kokoro-TTS
I chose @Svngoku's forked HF Space over amphion's because of the latter's overly high ZeroGPU duration requirement: 300s!
amphion/maskgct
Had to remove @mrfakename's MetaVoice-1B Space from the available models, as that Space has been down for quite some time.
mrfakename/MetaVoice-1B-v0.1
I'm close to syncing the code with the original Arena's code structure. Then I'd like to use ASR to validate the generated samples and build synthetic public datasets from them (rough sketch below), and after that make the Arena multilingual, which should attract quite a crowd!
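As an illustration of that ASR validation idea (this is not the Arena's code; the Whisper checkpoint and similarity threshold are assumptions), one could transcribe each generated sample and keep it only if the transcript is close enough to the prompt:

```python
from difflib import SequenceMatcher
from transformers import pipeline

# Whisper is one possible ASR backend; any ASR pipeline would do here.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

def is_valid_sample(audio_path: str, prompt_text: str, threshold: float = 0.8) -> bool:
    """Keep a generated TTS sample only if its transcript matches the prompt closely."""
    transcript = asr(audio_path)["text"]
    similarity = SequenceMatcher(
        None, transcript.lower().strip(), prompt_text.lower().strip()
    ).ratio()
    return similarity >= threshold
```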
More details: https://huggingface.co/zh-ai-community
Code model:
✨ Qwen 2.5 Coder by Alibaba Qwen
Qwen/qwen25-coder-66eaa22e6f99801bf65b0c2f
✨ OpenCoder by InflyAI - a fully open code model
infly/opencoder-672cec44bbb86c39910fb55e
3D model:
✨ Hunyuan3D-1.0 by Tencent
tencent/Hunyuan3D-1
MLLM:
✨ JanusFlow by DeepSeek
deepseek-ai/JanusFlow-1.3B
✨ Mono-InternVL-2B by OpenGVLab
OpenGVLab/Mono-InternVL-2B
Video model:
✨ CogVideoX 1.5 by ChatGLM
THUDM/CogVideoX1.5-5B-SAT
Audio model:
✨ Fish Agent by FishAudio
fishaudio/fish-agent-v0.1-3b
Dataset:
✨ OPI dataset by BAAIBeijing
BAAI/OPI
→ fffiloni/DimensionX
Paper: DimensionX: Create Any 3D and 4D Scenes from a Single Image with Controllable Video Diffusion (2411.04928)
Examples by the amazing William Lamkin @phanes
I think we have a perfect match here!
Add an open-source chat assistant to your website in 5 minutes: https://github.com/phospho-app/ai-chat-bubble
How does it work?
- You give it a URL
- The AI assistant crawls the website content and embeds it
- You add it to your frontend in one line of code
- People on your website can ask the assistant questions
Powered by BAAI/bge-small-en-v1.5 and Mistral AI
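This is not the ai-chat-bubble code itself (the project handles crawling, chunking, and the frontend widget for you), but a rough Python sketch of the crawl-and-embed step using the same embedding model:

```python
import requests
from bs4 import BeautifulSoup
from sentence_transformers import SentenceTransformer

# The same embedding model the project mentions.
embedder = SentenceTransformer("BAAI/bge-small-en-v1.5")

def crawl_and_embed(url: str):
    """Fetch a page, split it into rough fixed-size chunks, and embed each chunk."""
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ")
    chunks = [text[i:i + 500] for i in range(0, len(text), 500)]
    return list(zip(chunks, embedder.encode(chunks)))
```

At query time, the visitor's question is embedded the same way, the closest chunks are retrieved, and they are passed as context to the Mistral model that writes the answer.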
The list covers three main categories:
1. Web Search with LLM summarization and follow-up capabilities
2. LLM chat interfaces with Web Search integration
3. Agent-driven research tools using LLM + Web Search
The timeline helps track the evolution of this space and serves as a reference for anyone looking for alternatives. If you know of any tools that should be included, please contribute by:
- opening a PR to edit the readme: https://github.com/felladrin/awesome-ai-web-search/edit/main/readme.md
- creating an issue in the repository: https://github.com/felladrin/awesome-ai-web-search/issues/new/choose
- or sharing in the comments below.
Some interesting stats:
Top 5 Authors by Total Impressions:
-----------------------------------
@merve : 171,783 impressions (68 posts)
@fdaudens : 135,253 impressions (81 posts)
@singhsidhukuldeep : 122,591 impressions (81 posts)
@akhaliq : 119,526 impressions (78 posts)
@MonsterMMORPG : 112,500 impressions (45 posts)
Top 5 Users by Number of Reactions Given:
----------------------------------------
@osanseviero : 1278 reactions
@clem : 910 reactions
@John6666 : 899 reactions
@victor : 674 reactions
@samusenps : 655 reactions
Top 5 Most Used Reactions:
-------------------------
❤️: 7048 times
🔥: 5921 times
👍: 4856 times
🚀: 2549 times
🤗: 2065 times
When I am watching an organization's repos, notifications are sent out by default whenever there is PR or community activity in any of those repos, and I seem to have to mute notifications for each repo individually if I want to stop them.
We have updated https://huggingface.co/settings/notifications, and it should now be easier to unwatch repositories in batches. It is also now possible to watch a single repository's activity from the repo page. Please let us know if you have any feedback (cc @Sylvestre).
🔥 Stranger Zone's Super Realism [ strangerzonehf/Flux-Super-Realism-LoRA ]
Demo 1: prithivMLmods/FLUX-LoRA-DLC
Demo 2: prithivMLmods/FLUX-REALISM
Other new adapters for FLUX.1-dev [updated patch 2]
Hosted -> prithivMLmods/FLUX-LoRA-DLC
- Digital Chaos: prithivMLmods/Digital-Chaos-Flux-LoRA
- Threaded Knitted: prithivMLmods/Knitted-Character-Flux-LoRA
- Fashion Hut: prithivMLmods/Fashion-Hut-Modeling-LoRA
- Aura 9999: prithivMLmods/Aura-9999
- Green Cartoon: prithivMLmods/Green-Cartoon-Flux-LoRA
- Pastels: prithivMLmods/Pastel-BG-Flux-LoRA
- Retro Pixel: prithivMLmods/Retro-Pixel-Flux-LoRA
- CAnime: prithivMLmods/CAnime-LoRA
------------
- LoRA Collection: prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be
- LoRA Spaces: prithivMLmods/lora-space-collections-6714b72e0d49e1c97fbd6a32
- Collection Zero: prithivMLmods/collection-zero-and-demo-recently-updated-65e48a7dd8212873836ceca2
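If you want to try one of these adapters locally, here is a minimal diffusers sketch (the prompt is only illustrative; check each model card for its trigger words and recommended settings):

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base pipeline, then attach one of the LoRA adapters listed above.
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("strangerzonehf/Flux-Super-Realism-LoRA")
pipe.to("cuda")

image = pipe(
    "Super Realism, portrait photo of a woman, natural light",
    num_inference_steps=28,
).images[0]
image.save("super_realism.png")
```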
------------
@prithivMLmods 🤗
It's 100% RNN and attention-free. MMLU is 54.2% (vs. 47.9% for the previous world-v2.1; note: without eval-boosting tricks such as annealing).
RWKV-7-world-v4 soon :)
LLMs don't have direct access to the internet because they work with the information they were trained on, which is usually a snapshot of data from before a certain date. However, it's possible to retrieve some information from the internet and put it into the context of the LLM. I invite you to look at how we do this in HuggingChat: https://github.com/huggingface/chat-ui/blob/main/src/lib/server/websearch/runWebSearch.ts
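The linked HuggingChat implementation is in TypeScript, but the general pattern boils down to something like the following Python sketch (the web_search helper is hypothetical and stands in for the real search, scraping, and ranking steps):

```python
from huggingface_hub import InferenceClient

client = InferenceClient("meta-llama/Meta-Llama-3-8B-Instruct")  # any instruct model works

def answer_with_web_context(question: str) -> str:
    # `web_search` is a hypothetical helper: in practice you call a search API,
    # scrape the top results, and keep only the most relevant passages.
    snippets = web_search(question)
    context = "\n".join(snippets)
    messages = [
        {"role": "system", "content": f"Answer the user using this web context:\n{context}"},
        {"role": "user", "content": question},
    ]
    return client.chat_completion(messages, max_tokens=512).choices[0].message.content
```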
I have trained a Qwen 14B model on a smaller dataset, but it's now tricky because I have nowhere to use it via inference (the paid inference on HF costs quite a lot). Does anyone know where I can deploy my model and use it via API for a reasonable cost, or ideally for free? Thanks.
Read more about the work at NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks (2410.20650)
Large Reasoning Models powered by Monte Carlo Tree Search (MCTS), Self-Play Reinforcement Learning, PPO, AlphaGo Zero's dual-policy paradigm, and Large Language Models!
https://github.com/SimpleBerry/LLaMA-O1/
What will happen when you compound MCTS ❤ LLM ❤ Self-Play ❤ RLHF?
Just a little bite of strawberry! 🍓
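To make the MCTS ingredient concrete, here is a heavily simplified UCT selection step (the heart of MCTS); this is a generic illustration, not the LLaMA-O1 code, and the node structure is assumed:

```python
import math

def uct_select(children, exploration=1.4):
    """Pick the child with the highest UCT score.

    Each child is a dict with 'visits' and 'value' (sum of rewards). In an
    LLaMA-O1-style setup the children are candidate next reasoning steps
    sampled from the LLM, with rewards coming from self-play / a reward model.
    """
    total_visits = sum(c["visits"] for c in children)

    def uct(c):
        if c["visits"] == 0:
            return float("inf")  # always try unvisited candidates first
        exploit = c["value"] / c["visits"]
        explore = exploration * math.sqrt(math.log(total_visits) / c["visits"])
        return exploit + explore

    return max(children, key=uct)
```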
Past related works:
LLaMA-Berry: Pairwise Optimization for O1-like Olympiad-Level Mathematical Reasoning (2410.02884)
Accessing GPT-4 level Mathematical Olympiad Solutions via Monte Carlo Tree Self-refine with LLaMa-3 8B (2406.07394)