AI & ML interests

None defined yet.

Abhaykoul
posted an update about 15 hours ago
🚀 Ever dreamed of training your own Large Language Model from scratch? What if I told you it doesn't require a supercomputer or PhD in ML? 🤯

Introducing LLM Trainer - the educational framework that makes LLM training accessible to EVERYONE! Whether you're on a CPU-only laptop or scaling to distributed GPUs, we've got you covered. 💻➡️🖥️

Why LLM Trainer? Because existing tools are either too simplistic (hiding the magic) or too complex (requiring expert knowledge). We bridge the gap with:

🎓 Educational transparency - every component built from scratch with clear code
💻 CPU-first approach - start training immediately, no GPU needed
🔧 Full customization - modify anything you want
📈 Seamless scaling - from laptop to cluster without code changes
🤝 HuggingFace integration - works with existing models & tokenizers

Key highlights:
✅ Built-in tokenizers (BPE, WordPiece, HF wrappers)
✅ Complete Transformer implementation from scratch
✅ Optimized for CPU training
✅ Advanced features: mixed precision, gradient checkpointing, multiple generation strategies
✅ Comprehensive monitoring & metrics

Perfect for:
- Students learning transformers
- Researchers prototyping new ideas
- Developers building domain-specific models

Ready to train your first LLM? It's easier than you think!
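If you want a feel for what "from scratch, on a CPU" means before diving into the repo, here's a minimal character-level Transformer training loop in plain PyTorch. This is only an illustrative sketch, not the llm-trainer API; every name in it is made up for the example.

```python
# Illustrative only: a tiny character-level Transformer LM trained on CPU.
# This is NOT the llm-trainer API; it just shows the kind of loop such a
# framework wraps (tokenize -> batch -> forward -> loss -> optimizer step).
import torch
import torch.nn as nn
import torch.nn.functional as F

text = "hello world, this is a tiny training corpus. " * 200
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
data = torch.tensor([stoi[ch] for ch in text])

class TinyLM(nn.Module):
    def __init__(self, vocab_size, d_model=64, n_heads=4, n_layers=2, ctx=32):
        super().__init__()
        self.ctx = ctx
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(ctx, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, idx):
        t = idx.size(1)
        x = self.tok_emb(idx) + self.pos_emb(torch.arange(t))
        causal_mask = nn.Transformer.generate_square_subsequent_mask(t)
        return self.head(self.blocks(x, mask=causal_mask))

model = TinyLM(len(vocab))
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

for step in range(200):
    ix = torch.randint(0, len(data) - model.ctx - 1, (8,)).tolist()   # random batch offsets
    xb = torch.stack([data[i:i + model.ctx] for i in ix])             # input characters
    yb = torch.stack([data[i + 1:i + model.ctx + 1] for i in ix])     # next-character targets
    logits = model(xb)
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), yb.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 50 == 0:
        print(f"step {step}: loss {loss.item():.3f}")
```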

🔗 Check it out: https://github.com/HelpingAI/llm-trainer
📚 Docs: Getting Started Guide
💬 Join the community: GitHub Discussions

#AI #MachineLearning #LLM #DeepLearning #OpenSource #Python #HuggingFace #NLP

Special thanks to HuggingFace and PyTorch teams for the amazing ecosystem! 🙏
Nymbo
posted an update 3 days ago
I have a few updates to my MCP server I wanna share: New Memory tool, improvements to web search & speech generation.

# Memory_Manager Tool

We now have a Memory_Manager tool. Ask ChatGPT to write all its memories verbatim, then tell gpt-oss-20b to save each one using the tool, then take them anywhere! It stores memories in a memories.json file in the repo, no external database required.

The Memory_Manager tool is currently hidden from the HF space because it's intended for local use. It's enabled by providing a HF_READ_TOKEN in the env secrets, although it doesn't actually use the key for anything. There's probably a cleaner way of ensuring memory is only used locally, I'll come back to this.
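For anyone curious what a JSON-backed memory store boils down to, here's a rough sketch of the idea (illustrative only, not the actual Memory_Manager code; these function names are made up):

```python
# Rough sketch of a memories.json-backed store (illustrative only, not the
# actual Memory_Manager implementation; these function names are made up).
import json
from pathlib import Path

MEMORY_FILE = Path("memories.json")

def load_memories() -> list[str]:
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text(encoding="utf-8"))
    return []

def save_memory(text: str) -> None:
    memories = load_memories()
    memories.append(text)
    MEMORY_FILE.write_text(json.dumps(memories, indent=2), encoding="utf-8")

save_memory("User prefers concise answers.")
print(load_memories())
```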

# Fetch & Websearch

The Fetch_Webpage tool has been simplified a lot. It now converts the page to Markdown and returns it at one of three length settings (Brief, Standard, Full). This is a lot more reliable than the old custom extraction method.
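Roughly, the idea looks like this (a sketch assuming requests + html2text, not the tool's actual code):

```python
# Rough sketch of "fetch page -> Markdown -> cut to a length setting".
# Assumes requests + html2text; not the Fetch_Webpage tool's actual code.
import requests
import html2text  # pip install html2text

LENGTHS = {"Brief": 2_000, "Standard": 8_000, "Full": None}

def fetch_webpage(url: str, length: str = "Standard") -> str:
    html = requests.get(url, timeout=30).text
    converter = html2text.HTML2Text()
    converter.ignore_images = True
    markdown = converter.handle(html)
    limit = LENGTHS[length]
    return markdown if limit is None else markdown[:limit]

print(fetch_webpage("https://huggingface.co/blog", length="Brief"))
```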

The Search_DuckDuckGo tool has a few small improvements. The input is easier for small models to get right, and the output is more readable.
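Something like this compact format is what keeps the output friendly to small models (a sketch assuming the duckduckgo_search package, not the tool's actual implementation):

```python
# Rough sketch of a compact, model-friendly search output (assumes the
# duckduckgo_search package; not the tool's actual implementation).
from duckduckgo_search import DDGS  # pip install duckduckgo-search

def search_duckduckgo(query: str, max_results: int = 5) -> str:
    with DDGS() as ddgs:
        results = ddgs.text(query, max_results=max_results)
    # Keep each result to a short, uniform "title - url / snippet" shape so
    # small models can read it without burning much context.
    lines = [f"{i + 1}. {r['title']} - {r['href']}\n   {r['body']}" for i, r in enumerate(results)]
    return "\n".join(lines)

print(search_duckduckgo("Hugging Face MCP server"))
```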

# Speech Generation

I've added the remaining voices for Kokoro-82M; it now supports all 54 voices across all accents/languages.

I also removed the 30-second cap by making sure it computes all chunks in sequence, not just the first. I've tested it on outputs that are ~10 minutes long. Do note that when used as an MCP server, the tool will time out after 1 minute; nothing I can do about that right now.
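The chunking fix boils down to concatenating every synthesized segment instead of stopping at the first one. A rough sketch, assuming the kokoro package's KPipeline interface (not the space's actual code):

```python
# Rough sketch: synthesize every chunk in sequence and concatenate, instead
# of returning only the first chunk. Assumes the `kokoro` package's KPipeline
# interface; not the space's actual code.
import numpy as np
import soundfile as sf
from kokoro import KPipeline  # pip install kokoro soundfile

pipeline = KPipeline(lang_code="a")  # American English voices
text = "A long passage that would previously have been cut off at ~30 seconds. " * 40

chunks = []
for _, _, audio in pipeline(text, voice="af_heart"):
    chunks.append(np.asarray(audio))  # each iteration yields one synthesized segment

sf.write("output.wav", np.concatenate(chunks), 24000)  # Kokoro outputs 24 kHz audio
```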

# Other Thoughts

Lots of MCP use cases involve manipulating media (image editing, ASR, etc.). I've avoided adding tools like this so far for two reasons:

1. Most of these solutions would require assigning it a ZeroGPU slot.
2. The current process of uploading files like images to a Gradio space is still a bit rough. It's doable but requires additional tools.

Both of these points make it a bit painful for local usage. I'm open to suggestions for other tools that rely on text.
Nymbo
posted an update 16 days ago
I built a general use MCP space ~ Fetch webpages, DuckDuckGo search, Python code execution, Kokoro TTS, Image Gen, Video Gen.

# Tools

1. Fetch webpage
2. Web search via DuckDuckGo (very concise, low excess context)
3. Python code executor
4. Kokoro-82M speech generation
5. Image Generation (use any model from HF Inference Providers)
6. Video Generation (use any model from HF Inference Providers)

The first four tools can be used without any API keys whatsoever. DDG search is free, and the code execution and speech gen are done on CPU. Having a HF_READ_TOKEN in the env variables will show all tools; if there isn't a key present, the Image/Video Gen tools are hidden.
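The gating is just an environment-variable check at startup. A minimal sketch of the pattern, assuming Gradio's mcp_server launch flag (not the space's actual code):

```python
# Minimal sketch: hide GPU-backed tools unless an HF token is present.
# Assumed pattern only, not the Nymbo/Tools source.
import os
import gradio as gr

HAS_TOKEN = bool(os.environ.get("HF_READ_TOKEN"))

def web_search(query: str) -> str:
    return f"(placeholder) results for {query}"

def generate_image(prompt: str) -> str:
    return f"(placeholder) image for {prompt}"

interfaces = [gr.Interface(web_search, gr.Textbox(label="Query"), gr.Textbox(label="Results"))]
names = ["Search_DuckDuckGo"]

if HAS_TOKEN:  # Image/Video generation only shows up when a key is present
    interfaces.append(gr.Interface(generate_image, gr.Textbox(label="Prompt"), gr.Textbox(label="Image URL")))
    names.append("Image_Generation")

demo = gr.TabbedInterface(interfaces, names)
demo.launch(mcp_server=True)  # exposes each visible tool over MCP
```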

Nymbo/Tools
Nymbo
posted an update 24 days ago
Anyone using Jan-v1-4B for local MCP-based web search, I highly recommend you try out Intelligent-Internet/II-Search-4B

Very impressed with this lil guy and it deserves more downloads. It's based on the original version of Qwen3-4B, but I find that it questions reality way less often. Jan-v1 seems to think that everything it sees is synthetic data and constantly gaslights me.
KingNish
posted an update about 1 month ago
Vokturz
posted an update about 1 month ago
🤗 A Transformers.js Playground

I just created a new Space to test models directly in the browser using Transformers.js. You can pick any model from a list (mainly from the onnx-community), or just load the one you desire :) It also allows you to choose among different quantizations!

Currently, the following pipelines are supported:
- feature-extraction
- image-classification
- text-generation
- text-classification
- text-to-speech
- zero-shot-classification

Vokturz/transformers-js-playground
Abhaykoul
posted an update about 1 month ago
🚀 Dhanishtha-2.0-preview-0825 Is Here

The Intermediate Thinking Model just leveled up again.

With sharper reasoning, better tool use, and expanded capabilities, Dhanishtha-2.0-preview-0825 is now live and ready to impress.

🧠 What Makes Dhanishtha Special?
Unlike typical CoT models that think only once, Dhanishtha thinks iteratively:

> Think → Answer → Rethink → Improve → Rethink again if needed.

🔗 Try it now: HelpingAI/Dhanishtha-2.0-preview-0825

🔞 Dhanishtha NSFW Preview

For those exploring more expressive and immersive roleplay scenarios, we're also releasing:

HelpingAI/Dhanishtha-nsfw
A specialized version tuned for adult-themed interactions and character-driven roleplay.

🔗 Explore it here: HelpingAI/Dhanishtha-nsfw

💬 You can also try all of these live at chat.helpingai.co
AtAndDev
posted an update about 2 months ago
Qwen 3 Coder is a personal attack on K2, and I love it.
It achieves near-SOTA on LCB while not having reasoning.
Finally people are understanding that reasoning isn't necessary for high benches...

Qwen ftw!

DECENTRALIZE DECENTRALIZE DECENTRALIZE
Abhaykoul
posted an update about 2 months ago
🎉 Dhanishtha-2.0-preview-0725 is Now Live

The Intermediate Thinking Model just got even better.
With the new update, Dhanishtha is now sharper, smarter, and trained further on tool use.

🧠 What Makes Dhanishtha Different?
Unlike standard CoT models that give one-shot responses, Dhanishtha thinks in layers:

> Think → Answer → Rethink → Improve → Rethink again if needed.

HelpingAI/Dhanishtha-2.0-preview-0725
Abhaykoul
posted an update 2 months ago
🎉 Dhanishtha 2.0 Preview is Now Open Source!

The world's first Intermediate Thinking Model is now available to everyone!

Dhanishtha 2.0 Preview brings revolutionary intermediate thinking capabilities to the open-source community. Unlike traditional reasoning models that think once, Dhanishtha can think, answer, rethink, answer again, and continue rethinking as needed, using multiple <think> blocks between responses.

🚀 Key Features
- Intermediate thinking: Think → Answer → Rethink → Answer → Rethink if needed...
- Token efficient: Uses up to 79% fewer tokens than DeepSeek R1 on similar queries
- Transparent thinking: See the model's reasoning process in real-time
- Open source: Freely available for research and development


HelpingAI/Dhanishtha-2.0-preview
https://helpingai.co/chat
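Since the model interleaves several <think> blocks with the visible answer, here's a minimal sketch of separating the two on the client side (assuming plain <think></think> tags; adapt to the model's actual output format):

```python
# Minimal sketch: split a response containing multiple <think> blocks into
# reasoning segments and the visible answer (assumes plain <think></think>
# tags; adapt to the model's actual output format).
import re

response = (
    "<think>First pass: outline the approach.</think>"
    "Here is a draft answer."
    "<think>Rethink: the draft misses an edge case, revise it.</think>"
    "Final answer: the revised result."
)

thoughts = re.findall(r"<think>(.*?)</think>", response, flags=re.DOTALL)
answer = re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL).strip()

print("reasoning passes:", len(thoughts))
print("visible answer:", answer)
```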
Nymbo
posted an update 2 months ago
Anyone know how to reset Claude web's MCP config? I connected mine when the HF MCP first released with just the default example spaces added. I added lots of other MCP spaces but Claude.ai doesn't update the available tools... "Disconnecting" the HF integration does nothing, deleting it and adding it again does nothing.

Refreshing tools works fine in VS Code because I can manually restart it in mcp.json, but claude.ai has no such option. Anyone got any ideas?
Abhaykoul
posted an update 3 months ago
Introducing Dhanishtha 2.0: World's first Intermediate Thinking Model

Dhanishtha 2.0 is the world's first LLM designed to think between responses, unlike other reasoning LLMs, which think just once.

Dhanishtha can think, rethink, self-evaluate, and refine in between responses using multiple <think> blocks.
This technique makes it highly token efficient: it uses up to 79% fewer tokens than DeepSeek R1.
---

You can try our model at: https://helpingai.co/chat
Also, we're gonna open-source Dhanishtha on July 1st.

---
For Devs:
🔑 Get your API key at https://helpingai.co/dashboard
```python
from HelpingAI import HAI  # pip install HelpingAI==1.1.1
from rich import print

hai = HAI(api_key="hl-***********************")

response = hai.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What is the value of ∫0^∞ x^3/(x-1) dx ?"}],
    stream=True,
    hide_think=False  # Hide or show the model's thinking
)

for chunk in response:
    print(chunk.choices[0].delta.content, end="", flush=True)
```
KingNish
posted an update 3 months ago
What's currently the biggest gap in Open Source Datasets ??
AtAndDev
posted an update 3 months ago
deepseek-ai/DeepSeek-R1-0528

This is the end
Nymbo
posted an update 4 months ago
Haven't seen this posted anywhere - Llama-3.3-8B-Instruct is available on the new Llama API. Is this a new model or did someone mislabel Llama-3.1-8B?
Nymbo
posted an update 4 months ago
PSA for anyone using Nymbo/Nymbo_Theme or Nymbo/Nymbo_Theme_5 in a Gradio space ~

Both of these themes have been updated to fix some of the long-standing inconsistencies that have been around since the transition to Gradio v5. Textboxes are no longer bright green, and inline code is readable now! Both themes are now visually identical across versions.

If your space is already using one of these themes, you just need to restart your space to get the latest version. No code changes needed.
gospacedev
updated a Space 5 months ago
AtAndDev
posted an update 5 months ago
Llama 4 is out...
AtAndDev
posted an update 6 months ago
There seem to be multiple paid apps shared here that are based on models on HF, but some ppl sell their wrappers as "products" and promote them here. For a long time, HF was the best and only platform to do OSS model stuff, but with the recent AI website builders anyone can create a product (really crappy ones btw) and try to sell it with no contribution to OSS stuff. Please don't do this, or at least try finetuning the models you use...
Sorry for filling y'all's feed with this bs but yk...
AtAndDev
posted an update 6 months ago
Gemma 3 seems to be really good at human preference. Just waiting for ppl to see it.