ProCreations

AI & ML interests: AGI and small-scale, high-quality AI

ProCreations's activity

posted an update about 7 hours ago
Question about Intellite Chat to you guys!
(If you don’t know, Intellite Chat is my up-and-coming 100m parameter AI model focused on high-quality chat.)

What quantization variants would you want to see for Intellite Chat? It’ll come with FP32, FP16, and BF16, but any others you want? Maybe FP8 or even BitNet’s 1.58-bit quantization? Let me know!
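For anyone unfamiliar with what those variants trade off, here is a generic numpy sketch (not IntellIte code) of what dropping FP32 weights to FP16 does to storage and precision. BF16 and FP8 behave analogously but need framework support, since numpy has no native types for them.

```python
import numpy as np

# Stand-in "weights" for a model tensor (small values, like trained weights)
rng = np.random.default_rng(0)
w32 = rng.normal(0.0, 0.02, 1000).astype(np.float32)

# FP16 halves the storage per value
w16 = w32.astype(np.float16)
print(w32.nbytes, "->", w16.nbytes)  # 4000 -> 2000 bytes

# ...at the cost of rounding error, which stays tiny for weights this small
max_err = float(np.abs(w32 - w16.astype(np.float32)).max())
print(f"max rounding error: {max_err:.2e}")
```

The same idea scales down: fewer bits, less storage and bandwidth, more rounding error; which formats are worth shipping depends on how much quality the model loses at each step.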
posted an update 1 day ago
I made a space!

Check out ProCreations/realtime-ai-visualization.
This cool space visualizes a real neural net in real time: it trains an actual 199-parameter model on XOR. With baby mode for non-devs and advanced mode for developers and enthusiasts, (hopefully) everyone will understand!
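For the curious, here is a from-scratch numpy sketch of what a net that small looks like. The 2→10→14→1 layout is just one guess that happens to land on exactly 199 parameters; the space's actual architecture may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR truth table: the classic non-linearly-separable toy problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float64)
y = np.array([[0], [1], [1], [0]], dtype=np.float64)

# Hypothetical 2 -> 10 -> 14 -> 1 layout: exactly 199 weights + biases
W1, b1 = rng.normal(0, 1, (2, 10)), np.zeros(10)
W2, b2 = rng.normal(0, 1, (10, 14)), np.zeros(14)
W3, b3 = rng.normal(0, 1, (14, 1)), np.zeros(1)
n_params = sum(a.size for a in (W1, b1, W2, b2, W3, b3))  # 199

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h1 = np.tanh(X @ W1 + b1)
    h2 = np.tanh(h1 @ W2 + b2)
    return h1, h2, sigmoid(h2 @ W3 + b3)

mse = lambda p: float(np.mean((p - y) ** 2))
initial_loss = mse(forward(X)[2])

lr = 0.3
for _ in range(2000):
    h1, h2, out = forward(X)
    d3 = (out - y) / len(X)           # gradient of BCE at a sigmoid output
    d2 = (d3 @ W3.T) * (1 - h2 ** 2)  # backprop through tanh
    d1 = (d2 @ W2.T) * (1 - h1 ** 2)
    W3 -= lr * (h2.T @ d3); b3 -= lr * d3.sum(0)
    W2 -= lr * (h1.T @ d2); b2 -= lr * d2.sum(0)
    W1 -= lr * (X.T @ d1);  b1 -= lr * d1.sum(0)

final_loss = mse(forward(X)[2])
print(f"params={n_params}, loss {initial_loss:.3f} -> {final_loss:.3f}")
```

Even at this size the loss drops steadily, which is exactly what makes XOR a nice problem to visualize live.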
posted an update 3 days ago
My first ever working model!
Introducing tinyvvision: an extremely small zero-shot image classification model for simple shapes, letters, colors, and numbers. It's the foundation model for Intellision, an image-text-to-text model that will be much bigger but still impressively small.
ProCreations/tinyvvision
NOTE: it's small, so mistakes will happen! It's not meant for complex inputs, just a demo for simple stuff.
replied to their post 4 days ago

the GPUs just disappear and your desktop adjusts to CPU-only, so your precious compute power for AI is gone 😈 lol

posted an update 4 days ago
WOULD YOU RATHER
part 2

All the GPUs you have suddenly vanish, and any device with a GPU turns CPU-only

Every time you use a HF space you randomly start dancing for 5 minutes
OR
Every HF model you’ve hearted / liked suddenly disappears and becomes inaccessible
replied to their post 5 days ago

Sorry about that lol, it was late and I wanted to make a post before sleep, so I didn't really think about the choices.

replied to their post 5 days ago

Makes sense, as LLMs aren't very random. This dataset isn't really for making LLMs random, because you just can't (yet); it's more of a dump of info I dropped for someone to find a use case for. Hey, thanks for commenting!

posted an update 5 days ago
WOULD YOU RATHER

Instantly have 10 of the best GPUs in the world, but you can only use them to run/train/tune AI

Instantly own all of LLaMA 4 models but your GPU is 10 years old

Or do a backflip every time your AI has an error training (if you are a developer)
posted an update 6 days ago
New dataset- what a random time, right?

Randomness… pure randomness feels refreshing to me so I made a dataset of randomness.

ProCreations/quantum-randomness

This is a dataset of 1,000 entries of timestamps and real quantum randomness. I have no clue what it could be used for, but it exists now.
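One obvious use case for a randomness dump is statistical testing. Here is a minimal monobit (frequency) check, the simplest test in the NIST SP 800-22 randomness suite; it uses a synthetic bitstring as a stand-in, since the dataset's exact field names aren't shown here.

```python
import random

# Stand-in bitstring; in practice you'd pull bits from the dataset entries
rng = random.Random(42)
bits = [rng.randint(0, 1) for _ in range(1000)]

n = len(bits)
ones = sum(bits)
zeros = n - ones

# Chi-square statistic with 1 degree of freedom; values above ~3.841
# would reject the "fair coin" hypothesis at the 5% level
chi2 = (ones - zeros) ** 2 / n
print(f"ones={ones} zeros={zeros} chi2={chi2:.3f}")
```

A genuinely quantum source should pass this (and the rest of the suite) far more consistently than a weak pseudorandom generator.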
replied to their post 7 days ago

Very cool! Thanks for going out of your way to help, I saved the photo!

posted an update 7 days ago
What do you think of Intellite’s new icons/logo? Let us know!

Also, IntellIte Chat technically does work already! But we decided to scale it up a bit for max quality (same parameter count at 100M, but we went from training on 4B tokens to 200B tokens, a big upgrade!).
reacted to m-ric's post with 🤗 8 days ago
I've made an open version of Google's NotebookLM, and it shows the superiority of the open-source tech stack! 💪

The app's workflow is simple. Given a source PDF or URL, it extracts the content, then tasks Meta's Llama 3.3-70B with writing the podcast script, using a good prompt crafted by @gabrielchua ("two hosts, with lively discussion, fun notes, insightful questions, etc.").
Then it hands off the text-to-speech conversion to Kokoro-82M, and there you go: two hosts discussing any article.

The generation is nearly instant, because:
> Llama 3.3 70B is running at 1,000 tokens/second with Cerebras inference
> The audio is generated in streaming mode by the tiny (yet powerful) Kokoro, generating voices faster than real-time.

And the audio generation runs for free on Zero GPUs, hosted by HF on H200s.

Overall, open source solutions rival the quality of closed-source solutions at close to no cost!

Try it here 👉👉 m-ric/open-notebooklm
replied to their post 9 days ago

Thank you! I'm glad you're excited.

For quantization, I'll probably offer different scales. The default won't be quantized, to keep things easy, but INT16, INT8, and INT4 IntellIte models will be available.
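As a rough illustration of what those scales trade off (a generic sketch, not IntellIte's actual quantization pipeline): symmetric linear quantization maps weights onto a signed integer grid, and the reconstruction error grows as the bit width shrinks.

```python
import numpy as np

def quantize_sym(w, bits):
    """Symmetric linear quantization of `w` onto a signed `bits`-bit grid."""
    qmax = 2 ** (bits - 1) - 1                 # 32767 / 127 / 7 for 16/8/4 bits
    scale = np.abs(w).max() / qmax             # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.linspace(-1.0, 1.0, 9, dtype=np.float32)
errs = {}
for bits in (16, 8, 4):
    q, s = quantize_sym(w, bits)
    # Rounding error is bounded by half the grid spacing (scale / 2)
    errs[bits] = float(np.abs(w - dequantize(q, s)).max())
print(errs)  # error grows as bit width shrinks
```

That widening error at INT4 is why the lowest-bit variants usually need per-channel scales or quantization-aware tricks to stay usable.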

posted an update 9 days ago
Post of the Day – Your Thoughts, Our Take

Yesterday we asked:
If AI could master just one thing, what should it be?
And the responses? Insightful, creative, and genuinely thought-provoking.

Here are a few that stood out:

🍼 @NandaKrishvaa said “Curiosity like a baby.”
Instead of just answering questions, an AI that asks them with childlike wonder? That’s a whole new kind of intelligence.

@MrDevolver suggested “Master being Jack of All Trades.”
Sure, it bends the rules a bit — but adaptability is key. Sometimes breadth can outshine depth.

@afranco50 argued for “Perfect logic,” saying it could unlock all other abilities.
It’s a solid point: if an AI can reason flawlessly, it may just learn to improve everything else on its own.



Our take?
We still believe the biggest leap forward is flawless conversation — not just accurate, but deeply human. Emotional intelligence, nuance, humor, empathy. That kind of interaction is what makes AI feel real.

It’s also why we’re building IntellIte Chat to focus on that exact skillset:
• Emotion-aware replies
• Natural, flowing conversation
• Strong command of casual and expressive English

When it releases, it won’t just talk — it’ll connect. And in a world full of tools, we think the future needs more companions.
What do you think? Let us know! If we get more comments, might as well do another post on this tomorrow lol.
posted an update 10 days ago
Post of the Day

If AI could master just one thing, what should it be?

Human-level conversation that actually feels real?
Flawless, bug-free code?
Perfect math and logic every single time?

Whatever you pick, just know the AI won't be as good at the other topics! No picking "all of them" either lol
What do you think matters most for AI to truly level up?
Drop your thoughts in the comments — we’ll share our answer (and maybe a few of yours 👇) in the next post.
posted an update 12 days ago
🚨 NEW DATASET ALERT 🚨

Come check out
ProCreations/black-hole-sim-randomized
a high-fidelity dataset with 400,000+ randomized black hole simulations — packed with relativistic metrics, Kerr geometry, and GR weirdness to help AIs actually understand physics.

🕳️ Teach your model:
• Time dilation
• Redshift
• Orbital dynamics
• Frame dragging
• Full Kerr tensors
…and more, all in raw JSONL!
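As a flavor of the kind of quantity involved, here is the time-dilation factor in the simpler non-rotating (Schwarzschild) case; the dataset covers the rotating Kerr generalization, so treat this as an illustration, not the dataset's schema.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0      # speed of light, m/s

def time_dilation(mass_kg, r_m):
    """dtau/dt for a static observer at radius r outside a non-rotating mass."""
    rs = 2 * G * mass_kg / C ** 2   # Schwarzschild radius
    return math.sqrt(1.0 - rs / r_m)

M_SUN = 1.989e30
rs = 2 * G * (10 * M_SUN) / C ** 2          # ~29.5 km for a 10-solar-mass hole
factor = time_dilation(10 * M_SUN, 3 * rs)  # clock rate at 3 Schwarzschild radii
print(f"rs = {rs / 1000:.1f} km, dtau/dt at 3 rs = {factor:.4f}")
```

A clock hovering at three Schwarzschild radii ticks at about 82% the rate of one far away; the factor drops to zero as you approach the horizon.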

This release celebrates SimpleMath hitting 200 downloads — thank you all so much for the support! 🙌
posted an update 13 days ago
IntellIte Chat's training script is working!

Training runs fine: normal loss, no more gradient explosion or vanishing gradients, etc.
BUT, before I officially flip the switch and turn on training, I want to make sure it's the best possible 100M-parameter model it can be, so I'm working a bit more (probably an extra 3-5 days) to add even more innovative AI improvements to IntellIte.
posted an update 15 days ago
🧠 Post of the Day: Quantum AI – Your Thoughts + Our Take

Yesterday we asked: “What will quantum computing do to AI?”
Big thanks to solongeran for this poetic insight:

“Quantum computers are hard to run error-free. But once they’re reliable, AI will be there. Safer than the daily sunset. Shure – no more queues ;)”

🚀 Our Take – What Quantum Computing Will Do to AI (by 2035)

By the time scalable, fault-tolerant quantum computers arrive, AI won’t just run faster — it’ll evolve in ways we’ve never seen:



🔹 1. Huge Speedups in Optimization & Search
Why: Quantum algorithms like Grover's offer a quadratic speedup on unstructured search, and other quantum routines promise larger gains on specific optimization problems.
How: They'll power up tasks like hyperparameter tuning, decision-making in RL, and neural architecture search, crunching what now takes hours into far less time.



🔹 2. Quantum Neural Networks (QNNs)
Why: QNNs can represent complex relationships more efficiently than classical nets.
How: They use entanglement and superposition to model rich feature spaces, especially useful for messy or high-dimensional data — think drug discovery, finance, or even language structure.



🔹 3. Autonomous Scientific Discovery
Why: Quantum AI could simulate molecular systems that are impossible for classical computers.
How: By combining quantum simulation with AI exploration, we may unlock ultra-fast pathways to new drugs, materials, and technologies — replacing years of lab work with minutes of computation.



🔹 4. Self-Evolving AI Architectures
Why: Future AI systems will design themselves.
How: Quantum processors will explore massive spaces of model variants in parallel, enabling AI to simulate, compare, and evolve new architectures — fast, efficient, and with little trial-and-error.



⚛️ The Takeaway:
Quantum computing won’t just speed up AI. It’ll open doors to new types of intelligence — ones that learn, discover, and evolve far beyond today’s limits.
posted an update 16 days ago
Quantum Computing + AI = 🤯?
What do you think quantum computing will do to AI?
Will it revolutionize training speed? Unlock whole new algorithms? Or maybe… just complicate things?

💬 Drop your thoughts below — we’ll share our take and highlight some of your replies in tomorrow’s post!
reacted to merterbak's post with 🔥 16 days ago
Microsoft released their new fine-tuned Phi-4 models with reasoning data yesterday. They outperform or rival much larger models. Check them out if you haven't yet. 🚀

Phi-4 mini reasoning (SFT): microsoft/Phi-4-mini-reasoning
Phi-4 reasoning (SFT): microsoft/Phi-4-reasoning
Phi-4 reasoning plus (SFT + RL): microsoft/Phi-4-reasoning-plus
Demo: https://github.com/marketplace/models/azureml/Phi-4-reasoning/playground
Papers: https://arxiv.org/pdf/2504.21318
https://arxiv.org/pdf/2504.21233
Blog: https://azure.microsoft.com/en-us/blog/one-year-of-phi-small-language-models-making-big-leaps-in-ai/
