
AYDIN KULAN
Organizations: OSS-CLINE

Recent Activity

Thank you, it was very successful.


I find it more appropriate to research how to run a large model on small hardware, rather than how to run a small model on small hardware.

The Intermediate Thinking Model just leveled up again.
With sharper reasoning, better tool use, and expanded capabilities, Dhanishtha-2.0-preview-0825 is now live and ready to impress.
🧠 What Makes Dhanishtha Special?
Unlike typical CoT models that think only once, Dhanishtha thinks iteratively:
> Think → Answer → Rethink → Improve → Rethink again if needed.
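The loop above can be sketched as a simple controller. Note that `generate` and `critique` here are toy stand-ins for model calls, not Dhanishtha's actual API:

```python
# Hypothetical sketch of the Think -> Answer -> Rethink -> Improve loop.
# `generate` and `critique` are toy stand-ins for real model calls.

def generate(prompt: str, draft: str = "") -> str:
    # Toy stand-in: "refine" the draft by appending a revision marker.
    return (draft or "draft") + "+rev"

def critique(answer: str) -> bool:
    # Toy stand-in: accept once the answer has been revised twice.
    return answer.count("+rev") >= 2

def iterative_answer(prompt: str, max_rounds: int = 4) -> str:
    answer = generate(prompt)                 # Think -> Answer
    for _ in range(max_rounds - 1):
        if critique(answer):                  # Rethink: good enough?
            break
        answer = generate(prompt, answer)     # Improve, then rethink again
    return answer

print(iterative_answer("2+2?"))  # draft+rev+rev
```

The key difference from one-shot CoT is only the outer loop: the answer is re-entered as context until the critique step accepts it or a round budget runs out.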
🔗 Try it now: HelpingAI/Dhanishtha-2.0-preview-0825
🔞 Dhanishtha NSFW Preview
For those exploring more expressive and immersive roleplay scenarios, we’re also releasing:
HelpingAI/Dhanishtha-nsfw
A specialized version tuned for adult-themed interactions and character-driven roleplay.
🔗 Explore it here: HelpingAI/Dhanishtha-nsfw
💬 You can also try all of these live at chat.helpingai.co

Please feel free to take a look. https://github.com/tarikkaya/aix

I’d like to mention one more thing: when I envision an AGI model, I’m imagining a design that merges these two images. I think a flat space would require a significant amount of computation to capture the necessary contexts. I also believe that a spherical model would facilitate easier access to different perspectives.

So, how do you plan to use an entity’s motor structures without breaking them down? Any model you design without leveraging these motor structures will be very weak. My suggestion is to use motor structures and a scale when necessary, applying different tones for different perspectives.
It seems I’m not the only one who thinks the “What if…” series is a blueprint for AGI. Alternative realities make it possible to access different perspectives, and I’m quite sure that combining them in this field will succeed. But the real issue is the feeding loop, i.e. the training loop: how do you make it both easy to train and ensure the neural connections stay solid? You can’t have both at once.
Have you given up on creating the model?

Did you build this model inside a sphere?

We need to address existing models’ difficulty in seeing the big picture; moreover, this issue concerns not only VLMs but also LLMs. Imagine a map: if you look at it very closely, you can’t see the big picture; if you view it from a distance, you can’t see nearby objects.
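A minimal sketch of the map analogy, assuming the "image" is just a plain 2D grid of numbers: keeping both the fine grid and a downsampled "big picture" (an image-pyramid-style trick) gives a model access to both scales at once.

```python
# Toy multi-scale sketch: keep a zoomed-in view (fine detail) and a
# zoomed-out view (the big picture) of the same grid. Illustrative only.

def downsample(grid, factor):
    """Average non-overlapping factor x factor blocks -> coarse view."""
    n = len(grid)
    coarse = []
    for i in range(0, n, factor):
        row = []
        for j in range(0, n, factor):
            block = [grid[a][b]
                     for a in range(i, i + factor)
                     for b in range(j, j + factor)]
            row.append(sum(block) / len(block))
        coarse.append(row)
    return coarse

fine = [[1, 1, 5, 5],
        [1, 1, 5, 5],
        [9, 9, 2, 2],
        [9, 9, 2, 2]]
big_picture = downsample(fine, 2)
print(big_picture)  # [[1.0, 5.0], [9.0, 2.0]]
```

A model fed only patches of `fine` misses the layout; one fed only `big_picture` misses the detail. A pyramid of both is the usual compromise.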
ArcOffical/Chromos-AGI

What strategy do you foresee for training?


I definitely think that AI needs to make effective use of novels, film scripts, and theater scripts to understand human emotions and thought structures. This is not only about defining emotions; in my opinion, we also need these kinds of sources within thought frameworks inspired by human imagination and creativity. Considering the datasets of today’s models, I think we are actually providing very limited information. Remember that without mistakes there is no truth; this should serve as a fundamental basis for defining many expressions across many workflows.

VIDraft/SOMA-AGI
🎯 The First Step Toward AGI
SOMA (Self-Orchestrating Modular Architect) is a revolutionary architecture that fulfills the essential requirements for AGI (Artificial General Intelligence) Level 1. It perfectly implements the common AGI prerequisites emphasized by Yann LeCun (Meta), OpenAI, and Google DeepMind within a single LLM.
📋 AGI Level 1 Core Requirements = SOMA's Perfect Implementation ✅
🎯 Planning Capability
→ Supervisor AI autonomously designs and executes comprehensive analysis roadmaps
🧩 Role Differentiation & Modularity
→ A single LLM instantly differentiates into 5 expert AIs for collaboration
🔄 Self-reflection & Feedback Loops
→ Evaluator AI continuously validates and directs improvements
🛠️ Tool-use & Autonomy
→ Full automation from web search to report generation
🎮 Long-term Agency Structure
→ Completes complex 11-stage collaborative processes end-to-end
🔷 SOMA's Three Core Structures
🧭 Self-Orchestrating
The ability to define problems and distribute roles without external instructions is fundamental. This is the actual implementation of OpenAI's "Agentic AI" concept, with built-in real-time self-regulation mechanisms.
🧩 Modular
A single LLM internally creates multiple personas:
🎯 Planner = Supervisor AI establishes strategies
💡 Creator = Presents innovative solutions
📚 Analyzer = Collects and analyzes data
⚖️ Evaluator = Performs critical assessments
📊 Executor = Final synthesis and implementation
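The five-role pipeline above can be sketched as a simple sequential orchestration of one model under different persona prompts. Here `call_model` is a hypothetical stand-in, not SOMA's actual interface:

```python
# Sketch of role differentiation: the SAME model function is called with
# different persona prompts in sequence. `call_model` is a toy stand-in.

PERSONAS = ["Planner", "Creator", "Analyzer", "Evaluator", "Executor"]

def call_model(persona: str, task: str, context: str) -> str:
    # Toy stand-in: tag the running context with the persona that saw it.
    return context + f"[{persona}]"

def orchestrate(task: str) -> str:
    context = task
    for persona in PERSONAS:          # one LLM, five expert roles
        context = call_model(persona, task, context)
    return context

print(orchestrate("report"))
# report[Planner][Creator][Analyzer][Evaluator][Executor]
```

A real system would loop back from the Evaluator to earlier roles on failed checks; this linear pass just shows how a single model can be "differentiated" purely by the prompt it receives at each stage.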
This perfectly realizes Meta AI's proposed "World Model + Planner + Memory + Actor" structure.
🧠 Architect
Capable of high-level thinking and problem structuring beyond simple execution. It actually implements the plan-adapt-multitask capabilities required by DeepMind's Gemini series, systematically decomposing and reconstructing complex problems.
💫 SOMA = The Embodiment of AGI Level 1