Lokesh-CODER
AI & ML interests
None yet
Recent Activity
Replied to Abhaykoul's post · about 6 hours ago
Introducing Dhanishtha 2.0: World's first Intermediate Thinking Model
Dhanishtha 2.0 is the world's first LLM designed to think between responses. Unlike other reasoning LLMs, which think only once, Dhanishtha can think, rethink, self-evaluate, and refine mid-response using multiple <think> blocks.
This technique makes it highly token-efficient: it uses up to 79% fewer tokens than DeepSeek R1.
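The interleaved `<think>` format can be illustrated with plain string parsing — a minimal sketch, assuming the response text contains literal `<think>…</think>` tags as described above (the `split_thinking` helper and the sample text are hypothetical, not part of the HelpingAI SDK):

```python
import re

def split_thinking(text):
    """Split a response into (kind, content) segments:
    'think' for <think>...</think> blocks, 'answer' for everything else."""
    segments = []
    pos = 0
    for m in re.finditer(r"<think>(.*?)</think>", text, flags=re.DOTALL):
        if m.start() > pos:
            segments.append(("answer", text[pos:m.start()]))
        segments.append(("think", m.group(1)))
        pos = m.end()
    if pos < len(text):
        segments.append(("answer", text[pos:]))
    return segments

sample = ("<think>Try substitution.</think>Partial result..."
          "<think>Re-check the bound.</think>Final answer.")
for kind, content in split_thinking(sample):
    print(f"[{kind}] {content}")
```

An "intermediate thinking" transcript thus alternates think/answer segments, rather than front-loading a single reasoning block.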
---
You can try our model at: https://helpingai.co/chat
Also, we're going to open-source Dhanishtha on July 1st.
---
For Devs:
Get your API key at https://helpingai.co/dashboard
```
from HelpingAI import HAI  # pip install HelpingAI==1.1.1
from rich import print

hai = HAI(api_key="hl-***********************")

response = hai.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What is the value of ∫_0^∞ x^3/(x-1) dx ?"}],
    stream=True,
    hide_think=False,  # hide or show the model's <think> blocks
)

for chunk in response:
    print(chunk.choices[0].delta.content, end="", flush=True)
```
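The `hide_think` flag above is handled by the API; a client could get the same effect by filtering the streamed text itself — a minimal sketch, assuming the literal `<think>…</think>` tags appear inline in the stream (`hide_think_stream` and the sample chunks are hypothetical helpers, not part of the SDK):

```python
def hide_think_stream(chunks):
    """Yield streamed text with <think>...</think> spans removed.

    Buffers a small tail so tags split across chunk boundaries
    are still recognized.
    """
    open_tag, close_tag = "<think>", "</think>"
    buf = ""
    in_think = False
    for chunk in chunks:
        buf += chunk
        out = []
        while buf:
            if in_think:
                end = buf.find(close_tag)
                if end == -1:
                    # keep a possible partial close tag, drop the rest
                    buf = buf[-(len(close_tag) - 1):]
                    break
                buf = buf[end + len(close_tag):]
                in_think = False
            else:
                start = buf.find(open_tag)
                if start == -1:
                    # hold back a possible partial open tag
                    safe = max(0, len(buf) - (len(open_tag) - 1))
                    out.append(buf[:safe])
                    buf = buf[safe:]
                    break
                out.append(buf[:start])
                buf = buf[start + len(open_tag):]
                in_think = True
        yield "".join(out)
    if buf and not in_think:
        yield buf  # flush any held-back tail
```

Wrapping the `for chunk in response` loop with a filter like this would print only the final answers while the model's intermediate thinking is discarded.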
Reacted to Abhaykoul's post with 🔥 · about 6 hours ago
Organizations