raahul thakur PRO

Raahulthakur

AI & ML interests

Data enthusiast pursuing a Master's in Astrophysics. Passionate about deep learning models and their applications in solving complex problems.

Recent Activity

updated a Space about 12 hours ago
Raahulthakur/FinsightX
reacted to Kseniase's post with 🔥 about 12 hours ago
10 new Chain-of-Thought (CoT) methods

CoT has long been one of the hottest techniques in AI thanks to its effectiveness and compelling core idea: encouraging models to solve complex problems through explicit intermediate reasoning steps. But researchers often modify the original CoT approach, finding tweaks that further improve LLMs' reasoning. That's what we're going to talk about today. Here's a list of 10 of the latest enhanced CoT approaches:

1. Chain-of-Defensive-Thought -> https://huggingface.co/papers/2504.20769
Provides a few structured, defensive reasoning exemplars to improve the robustness of LLMs.

2. Hybrid-CoT -> https://huggingface.co/papers/2504.21659
Proposes an Adaptive Hybrid Reasoning Model (AdaR1) that combines Long- and Short-CoT and applies bi-level preference training to select effective reasoning styles.

3. Semantic-level and token-level CoT -> https://huggingface.co/papers/2505.00703
Introduces T2I-R1, a text-to-image generation model that uses semantic-level CoT for prompt planning and token-level CoT for pixel-level generation, while BiCoT-GRPO coordinates the two.

4. Speculative CoT (SCoT) -> https://huggingface.co/papers/2504.19095
SCoT drafts multiple reasoning paths with a lightweight draft model, selects the best, and uses the target model for correction, reducing latency by 48–66%.

5. Collaborative CoT (Co-CoT) -> https://huggingface.co/papers/2504.17091
Breaks reasoning into blocks that users can inspect, modify, and re-run, promoting active engagement. An adaptation mechanism aligns outputs with diverse cognitive styles and user goals.

6. XS-CoT -> https://huggingface.co/papers/2504.20835
A cross-lingual framework that integrates speech-to-text translation into reasoning, using a semi-implicit CoT approach to compress intermediate tokens. This improves non-core-language responses by up to 45%.

Read further in the comments 👇
If you liked this, also subscribe to the Turing Post -> https://www.turingpost.com/subscribe
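All of the methods above extend the same baseline: a prompt whose exemplar spells out intermediate reasoning steps. A minimal sketch of that vanilla few-shot CoT prompt, with an illustrative exemplar not taken from any of the papers:

```python
# Minimal sketch of vanilla chain-of-thought prompting, the baseline the
# methods above build on. The exemplar and wording are illustrative.

def build_cot_prompt(question: str) -> str:
    """Builds a few-shot prompt whose exemplar spells out intermediate steps."""
    exemplar = (
        "Q: A shop sells pens at $2 each. How much do 3 pens cost?\n"
        "A: Let's think step by step. Each pen costs $2. "
        "3 pens cost 3 * 2 = $6. The answer is 6.\n"
    )
    # The trailing cue nudges the model to emit its own reasoning chain.
    return exemplar + f"Q: {question}\nA: Let's think step by step."

prompt = build_cot_prompt("A train travels 60 km/h for 2 hours. How far does it go?")
print(prompt)
```

The enhanced methods mostly vary what goes into the exemplars (defensive ones in #1), which chain length is used (#2), or who produces the draft chain (#4).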

Organizations

Linguana AI

Posts 1

FinSightX: Your AI Financial Co-Pilot
FinSightX is a multi-agent financial assistant powered by language models. Designed for analysts, investors, and fintech developers, it combines insights from multiple domains into a single, sleek Streamlit interface.

Features
Equity Analyst Agent → Ask questions about stocks, indicators, performance.
Macro Strategist Agent → Get macroeconomic insights using language models.
News Summarizer Agent → Summarizes market headlines instantly.
Quant Backtester Agent → Run basic backtests using bt.
Regulatory Radar Agent → Monitor policy shifts and alerts.
Client Advisor Agent → Assist with client queries or hypothetical portfolios.
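A multi-agent assistant like this needs some way to route a user query to the right agent. The agent names below match the list above, but the keyword-based routing logic is an illustrative assumption, not FinSightX's actual implementation:

```python
# Hypothetical sketch of routing a query to one of the agents listed above.
# The keyword tables and scoring are illustrative assumptions.

AGENT_KEYWORDS = {
    "Equity Analyst": ["stock", "ticker", "indicator", "performance"],
    "Macro Strategist": ["inflation", "gdp", "rates", "macro"],
    "News Summarizer": ["headline", "news", "summary"],
    "Quant Backtester": ["backtest", "strategy", "sharpe"],
    "Regulatory Radar": ["policy", "regulation", "compliance"],
    "Client Advisor": ["client", "portfolio", "allocation"],
}

def route(query: str) -> str:
    """Returns the agent whose keywords best match the query (ties -> first)."""
    q = query.lower()
    scores = {name: sum(kw in q for kw in kws)
              for name, kws in AGENT_KEYWORDS.items()}
    return max(scores, key=scores.get)

print(route("Summarize today's market headlines"))  # -> News Summarizer
```

A production system would more likely let an LLM classify the query or embed it and match against agent descriptions, but the dispatch structure is the same.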

Tech Stack
transformers, sentence-transformers
torch, scikit-learn, neuralprophet
bt for strategy backtesting
chromadb for vector storage
Streamlit + FastAPI for UI/backend
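In a stack like this, sentence-transformers produces embeddings and chromadb stores and retrieves them by similarity. A stdlib-only toy sketch of that retrieval loop, with a bag-of-words vectorizer standing in for a real embedding model so the example runs without downloads:

```python
# Illustrative stdlib-only analogue of the chromadb + sentence-transformers
# pair: embed documents, store vectors, retrieve nearest by cosine similarity.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy stand-in for a sentence-transformers embedding: word counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyVectorStore:
    """Minimal analogue of a chromadb collection: add texts, query nearest."""
    def __init__(self):
        self.docs: list[tuple[str, Counter]] = []

    def add(self, text: str) -> None:
        self.docs.append((text, embed(text)))

    def query(self, text: str, k: int = 1) -> list[str]:
        q = embed(text)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [t for t, _ in ranked[:k]]

store = ToyVectorStore()
store.add("quarterly earnings beat analyst estimates")
store.add("central bank raises interest rates")
print(store.query("interest rate hike by the central bank"))
```

The real libraries replace `embed` with a learned sentence encoder and the list scan with an indexed nearest-neighbor search, but the API shape (add, then query top-k) is the same idea.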

Developed and maintained by @Raahul-Thakur
Live Space: Raahulthakur/FinsightX

Built using open-source tools and financial domain knowledge. Contributions, feedback, and forks welcome!

datasets 0

None public yet