10 new Chain-of-Thought (CoT) methods

CoT has long been one of the hottest techniques in AI thanks to its effectiveness and compelling core idea: encouraging models to solve complex problems through explicit intermediate reasoning steps. But researchers keep modifying the original CoT approach, finding tweaks that further improve LLMs' reasoning. That's what we're going to talk about today.

Here's a list of the 10 latest enhanced CoT approaches:

1. Chain-of-Defensive-Thought -> Chain-of-Defensive-Thought: Structured Reasoning Elicits Robustness in Large Language Models against Reference Corruption (2504.20769)
Provides a few structured, defensive reasoning exemplars in the prompt to improve the robustness of LLMs against corrupted references
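
As a rough illustration of the prompting pattern (not the paper's exact exemplars), here's how a defensive reasoning exemplar could be prepended before the retrieved references. The exemplar wording and prompt layout are assumptions; the output string can be sent to any chat/completions endpoint:

```python
# Hedged sketch: build a chain-of-defensive-thought prompt.
# The exemplar text and prompt layout are illustrative assumptions,
# not taken from the paper.

DEFENSIVE_EXEMPLAR = """Question: What year was the Eiffel Tower completed?
Reference 1: The Eiffel Tower was completed in 1889.
Reference 2: The Eiffel Tower was completed in 2005.
Reasoning: Reference 2 contradicts well-established history, so I treat it as
corrupted and rely on Reference 1.
Answer: 1889"""

def build_defensive_prompt(question: str, references: list[str]) -> str:
    refs = "\n".join(f"Reference {i + 1}: {r}" for i, r in enumerate(references))
    return (
        f"{DEFENSIVE_EXEMPLAR}\n\n"
        f"Question: {question}\n{refs}\n"
        "Reasoning: check each reference for inconsistencies before using it.\n"
        "Answer:"
    )

prompt = build_defensive_prompt(
    "Who wrote 'Pride and Prejudice'?",
    ["Jane Austen wrote 'Pride and Prejudice'.", "It was written by Mark Twain."],
)
print(prompt)  # feed this to any LLM as a standard text prompt
```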

2. Hybrid-CoT -> AdaR1: From Long-CoT to Hybrid-CoT via Bi-Level Adaptive Reasoning Optimization (2504.21659)
Proposes the Adaptive Hybrid Reasoning Model (AdaR1), which combines Long- and Short-CoT and applies bi-level preference training to select the effective reasoning style for each problem
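
As a toy sketch of the style-selection idea only (not AdaR1's bi-level preference training), here is what adaptively routing between Long- and Short-CoT could look like; `estimate_difficulty` and `generate_with_style` are hypothetical stand-ins for a learned preference signal:

```python
# Toy sketch of adaptive Long- vs Short-CoT selection. The heuristics below
# are illustrative assumptions, not the paper's training procedure.

def estimate_difficulty(problem: str) -> float:
    # Placeholder heuristic: longer, multi-step problems get higher scores.
    return min(1.0, len(problem.split()) / 50)

def generate_with_style(problem: str, style: str) -> str:
    instruction = (
        "Think step by step in detail." if style == "long-cot"
        else "Answer concisely with brief reasoning."
    )
    return f"[{style}] {instruction} Problem: {problem}"

def solve(problem: str, threshold: float = 0.5) -> str:
    style = "long-cot" if estimate_difficulty(problem) > threshold else "short-cot"
    return generate_with_style(problem, style)

print(solve("What is 17 + 5?"))                      # routed to short-cot
print(solve(" ".join(["step"] * 40)))                # routed to long-cot
```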

3. Semantic-level and token-level CoT -> T2I-R1: Reinforcing Image Generation with Collaborative Semantic-level and Token-level CoT (2505.00703)
Introduces T2I-R1, a text-to-image generation model that uses semantic-level CoT for prompt planning and token-level CoT for pixel-level generation, with BiCoT-GRPO coordinating both

4. Speculative CoT (SCoT) -> Efficient Reasoning for LLMs through Speculative Chain-of-Thought (2504.19095)
SCoT drafts multiple reasoning paths with a lightweight draft model, selects the best one, and uses the target model for correction, reducing latency by 48–66%
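
A minimal sketch of the drafting/selection flow, assuming generic small- and large-model generate functions; the placeholder functions and the selection prompt are illustrative, not the paper's exact setup:

```python
# Hedged sketch of Speculative CoT: a small model drafts several reasoning
# paths, and the large target model only selects and corrects the best one.
# `small_generate` and `large_generate` are assumed stand-ins for real
# model calls (e.g. two different chat endpoints).

def small_generate(prompt: str, n: int) -> list[str]:
    return [f"draft reasoning #{i} for: {prompt}" for i in range(n)]  # placeholder

def large_generate(prompt: str) -> str:
    return f"[target model output for] {prompt}"  # placeholder

def speculative_cot(question: str, n_drafts: int = 4) -> str:
    drafts = small_generate(f"Reason step by step: {question}", n=n_drafts)
    numbered = "\n".join(f"({i}) {d}" for i, d in enumerate(drafts))
    return large_generate(
        f"Question: {question}\nCandidate reasoning paths:\n{numbered}\n"
        "Pick the most promising path, fix any mistakes, and give the final answer."
    )

print(speculative_cot("If a bag has 3 red and 5 blue marbles, what is P(red)?"))
```

The latency win comes from the cheap model doing most of the token generation, while the expensive model is called once for selection and correction.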

5. Collaborative CoT (Co-CoT) -> Co-CoT: A Prompt-Based Framework for Collaborative Chain-of-Thought Reasoning (2504.17091)
Breaks reasoning into blocks that users can inspect, modify and re-run, promoting active engagement. An adaptation mechanism aligns outputs with diverse cognitive styles and user goals
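
Roughly, the editable-blocks idea could look like this; the block structure and the `rerun_from` / `regenerate_block` helpers are assumptions for illustration, not the paper's implementation:

```python
# Hedged sketch of block-wise collaborative CoT: reasoning is kept as a list
# of editable blocks, and everything after a user-edited block is re-run.
# `regenerate_block` is a hypothetical stand-in for a model call.

def regenerate_block(question: str, previous_blocks: list[str], index: int) -> str:
    context = " ".join(previous_blocks)
    return f"step {index + 1} regenerated given: {context[:60]}..."  # placeholder

def rerun_from(question: str, blocks: list[str], edited_index: int) -> list[str]:
    updated = blocks[: edited_index + 1]              # keep the user's edit
    for i in range(edited_index + 1, len(blocks)):    # regenerate downstream blocks
        updated.append(regenerate_block(question, updated, i))
    return updated

blocks = [
    "Step 1: restate the problem.",
    "Step 2: compute the subtotal.",
    "Step 3: apply the discount and answer.",
]
blocks[1] = "Step 2 (edited by user): compute the subtotal *before tax*."
print(rerun_from("What does the customer pay?", blocks, edited_index=1))
```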

6. XS-CoT -> Enhancing Non-Core Language Instruction-Following in Speech LLMs via Semi-Implicit Cross-Lingual CoT Reasoning (2504.20835)
It's a cross-lingual framework that integrates speech-to-text translation into reasoning, using a semi-implicit CoT approach to compress intermediate tokens. This improves non-core language responses by up to 45%

Read further in the comments 👇

If you liked this, also subscribe to the Turing Post -> https://www.turingpost.com/subscribe
7. CoT-RAG -> https://huggingface.co/papers/2504.13534
Adds 3 new designs to the CoT approach: 1) Knowledge Graph-driven CoT Generation to guide reasoning chains, 2) Learnable Knowledge Case-aware RAG, which combines RAG with knowledge graphs to provide relevant sub-cases, and 3) Logic-based pseudo-program prompting execution.
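
As a loose sketch of the knowledge-graph-driven part only: walk a graph of sub-tasks to get a reasoning skeleton, attach a retrieved sub-case to each step, and hand the result to the model. The toy graph, `retrieve_cases`, and prompt layout are assumptions, not CoT-RAG's actual components:

```python
# Hedged sketch of KG-driven CoT generation with retrieved sub-cases.

KNOWLEDGE_GRAPH = {
    "diagnose_fault": ["check_power_supply", "check_firmware"],
    "check_power_supply": ["measure_voltage"],
    "check_firmware": ["read_error_log"],
}

def kg_reasoning_chain(root: str) -> list[str]:
    # Depth-first walk of the graph to produce an ordered reasoning skeleton.
    chain, stack = [], [root]
    while stack:
        node = stack.pop()
        chain.append(node)
        stack.extend(reversed(KNOWLEDGE_GRAPH.get(node, [])))
    return chain

def retrieve_cases(step: str) -> str:
    return f"(retrieved sub-case relevant to '{step}')"  # placeholder retriever

def build_prompt(question: str) -> str:
    steps = kg_reasoning_chain("diagnose_fault")
    lines = [f"- {s}: {retrieve_cases(s)}" for s in steps]
    return f"Question: {question}\nFollow this plan:\n" + "\n".join(lines) + "\nAnswer:"

print(build_prompt("Why does the device keep rebooting?"))
```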

8. Unsupervised Visual CoT (UV-CoT) -> https://huggingface.co/papers/2504.18397
Performs preference comparisons between model-generated bounding boxes: it generates and ranks model responses to visual regions, using this feedback to guide training and improve image-level reasoning.
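
A rough sketch of how preference pairs over proposed image regions might be collected; the region proposer, the scoring signal, and the data shapes are hypothetical stand-ins, not the paper's pipeline:

```python
# Hedged sketch of building preference pairs over model-proposed image regions,
# which could then feed a DPO-style preference objective.

from itertools import combinations

def propose_regions(image_id: str, n: int = 3) -> list[tuple[int, int, int, int]]:
    return [(10 * i, 10 * i, 50 + 10 * i, 50 + 10 * i) for i in range(n)]  # placeholder boxes

def answer_with_region(image_id: str, box) -> str:
    return f"answer conditioned on region {box} of {image_id}"  # placeholder

def score_answer(answer: str) -> float:
    return float(len(answer) % 7)  # placeholder preference score

def preference_pairs(image_id: str):
    boxes = propose_regions(image_id)
    scored = [(b, score_answer(answer_with_region(image_id, b))) for b in boxes]
    pairs = []
    for (b1, s1), (b2, s2) in combinations(scored, 2):
        if s1 != s2:
            chosen, rejected = (b1, b2) if s1 > s2 else (b2, b1)
            pairs.append({"chosen": chosen, "rejected": rejected})
    return pairs

print(preference_pairs("img_001"))
```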

9. CoTAL -> https://huggingface.co/papers/2504.02323
Combines CoT with active learning, using curriculum-aligned assessments, human-in-the-loop prompt design, and teacher/student feedback to improve automated grading. It boosts GPT-4's accuracy by up to 24.5%.
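
The human-in-the-loop part could look roughly like this: grade with a CoT prompt, send low-confidence items to a human, and fold the corrections back in as new few-shot exemplars. The grading call, confidence heuristic, and review step are illustrative assumptions:

```python
# Hedged sketch of CoT grading with an active-learning loop.
# All helpers are hypothetical placeholders.

def cot_grade(answer: str, rubric: str, examples: list[str]) -> tuple[int, float]:
    # Placeholder for an LLM call that returns (score, confidence).
    return (len(answer) % 4, 0.4 if "?" in answer else 0.9)

def active_learning_round(answers: list[str], rubric: str, examples: list[str]):
    graded = [(a, *cot_grade(a, rubric, examples)) for a in answers]
    uncertain = [g for g in graded if g[2] < 0.5]       # low confidence -> human review
    for answer, _, _ in uncertain:
        corrected = 3                                   # pretend a teacher supplied this score
        examples.append(f"Answer: {answer}\nScore: {corrected}")  # new few-shot exemplar
    return graded, examples

graded, examples = active_learning_round(
    ["Photosynthesis converts light to chemical energy.", "Because plants?"],
    rubric="Explain photosynthesis (0-3 points).",
    examples=[],
)
print(graded, examples, sep="\n")
```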

10. Deconstructing Long CoT (DLCoT) -> https://huggingface.co/papers/2503.16385
Enhances distillation data by segmenting it, simplifying solutions, and optimizing intermediate error states, improving model performance and token efficiency.
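
A toy sketch of the segmentation/simplification idea applied to a raw long-CoT trace; the step markers and cleanup rules below are assumptions, not the paper's actual pipeline:

```python
# Hedged sketch of cleaning a long CoT trace for distillation: split it into
# steps, drop unresolved backtracking and verbatim repeats, and keep error
# states that actually get corrected.

def segment(trace: str) -> list[str]:
    return [s.strip() for s in trace.split("\n") if s.strip()]

def simplify(steps: list[str]) -> list[str]:
    cleaned, seen = [], set()
    for step in steps:
        if step.lower().startswith("wait,") and "so actually" not in step.lower():
            continue            # drop dead-end backtracking that is never resolved
        if step in seen:
            continue            # drop verbatim repeated steps
        seen.add(step)
        cleaned.append(step)
    return cleaned

raw_trace = """Step 1: 12 * 15 = 180
Wait, let me restart from scratch.
Step 1: 12 * 15 = 180
Step 2: 180 + 20 = 200
Wait, 180 + 20 is 200, so actually the earlier sum was right.
Answer: 200"""

print("\n".join(simplify(segment(raw_trace))))
```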
