Ksenia Se

Kseniase

AI & ML interests

None yet

Recent Activity

reacted to their post with 👍 1 day ago
replied to their post 3 days ago

Organizations

Turing Post · Journalists on Hugging Face · Social Post Explorers · Hugging Face Discord Community · Sandbox

Posts 18

16 new research papers on inference-time scaling:

Over the last couple of weeks, a large number of studies on inference-time scaling have emerged. And it's so cool, because each new paper adds a trick to the toolbox, making LLMs more capable without needing to scale up the models' parameter count.

So here are 13 new methods + 3 comprehensive studies on test-time scaling:

1. Inference-Time Scaling for Generalist Reward Modeling (2504.02495)
Probably the most popular study. It proposes to boost inference-time scalability by improving reward modeling. To enhance performance, DeepSeek-GRM uses adaptive critiques, parallel sampling, a pointwise generative RM, and Self-Principled Critique Tuning (SPCT)
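
To make the voting idea concrete, here is a minimal sketch of parallel critique sampling with score aggregation. The `critique_response` function is a made-up stand-in for a call to a generative reward model, not DeepSeek-GRM's actual interface:

```python
# Minimal sketch: sample several generative critiques of the same response in
# parallel, read off a pointwise score from each, and aggregate by voting.
import random
from collections import Counter

def critique_response(question: str, response: str) -> tuple[str, int]:
    """Hypothetical generative RM call: returns (critique text, score 1-10)."""
    score = random.randint(1, 10)  # placeholder for a real model's judgment
    return f"Critique of '{response[:20]}...': score {score}", score

def scaled_reward(question: str, response: str, n_samples: int = 8) -> int:
    """Sample several critiques and vote on the pointwise score."""
    scores = [critique_response(question, response)[1] for _ in range(n_samples)]
    return Counter(scores).most_common(1)[0][0]  # majority vote; mean also works

print(scaled_reward("Is 17 prime?", "Yes, 17 is prime.", n_samples=16))
```

More samples means more inference compute but a more reliable reward signal, which is the scaling knob this paper turns.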

2. T1: Tool-integrated Self-verification for Test-time Compute Scaling in Small Language Models (2504.04718)
Allows small models to use external tools, like code interpreters and calculators, to enhance self-verification
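
A rough sketch of the idea, with a toy arithmetic task and a tiny calculator standing in for the external tool (none of this is the paper's actual setup):

```python
# A small model proposes candidate answers; a calculator-style tool checks them
# and only tool-verified candidates survive.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

def calculator(expr: str) -> float:
    """Tiny arithmetic evaluator standing in for an external calculator tool."""
    def ev(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

question = "What is 37 * 24 + 12?"
candidates = [900, 888, 870]  # hypothetical samples from a small model
verified = [c for c in candidates if calculator("37 * 24 + 12") == c]
print(verified)  # only the tool-verified answer survives -> [900]
```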

3. Z1: Efficient Test-time Scaling with Code (2504.00810)
Proposes to train LLMs on code-based reasoning paths to make test-time scaling more efficient, limiting unnecessary tokens with a special dataset and a Shifted Thinking Window

4. GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning (2504.00891)
Introduces GenPRM, a generative PRM that uses CoT reasoning and code verification for step-by-step judgment. With only 23K training examples, GenPRM outperforms prior PRMs and larger models
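
A hedged sketch of step-level generative judging: the `judge` stub below stands in for the PRM, which would generate both the critique and the check; only the execute-and-score loop is illustrated:

```python
# Instead of a scalar reward head, a judge model writes a CoT critique plus a
# small verification snippet for each reasoning step; executing the snippet
# yields the step label.
def judge(problem: str, step: str) -> tuple[str, str]:
    """Hypothetical PRM call: returns (CoT critique, Python check expression)."""
    # A real generative PRM would produce both; this is hard-coded for the demo.
    return "The step claims 12 * 12 = 144, which can be checked directly.", "12 * 12 == 144"

def score_step(problem: str, step: str) -> float:
    """Run the generated check; the step score comes from code execution."""
    critique, check = judge(problem, step)
    try:
        ok = bool(eval(check, {"__builtins__": {}}))
    except Exception:
        ok = False
    return 1.0 if ok else 0.0

print(score_step("Compute 12^2 + 1", "First, 12 * 12 = 144."))  # -> 1.0
```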

5. Can Test-Time Scaling Improve World Foundation Model? (2503.24320)
SWIFT test-time scaling framework improves World Models' performance without retraining, using strategies like fast tokenization, Top-K pruning, and efficient beam search
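
For intuition, here is a generic beam search with Top-K pruning; the candidate generator and scorer are toy stand-ins, not SWIFT's world-model components:

```python
# Generic beam search with Top-K pruning: at each step only the k most promising
# continuations per beam are expanded, and only the best beams are kept.
def expand(seq: list[str]) -> list[str]:
    """Toy candidate continuations for a partial rollout."""
    return ["a", "b", "c", "d"]

def score(seq: list[str]) -> float:
    """Toy scorer: prefer rollouts with more 'a' tokens."""
    return seq.count("a")

def beam_search(steps: int = 3, beam_width: int = 2, top_k: int = 3):
    beams = [([], 0.0)]
    for _ in range(steps):
        candidates = []
        for seq, _ in beams:
            # Top-K pruning: keep only the k best continuations per beam.
            conts = sorted(expand(seq), key=lambda t: score(seq + [t]), reverse=True)[:top_k]
            candidates += [(seq + [t], score(seq + [t])) for t in conts]
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams

print(beam_search())  # best rollouts found under the pruned search budget
```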

6. Relevance Isn't All You Need: Scaling RAG Systems With Inference-Time Compute Via Multi-Criteria Reranking (2504.07104)
Proposes REBEL for RAG systems scaling, which uses multi-criteria optimization with CoT prompting for better performance-speed tradeoffs as inference compute increases
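
A minimal sketch of the multi-criteria reranking step, with made-up criteria, weights and scores (REBEL derives its criteria via CoT prompting rather than hard-coding them):

```python
# Retrieved passages are scored on several criteria, not relevance alone, and
# re-ordered by a weighted combination of those scores.
docs = [
    {"text": "doc A", "relevance": 0.9, "recency": 0.2, "credibility": 0.4},
    {"text": "doc B", "relevance": 0.7, "recency": 0.9, "credibility": 0.8},
    {"text": "doc C", "relevance": 0.8, "recency": 0.5, "credibility": 0.9},
]
weights = {"relevance": 0.5, "recency": 0.2, "credibility": 0.3}

def multi_criteria_score(doc: dict) -> float:
    """Weighted combination of per-criterion scores."""
    return sum(weights[c] * doc[c] for c in weights)

reranked = sorted(docs, key=multi_criteria_score, reverse=True)
print([d["text"] for d in reranked])  # spending more compute on scoring changes the order
```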

7. φ-Decoding: Adaptive Foresight Sampling for Balanced Inference-Time Exploration and Exploitation (2503.13288)
Proposes a φ-Decoding strategy that uses foresight sampling, clustering and adaptive pruning to estimate and select optimal reasoning steps
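
A rough sketch of foresight-guided step selection, with stubbed `propose_steps` and `rollout_value` functions in place of the real sampler and lookahead:

```python
# Sample candidate next reasoning steps, estimate each one's value with short
# lookahead rollouts, prune low-value candidates, and commit to the best step.
import random

def propose_steps(prefix: str, n: int = 6) -> list[str]:
    """Stand-in for sampled candidate steps."""
    return [f"step-{i}" for i in range(n)]

def rollout_value(prefix: str, step: str, n_rollouts: int = 4) -> float:
    """Stand-in for foresight: average score of a few simulated continuations."""
    return sum(random.random() for _ in range(n_rollouts)) / n_rollouts

def select_step(prefix: str, keep_ratio: float = 0.5) -> str:
    candidates = propose_steps(prefix)
    scored = sorted(((rollout_value(prefix, s), s) for s in candidates), reverse=True)
    survivors = scored[: max(1, int(len(scored) * keep_ratio))]  # adaptive pruning
    return survivors[0][1]                                       # best foresight value

print(select_step("Solve: 2x + 3 = 11."))
```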

Read further below 👇

Also, subscribe to the Turing Post https://www.turingpost.com/subscribe

Articles 39

Topic 33: Slim Attention, KArAt, XAttention and Multi-Token Attention Explained – What's Really Changing in Transformers?

models

None public yet

datasets

None public yet