Just published: "OpenEvolve: Open-Source Evolutionary Code Optimization with Real-World GPU Kernel Discovery"
We built the first open-source implementation of Google's AlphaEvolve system and used it to automatically discover GPU kernel optimizations that outperform human engineers!
Key results:
- 21.8% average decode speed improvement on Apple Silicon
- 36.7% improvement on long-context transformer attention
- Discovered novel vectorization patterns and a 2-pass softmax algorithm
The system evolved a Metal kernel for Qwen3's Grouped Query Attention from a basic 3-pass implementation into something with sophisticated Apple Silicon optimizations that would take experts months to discover manually. The evolved kernel automatically found the optimal vec<T,8> operations for 128-dim attention heads and fused softmax computation with value accumulation.
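For intuition about what that fusion buys, here is a minimal NumPy sketch (purely illustrative, not the evolved Metal kernel) contrasting a naive 3-pass attention row with a 2-pass version that folds the softmax weights directly into the value accumulation:

```python
import numpy as np

def attention_row_3pass(q, K, V):
    """Naive 3 passes over the keys: scores, normalized softmax, weighted sum."""
    s = K @ q                        # pass 1: attention scores
    p = np.exp(s - s.max())
    p /= p.sum()                     # pass 2: full softmax normalization
    return p @ V                     # pass 3: value accumulation

def attention_row_2pass(q, K, V):
    """2 passes: scores + running max, then exp-weights fused with accumulation."""
    s = K @ q                        # pass 1: scores (max tracked alongside)
    m = s.max()
    acc = np.zeros(V.shape[1])
    denom = 0.0
    for i in range(len(s)):          # pass 2: softmax fused with value accumulation
        w = np.exp(s[i] - m)
        denom += w
        acc += w * V[i]
    return acc / denom               # single deferred normalization

q = np.random.randn(128)             # 128-dim head, as in the post
K, V = np.random.randn(64, 128), np.random.randn(64, 128)
assert np.allclose(attention_row_3pass(q, K, V), attention_row_2pass(q, K, V))
```

On the GPU the second pass maps to vectorized loads (the vec<T,8> operations mentioned above) rather than a Python loop.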
Really excited about the potential here - imagine evolutionary algorithms automatically discovering optimizations across all our AI infrastructure. What would you want to optimize with this approach?
Adaptive Classifier: Dynamic Text Classification with Strategic Learning
New text classification system that learns continuously without catastrophic forgetting. Achieved 22.2% robustness improvement on adversarial datasets while maintaining clean data performance.
THE PROBLEM
Traditional classifiers require complete retraining when adding new classes. That's expensive and time-consuming, especially with adversarial users trying to game the system.
KEY INNOVATIONS
- Hybrid memory-neural architecture (prototype-based + neural adaptation)
- Strategic classification using game theory to predict and defend against manipulation
- Elastic Weight Consolidation to prevent catastrophic forgetting
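For reference, the EWC piece is a quadratic penalty that anchors the weights most important to previously learned classes; a minimal PyTorch sketch of the idea (illustrative, not the library's actual code):

```python
import torch

def ewc_penalty(model, fisher, old_params, lam=0.4):
    """EWC regularizer: penalize drift on parameters the Fisher information
    marks as important for previously learned classes. `fisher` and
    `old_params` are dicts keyed by parameter name, snapshotted after the
    previous task; `lam` is an illustrative strength."""
    loss = 0.0
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss

# Training on a new class then becomes:
#   total_loss = task_loss + ewc_penalty(model, fisher, old_params)
```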
RESULTS
Tested on the AI-Secure/adv_glue dataset:
- Clean data: 80.0% → 82.2% (+2.2%)
- Manipulated data: 60.0% → 82.2% (+22.2%)
- Zero performance drop under adversarial attacks
DeepThink Plugin: Bringing Gemini 2.5's Parallel Reasoning to Open Models
Just released an open-source plugin that implements Google's "Deep Think" reasoning approach for models like DeepSeek R1, Qwen3, and other open models.
Google's recent Gemini 2.5 report introduced Deep Think - a technique where models generate multiple hypotheses in parallel and critique them before arriving at final answers. It achieves SOTA results on math olympiads and competitive coding benchmarks.
Our implementation works by modifying the inference pipeline to explore multiple solution paths simultaneously, then synthesizing the best approach. Instead of single-pass generation, models run an internal debate before responding.
Key features:
- Works with any model that supports structured reasoning patterns
- Implements parallel thinking during response generation
- Particularly effective for complex reasoning tasks, math, and coding problems
- Increases inference time but significantly improves answer quality
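Conceptually the pipeline is simple; here's a condensed sketch against any OpenAI-compatible endpoint (endpoint, model name, and prompts are illustrative assumptions, not the plugin's actual API):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")
MODEL = "deepseek-r1"  # any open reasoning model served behind the endpoint

def deep_think(question: str, n_paths: int = 4) -> str:
    # 1) sample several independent hypotheses (the plugin runs these in parallel)
    hypotheses = [
        client.chat.completions.create(
            model=MODEL, temperature=1.0,
            messages=[{"role": "user", "content": question}],
        ).choices[0].message.content
        for _ in range(n_paths)
    ]
    # 2) critique and synthesize: an internal debate over the candidates
    joined = "\n\n".join(f"Candidate {i + 1}:\n{h}" for i, h in enumerate(hypotheses))
    prompt = (f"{question}\n\nCandidate solutions:\n{joined}\n\n"
              "Critique each candidate, then produce the single best final answer.")
    return client.chat.completions.create(
        model=MODEL, temperature=0.2,
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
```

The n_paths + 1 model calls are where the inference-time cost mentioned below comes from.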
The plugin won the Cerebras & OpenRouter Qwen 3 Hackathon, validating that this approach works well beyond Google's proprietary implementation.
The goal is democratizing advanced reasoning capabilities that were previously locked behind APIs. Perfect for researchers and practitioners working with local deployments who want enhanced reasoning without dependency on proprietary services.
Performance notes: Currently about 2-3x slower inference but much better results on complex problems. Working on adaptive triggering to only activate when problems benefit from parallel reasoning.
Would love feedback from the HF community and collaborations on optimizing the approach further. Open to PRs and always interested in making open models more capable.
New Research: Theoretical Foundations for In-Context Learning in Transformers
I'm excited to share our latest theoretical work that formally proves an interesting property of large language models: base transformer models can approximate fine-tuned capabilities using only inference-time techniques like in-context learning.
The core question we investigated: Can specialized behaviors typically acquired through expensive supervised fine-tuning be elicited from base models without any parameter updates?
Our theoretical contribution: We provide a formal proof, grounded in the Turing completeness of transformers, showing that this is indeed possible under certain assumptions. The work establishes mathematical bounds on the minimal dataset sizes needed for approximation.
Key theoretical results:
- For text generation tasks: O(mV/ε²) examples suffice (where m = number of contexts, V = vocabulary size, ε = error tolerance)
- For linear classification: O(d/ε) examples (where d = input dimension)
- Extensions to finite-context scenarios with practical bounds
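Stated informally in LaTeX, the text-generation result reads roughly as follows (a paraphrase under the paper's idealized assumptions, not the exact statement):

```latex
\begin{theorem}[informal]
Let $f_{\mathrm{ft}}$ be a fine-tuned model over $m$ contexts with vocabulary
size $V$. For any error tolerance $\varepsilon > 0$ there exists an in-context
dataset $D$ with
\[
  |D| \;=\; O\!\left(\frac{mV}{\varepsilon^{2}}\right)
\]
such that the base model conditioned on $D$ matches the output distribution of
$f_{\mathrm{ft}}$ to within $\varepsilon$ on each of the $m$ contexts.
\end{theorem}
```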
This work helps explain why techniques like few-shot prompting, retrieval-augmented generation, and in-context learning work so effectively in practice. It bridges formal computer science theory with empirical observations about modern language models.
While the assumptions are idealized (unbounded computational resources, full dataset access), the results provide mathematical foundations for understanding inference-time adaptation strategies that are increasingly important in AI deployment.
Inspired by Hugging Face's official MCP server, I've developed a complementary tool that exposes my semantic search API to enhance discovery across the HF platform.
Key capabilities:
- AI-powered semantic search for models and datasets
- Parameter count analysis via safetensors metadata
- Trending content discovery
- Find-similar functionality for models and datasets
- 11 tools total for enhanced ecosystem navigation
The semantic search goes beyond simple keyword matching, understanding context and relationships between different models and datasets.
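The MCP server is what exposes these as tools, but the safetensors-based parameter counting can be reproduced with plain huggingface_hub (a sketch assuming a recent huggingface_hub version; the repo id is just an example):

```python
from huggingface_hub import HfApi

api = HfApi()

# keyword search as a baseline; the MCP tool layers semantic search on top of this
for m in api.list_models(search="reasoning", sort="downloads", limit=5):
    print(m.id)

# parameter count analysis via safetensors metadata - no weights are downloaded
meta = api.get_safetensors_metadata("Qwen/Qwen3-8B")
total = sum(meta.parameter_count.values())  # counts per dtype, summed
print(f"~{total / 1e9:.1f}B parameters")
```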
Example query: "Find around 10 reasoning Hugging Face datasets published in 2025 focusing on topics other than maths and science. Show a link and a short summary for each dataset." (results in video!)
At every bio/med/chem meeting I go to, I always get the same questions: "Why are you sharing a Google Drive link with me for this?" and "Do you have any plans to publish your model weights and datasets on Hugging Face?" Today I finally got a good answer that explains everything:

Apparently there is some kind of government censorship on this (in the USA, but I'm sure elsewhere too): researchers are told they are not allowed to publish, as it would be considered a "data leak," which is illegal.

This is terrible! But the good news is that we can do something about it!
A series of personal finance advisor models that resolve queries by trying to understand the person's psychological state and relevant context.
These are still prototypes that have much room for improvement.
What's included in this release:
- Akhil-Theerthala/Kuvera-8B-v0.1.0: Qwen3-8B, meticulously fine-tuned on approximately 20,000 personal-finance inquiries.
- Akhil-Theerthala/Kuvera-14B-v0.1.0: a LoRA on DeepSeek-R1-Distill-Qwen-14B, trained on about 10,000 chain-of-thought queries.
For those interested, the models and datasets are accessible for free (links in the comments). If you are curious about the upcoming version's roadmap, let's connect: there are many more developments I plan to make, and I would definitely appreciate any help.
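If you want to try one, a minimal transformers loading sketch (illustrative; check the model card for the exact chat template and recommended generation settings):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Akhil-Theerthala/Kuvera-8B-v0.1.0"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content":
             "I just got my first job. How should I split my salary between "
             "savings and an emergency fund?"}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True,
                                 return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=512)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```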
We just implemented Andrej Karpathy's "third paradigm" for LLM learning!
System Prompt Learning (SPL) enables LLMs to automatically learn problem-solving strategies from experience, rather than relying on static prompts.
How it works: Your LLM builds a database of effective strategies, selects the best ones for each problem, and refines them over time based on success rates.
The best part? All strategies are human-readable and the system gets progressively better at problem types you use frequently.
Key benefits:
- Cumulative learning over time
- Transparent, inspectable strategies
- Works with any OpenAI-compatible API
- Simple integration: just add the "spl-" prefix to your model name
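Usage is just the standard OpenAI client pointed at a running optillm proxy (the port and model name below are illustrative assumptions):

```python
from openai import OpenAI

# optillm runs as a local OpenAI-compatible proxy in front of your model
client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

resp = client.chat.completions.create(
    model="spl-gpt-4o-mini",  # the "spl-" prefix activates System Prompt Learning
    messages=[{"role": "user", "content": "Solve: if 3x + 7 = 22, what is x?"}],
)
print(resp.choices[0].message.content)
```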
Built as an open-source plugin in optillm. After 500 queries, our system developed 129 strategies and refined 97 of them!
This feels like a genuine step toward AI that learns from experience while staying completely interpretable.
Introducing AutoThink: Adaptive reasoning for LLMs that improves performance by 43% on reasoning benchmarks!
Instead of using fixed thinking budgets, AutoThink:
- Classifies query complexity (HIGH/LOW) using adaptive classification
- Dynamically allocates thinking tokens based on complexity
- Uses steering vectors derived from Pivotal Token Search to guide reasoning patterns
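In rough pseudocode, the control flow looks like this (a simplified sketch; classify_complexity, the token budgets, and the thinking-budget parameter are illustrative stand-ins for the real classifier and steering machinery):

```python
def autothink_generate(model, query: str) -> str:
    # 1) classify query complexity; the real system uses an adaptive classifier
    complexity = classify_complexity(query)          # hypothetical helper -> "HIGH"/"LOW"

    # 2) allocate thinking tokens based on complexity (budgets are illustrative)
    budget = 6144 if complexity == "HIGH" else 1024

    # 3) generate with the capped reasoning phase; in AutoThink this step is also
    #    steered with vectors derived from Pivotal Token Search (omitted here)
    return model.generate(query, max_thinking_tokens=budget)  # hypothetical kwarg
```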
Results on DeepSeek-R1-Distill-Qwen-1.5B:
- GPQA-Diamond: 31.06% vs 21.72% baseline (+9.34 points)
- MMLU-Pro: 26.38% vs 25.58% baseline (+0.8 points)
- Uses fewer tokens than baseline approaches
Works with any local reasoning model - DeepSeek, Qwen, Llama, custom models. The technique combines our Pivotal Token Search (PTS) implementation with our adaptive classification framework.
Hey everyone! Just released **OpenEvolve** - an open-source implementation of Google DeepMind's AlphaEvolve system.
It's an evolutionary coding agent that uses LLMs to discover and optimize algorithms. I successfully replicated DeepMind's results on circle packing (99.97% match!) and evolved a random search into a simulated annealing algorithm.
Key features:
- Evolves entire codebases (not just single functions)
- Works with any OpenAI-compatible API
- LLM ensemble approach for better results
- Multi-objective optimization
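The core loop is classic evolutionary search with an LLM as the mutation operator; a condensed sketch (illustrative, not OpenEvolve's actual API):

```python
import random

def evolve(seed_program, llm_mutate, evaluate, generations=100, pop_size=20):
    """Evolutionary code search: the LLM proposes rewrites, a task-specific
    scorer selects survivors. `llm_mutate` and `evaluate` are user-supplied."""
    population = [(seed_program, evaluate(seed_program))]
    for _ in range(generations):
        parent, _ = max(random.sample(population, min(3, len(population))),
                        key=lambda x: x[1])           # tournament selection
        child = llm_mutate(parent)                    # LLM rewrites the program
        population.append((child, evaluate(child)))
        population.sort(key=lambda x: x[1], reverse=True)
        population = population[:pop_size]            # keep the fittest programs
    return population[0]                              # best (program, score) pair
```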