Kuldeep Singh Sidhu
singhsidhukuldeep
AI & ML interests
😃 TOP 3 on HuggingFace for posts 🤗 Seeking contributors for a completely open-source 🚀 Data Science platform! singhsidhukuldeep.github.io
Recent Activity
posted an update about 24 hours ago
Groundbreaking Research Alert: Can Large Language Models Really Understand Personal Preferences?
A fascinating new study from researchers at University of Notre Dame, Xi'an Jiaotong University, and Université de Montréal introduces PERRECBENCH - a novel benchmark for evaluating how well Large Language Models (LLMs) understand user preferences in recommendation systems.
Key Technical Insights:
- The benchmark eliminates user rating bias and item quality factors by using relative ratings and grouped ranking approaches
- Implements three distinct ranking methods: pointwise rating prediction, pairwise comparison, and listwise ranking
- Evaluates 19 state-of-the-art LLMs including Claude-3.5, GPT-4, Llama-3, Mistral, and Qwen models
- Uses Kendall's tau correlation to measure ranking accuracy (see the sketch after this list)
- Incorporates a BM25 retriever with a configurable number of history items (k=4 by default)
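To make the evaluation concrete, here is a minimal sketch of how a Kendall's-tau ranking check could look in Python with SciPy. The relative ratings below are made-up numbers for illustration, not data from the paper.

```python
from scipy.stats import kendalltau

# Hypothetical example: one grouped set of items for a single user.
# "Relative" ratings subtract out item-level effects, so only the
# user-specific preference signal is compared.
true_relative = [0.8, -0.3, 0.5, -1.0, 0.1]   # ground-truth relative ratings
pred_relative = [0.6, 0.0, 0.7, -0.8, -0.2]   # LLM-predicted relative ratings

tau, p_value = kendalltau(true_relative, pred_relative)
print(f"Kendall's tau = {tau:.3f} (p = {p_value:.3f})")
```

A tau near 1 means the predicted ordering matches the user's true preference ordering; a tau near 0 means the model is essentially guessing.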
Notable Findings:
- Current LLMs struggle with true personalization, achieving only moderate correlation scores
- Larger models don't always perform better, challenging the expectation that scale alone improves personalization
- Pairwise and listwise ranking methods outperform pointwise approaches
- Open-source models like Mistral-123B and Llama-3-405B compete well with proprietary models
- A weight-merging strategy shows promise for improving personalization capabilities (a generic sketch follows this list)
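For readers unfamiliar with weight merging, the sketch below shows the generic idea of linearly interpolating two checkpoints' parameters. It is an illustrative recipe only; the paper's exact merging strategy may differ.

```python
import torch

def merge_state_dicts(sd_a: dict, sd_b: dict, alpha: float = 0.5) -> dict:
    """Linearly interpolate matching parameters from two checkpoints."""
    return {name: alpha * sd_a[name] + (1.0 - alpha) * sd_b[name] for name in sd_a}

# Toy usage with dummy tensors standing in for two fine-tuned models.
sd_a = {"layer.weight": torch.ones(2, 2)}
sd_b = {"layer.weight": torch.zeros(2, 2)}
merged = merge_state_dicts(sd_a, sd_b, alpha=0.5)
print(merged["layer.weight"])  # all values are 0.5
```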
The research reveals that while LLMs excel at many tasks, they still face significant challenges in understanding individual user preferences. This work opens new avenues for improving personalized recommendation systems and highlights the importance of developing better evaluation methods.
A must-read for anyone interested in LLMs, recommender systems, or personalization technology. The team has made their benchmark and code publicly available for further research.
posted an update 3 days ago
While everyone is buzzing about DeepSeek AI's groundbreaking open-source R1 release, ByteDance has quietly launched something remarkable: Trae, an adaptive AI IDE that's redefining the development experience. Unlike competitors such as Cursor, it's completely FREE!
Trae is a sophisticated development environment built on Microsoft's VS Code foundation (with a nice skin on top), offering unlimited free access to both OpenAI's GPT-4o and Anthropic's Claude-3.5-Sonnet models.
Technical Highlights:
- Real-time AI pair programming with comprehensive codebase understanding
- Natural language commands for code generation and project-level development
- Intelligent task decomposition for automated planning and execution
- Seamless VS Code and Cursor configuration compatibility
- Multi-language support with specialized optimization for English and Chinese interfaces
Currently available for macOS (Windows version in development), Trae is distributed through ByteDance's Singapore subsidiary, Spring (SG) Pte. What sets it apart is its ability to handle mixed-language workflows and enhanced localization features that address common pain points in existing IDEs.
The AI assistant can generate code snippets, optimize logic, and even create entire projects from scratch through natural language prompts. It also features an innovative AI Chat system accessible via keyboard shortcuts for real-time coding assistance.
For developers looking to enhance their productivity without breaking the bank, Trae offers enterprise-grade AI capabilities completely free during its initial release. This move by ByteDance signals a significant shift in the AI IDE landscape, challenging established players with a robust, accessible alternative.
Try it at trae.ai
posted an update 4 days ago
Exciting Research Alert: Revolutionizing Long-Context Language Models!
A groundbreaking paper from researchers at University of Edinburgh and Apple introduces ICR² (In-context Retrieval and Reasoning), addressing a critical challenge in long-context language models (LCLMs).
Key Innovations:
- A novel benchmark that realistically evaluates LCLMs' ability to process and reason with extended contexts
- Three innovative approaches that significantly improve LCLM performance:
  - Retrieve-then-generate fine-tuning
  - Retrieval-attention probing
  - Joint retrieval head training
The most impressive result? Their best approach, implemented on Mistral-7B with just a 32K token limit, achieves performance comparable to GPT-4 while using significantly fewer parameters.
Technical Deep Dive:
The team's approach leverages attention head mechanisms to filter and denoise long contexts during decoding. Their retrieve-then-generate method implements a two-step process where the model first identifies relevant passages before generating responses. The architecture includes dedicated retrieval heads working alongside generation heads, enabling joint optimization during training.
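As a rough illustration of the retrieve-then-generate idea (not the authors' code), the toy sketch below scores passages with simple token overlap, keeps the top-k, and builds a filtered prompt for the generator; in the paper this filtering is learned rather than heuristic.

```python
def score_passage(question: str, passage: str) -> float:
    """Toy relevance score: fraction of question tokens found in the passage."""
    q_tokens = set(question.lower().split())
    p_tokens = set(passage.lower().split())
    return len(q_tokens & p_tokens) / max(len(q_tokens), 1)

def retrieve_then_generate(question: str, passages: list[str], k: int = 2) -> str:
    # Step 1: retrieve -- keep only the k most relevant passages from the long context.
    kept = sorted(passages, key=lambda p: score_passage(question, p), reverse=True)[:k]
    # Step 2: generate -- condition the answer on the filtered context only.
    prompt = "\n\n".join(kept) + f"\n\nQuestion: {question}\nAnswer:"
    return prompt  # a real system would feed this prompt to the LCLM

passages = [
    "The benchmark was introduced by researchers in Edinburgh.",
    "Completely unrelated filler text about cooking.",
    "It evaluates retrieval and reasoning over long contexts.",
]
print(retrieve_then_generate("Who introduced the benchmark?", passages, k=2))
```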
What sets this apart is their innovative use of the Gumbel-TopK trick for differentiable retrieval and their sophisticated attention probing mechanism that identifies and utilizes retrieval-focused attention heads.
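For those curious about the Gumbel-TopK trick, here is a minimal PyTorch sketch of perturbing retrieval scores with Gumbel noise and taking a straight-through top-k so gradients can flow back into the scorer. This is a generic illustration of the technique under my own assumptions, not the paper's implementation.

```python
import torch

def gumbel_topk_mask(scores: torch.Tensor, k: int, tau: float = 1.0) -> torch.Tensor:
    """Stochastic, differentiable selection of k items from a score vector.

    Forward pass: a hard 0/1 mask over the top-k Gumbel-perturbed scores.
    Backward pass: gradients flow through a softmax relaxation (straight-through).
    """
    gumbel = -torch.log(-torch.log(torch.rand_like(scores)))  # Gumbel(0, 1) noise
    perturbed = (scores + gumbel) / tau
    hard = torch.zeros_like(scores).scatter(-1, torch.topk(perturbed, k).indices, 1.0)
    soft = torch.softmax(perturbed, dim=-1)
    return hard + (soft - soft.detach())

scores = torch.randn(10, requires_grad=True)  # e.g. one retrieval score per passage
mask = gumbel_topk_mask(scores, k=3)
mask.sum().backward()  # gradients reach `scores` despite the hard selection
print(mask, scores.grad)
```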
Impact:
This research fundamentally changes how we approach long-context processing in LLMs, offering a more efficient alternative to traditional RAG pipelines while maintaining high performance.
singhsidhukuldeep's activity
- Update Request (2) · #2 opened 2 months ago by singhsidhukuldeep
- The model can be started using vllm, but no dialogue is possible. (3) · #2 opened 6 months ago by SongXiaoMao
- Adding chat_template to tokenizer_config.json file (1) · #3 opened 6 months ago by singhsidhukuldeep
- Script request (3) · #1 opened 6 months ago by singhsidhukuldeep
- Requesting script · #1 opened 6 months ago by singhsidhukuldeep