You can also experiment with these models using my Monte Carlo Tree Search generation pipeline:
https://github.com/mkurman/mcts-pytorch
Mariusz Kurman PRO
mkurman
AI & ML interests
AI Tech Lead | MD
Recent Activity
updated a model · 27 minutes ago
mkurman/phi4-MedIT-10B-o1
replied to their post · about 9 hours ago
posted an update · about 9 hours ago
mkurman's activity
replied to their post · about 9 hours ago
posted an update · about 9 hours ago
Post
168
ReasonFlow
Are you fascinated by reasoning models? If so, you won't want to miss my latest project! I've implemented multi-path generation to supercharge the reasoning capabilities of o1-like models. Explore how it can elevate your model on complex reasoning tasks!
https://github.com/mkurman/ReasonFlow
Use it with:
mkurman/phi4-MedIT-10B-o1
- or -
mkurman/llama-3.2-MEDIT-3B-o1
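For a taste of what multi-path generation can look like, here is a minimal sketch in plain transformers (not ReasonFlow's actual API): it samples several reasoning paths and keeps the one with the highest mean token log-probability, a scoring heuristic chosen purely for illustration.

```python
# Minimal multi-path generation sketch (illustrative only; see the ReasonFlow
# repo for the real implementation). We sample several reasoning paths and
# keep the one the model itself scores highest by mean token log-probability.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mkurman/llama-3.2-MEDIT-3B-o1"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = tok.apply_chat_template(
    [{"role": "user", "content": "If 3 workers build a wall in 6 hours, how long do 9 workers take?"}],
    add_generation_prompt=True, return_tensors="pt",
).to(model.device)

best_text, best_score = None, float("-inf")
for _ in range(4):  # four independent reasoning paths
    out = model.generate(
        prompt, do_sample=True, temperature=0.8, top_p=0.95,
        max_new_tokens=512, output_scores=True, return_dict_in_generate=True,
        pad_token_id=tok.eos_token_id,
    )
    seq = out.sequences[0, prompt.shape[1]:]
    # mean log-probability of the generated tokens under the model
    logps = torch.stack(
        [torch.log_softmax(s, -1)[0, t] for s, t in zip(out.scores, seq)]
    )
    if logps.mean().item() > best_score:
        best_score = logps.mean().item()
        best_text = tok.decode(seq, skip_special_tokens=True)

print(best_text)
```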
reacted to Jaward's post · 7 days ago
Post
1337
Huge AI win in medicine!
"Large language of life model" just dropped!
Full paper: https://www.nature.com/articles/s41586-024-08391-z
reacted to prithivMLmods's post · 12 days ago
Post
5852
Reasoning SmolLM2
Fine-tuning SmolLM2 on a lightweight synthetic reasoning dataset for reasoning-specific tasks. Future updates will focus on lightweight, blazing-fast reasoning models. Until then, check out the blog for fine-tuning details.
Blog: https://huggingface.co/blog/prithivMLmods/smollm2-ft
Models:
+ SmolLM2-CoT-360M : prithivMLmods/SmolLM2-CoT-360M
+ Reasoning-SmolLM2-135M : prithivMLmods/Reasoning-SmolLM2-135M
+ SmolLM2-CoT-360M-GGUF : prithivMLmods/SmolLM2-CoT-360M-GGUF
Other details:
+ Demo : prithivMLmods/SmolLM2-CoT-360M
+ Fine-tune notebook : prithivMLmods/SmolLM2-CoT-360M
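The blog has the full recipe; as a rough sketch of what such a fine-tune can look like with TRL (the dataset id and hyperparameters below are placeholder assumptions, not the blog's actual values):

```python
# Rough SFT sketch for a SmolLM2 reasoning fine-tune with TRL. The dataset id
# and hyperparameters are placeholder assumptions; follow the blog above for
# the actual recipe. Expects a dataset with a "messages" (chat) column.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("your-org/synthetic-reasoning-chats", split="train")  # placeholder id

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-360M-Instruct",
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="smollm2-cot-360m",
        max_seq_length=2048,
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        learning_rate=2e-5,
        num_train_epochs=2,
    ),
)
trainer.train()
```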
reacted to openfree's post · 13 days ago
Post
5162
# Protein Genesis AI: Design Proteins with Just a Prompt
## Current Challenges in Protein Design
Traditional protein design faces critical barriers:
- High costs ($1M - $10M+) & long development cycles (2-3 years)
- Complex equipment and expert knowledge required
- Low success rates (<10%)
- Time-consuming experimental validation
## Our Solution: Protein Genesis AI
Transform protein design through simple natural language input:
"Design a protein that targets cancer cells"
"Create an enzyme that breaks down plastic"
### Key Features
- AI-powered automated design
- Real-time analysis & optimization
- Instant 3D visualization
- Immediate PDB file generation
## Applications
### Medical & Industrial
- Drug development
- Antibody design
- Industrial enzymes
- Environmental solutions
### Research & Education
- Basic research
- Educational tools
- Experimental design
- Data analysis
## Key Advantages
- No coding or technical expertise needed
- Results in minutes (vs. years)
- 90% cost reduction
- Accessible anywhere
## Who Needs This?
- Biotech companies
- Pharmaceutical research
- Academic institutions
- Research laboratories
## Why It Matters
Protein Genesis AI democratizes protein design by transforming complex processes into simple text prompts. This breakthrough accelerates scientific discovery, potentially leading to faster drug development and innovative biotechnology solutions. The future of protein design starts with a simple prompt!
openfree/ProteinGenesis
reacted to singhsidhukuldeep's post · 13 days ago
Post
3390
Exciting breakthrough in e-commerce recommendation systems!
Walmart Global Tech researchers have developed a novel Triple Modality Fusion (TMF) framework that revolutionizes how we make product recommendations.
>> Key Innovation
The framework ingeniously combines three distinct data types:
- Visual data to capture product aesthetics and context
- Textual information for detailed product features
- Graph data to understand complex user-item relationships
>> Technical Architecture
The system leverages a Large Language Model (Llama2-7B) as its backbone and introduces several sophisticated components:
Modality Fusion Module
- All-Modality Self-Attention (AMSA) for unified representation
- Cross-Modality Attention (CMA) mechanism for deep feature integration
- Custom FFN adapters to align different modality embeddings
Advanced Training Strategy
- Curriculum learning approach with three complexity levels
- Parameter-Efficient Fine-Tuning using LoRA
- Special token system for behavior and item representation
>> Real-World Impact
The results are remarkable:
- 38.25% improvement in Electronics recommendations
- 43.09% boost in Sports category accuracy
- Significantly higher human evaluation scores compared to traditional methods
Currently deployed in Walmart's production environment, this research demonstrates how combining multiple data modalities with advanced LLM architectures can dramatically improve recommendation accuracy and user satisfaction.
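The post doesn't include code; purely as an illustration of the cross-modality attention idea, here is a toy PyTorch block in which one modality's tokens attend over the others (module names and dimensions are mine, not the TMF implementation):

```python
# Toy cross-modality attention (CMA) block, illustrative only: tokens of one
# modality attend over the concatenated tokens of the other modalities, with
# an FFN adapter and residual connection. Dimensions are kept small for the
# demo; an LLM backbone like Llama2-7B would use d_model=4096.
import torch
import torch.nn as nn

class CrossModalityAttention(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        # FFN adapter to align modality embeddings in a shared space
        self.adapter = nn.Sequential(
            nn.Linear(d_model, 2 * d_model), nn.GELU(), nn.Linear(2 * d_model, d_model)
        )

    def forward(self, query_mod: torch.Tensor, other_mods: torch.Tensor) -> torch.Tensor:
        # query_mod: [B, Tq, D] tokens of one modality (e.g. text)
        # other_mods: [B, Tk, D] concatenated tokens of the remaining modalities
        fused, _ = self.attn(query_mod, other_mods, other_mods)
        return self.norm(query_mod + self.adapter(fused))

# toy usage: fuse text tokens with visual + graph tokens
text = torch.randn(2, 16, 512)
vision_graph = torch.randn(2, 48, 512)
print(CrossModalityAttention()(text, vision_graph).shape)  # torch.Size([2, 16, 512])
```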
reacted to Sri-Vigneshwar-DJ's post · 13 days ago
Post
2331
Combining smolagents with Anthropic's best practices simplifies building powerful AI agents:
1. Code-Based Agents: Write actions as Python code, reducing steps by 30%.
2. Prompt Chaining: Break tasks into sequential subtasks with validation gates.
3. Routing: Classify inputs and direct them to specialized handlers.
4. Fallback: Handle tasks even if classification fails.
https://huggingface.co/blog/Sri-Vigneshwar-DJ/building-effective-agents-with-anthropics-best-pra
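A minimal sketch of patterns 3 and 4 (routing with a fallback) using smolagents; the classifier prompt, categories, and model choice are illustrative, and the model API is assumed to follow smolagents v1.x:

```python
# Minimal routing-with-fallback sketch using smolagents (v1.x API assumed).
# A cheap classification gate picks a specialized agent; anything it cannot
# classify falls through to a generalist handler.
from smolagents import CodeAgent, HfApiModel

model = HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct")
coding_agent = CodeAgent(tools=[], model=model)   # specialized handler
generalist = CodeAgent(tools=[], model=model)     # fallback handler

def route(task: str) -> str:
    # classification gate; a production router might use a small classifier
    label = model([{"role": "user",
                    "content": f"Answer 'code' or 'other' only. Task: {task}"}]).content
    agent = coding_agent if "code" in label.lower() else generalist
    return agent.run(task)

print(route("Write a Python function that reverses a linked list."))
```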
reacted to ezgikorkmaz's post · 13 days ago
Post
1893
If you are interested in adversarial deep reinforcement learning find the compact reading list below:
https://github.com/EzgiKorkmaz/adversarial-reinforcement-learning
posted an update · 14 days ago
Post
1884
I kindly invite you to try my experimental Llama 3.2 3B with o1-like thinking.
It uses Thoughts when needed, so don't be surprised when it doesn't. It also has a minor bug that requires further fine-tuning (it sometimes starts with <|python_tag|> instead of <Thought>).
Enjoy!
Give it some likes and whatever to make me feel better and motivated to keep going.
mkurman/llama-3.2-MEDIT-3B-o1
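A quick-start sketch for trying it; the <|python_tag|> workaround mirrors the bug described above, and the sampling settings are illustrative:

```python
# Quick-start sketch for the model above, with a workaround for the bug
# mentioned in the post (a stray leading <|python_tag|> where <Thought>
# belongs). Sampling settings are illustrative.
from transformers import pipeline

pipe = pipeline(
    "text-generation", model="mkurman/llama-3.2-MEDIT-3B-o1",
    torch_dtype="bfloat16", device_map="auto",
)

messages = [{"role": "user", "content": "What is 17 * 24? Think it through."}]
out = pipe(messages, max_new_tokens=512, do_sample=True, temperature=0.7)
text = out[0]["generated_text"][-1]["content"]

# strip the stray tag if the bug shows up
if text.lstrip().startswith("<|python_tag|>"):
    text = text.lstrip()[len("<|python_tag|>"):].lstrip()
print(text)
```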
reacted to reddgr's post · about 1 month ago
Post
1850
Thought it would only make sense to share this here. Lately, one of my favorite activities has been annotating prompts and putting them into datasets (reddgr/tl-test-learn-prompts, reddgr/rq-request-question-prompts, reddgr/nli-chatbot-prompt-categorization), which I then use to classify and select chatbot conversations for my website. It's quite fun to use this widget on lmsys/lmsys-chat-1m, but I also use it on my 2 years of talking to chatbots (soon to be a dataset, but there's still a lot of web scraping and ETL work left)... This one in the picture was probably one of the first prompts I wrote to an LLM:
posted an update · about 1 month ago
Post
346
How Do I Contribute (HDIC)
Exciting times to come! We are working on a layer self-esteem technique to score each layer's contribution to the final prediction. For now, it unlocks a lot of knowledge already stored in the weights that we couldn't force the model to extract by further fine-tuning!
reacted to AdinaY's post · about 1 month ago
Post
1348
HunyuanVideo: the new open video generation model by Tencent!
tencent/HunyuanVideo
zh-ai-community/video-models-666afd86cfa4e4dd1473b64c
- 13B parameters: probably the largest open video model to date
- Unified architecture for image & video generation
- Powered by advanced features: MLLM text encoder, 3D VAE, and prompt rewrite
- Delivers stunning visuals, diverse motion, and unparalleled stability
- Fully open with code & weights
reacted to singhsidhukuldeep's post · about 1 month ago
Post
1315
Exciting breakthrough in Document AI! Researchers from UNC Chapel Hill and Bloomberg have developed M3DocRAG, a revolutionary framework for multi-modal document understanding.
The innovation lies in its ability to handle complex document scenarios that traditional systems struggle with:
- Process 40,000+ pages across 3,000+ documents
- Answer questions requiring information from multiple pages
- Understand visual elements like charts, tables, and figures
- Support both closed-domain (single document) and open-domain (multiple documents) queries
Under the hood, M3DocRAG operates through three sophisticated stages:
>> Document Embedding:
- Converts PDF pages to RGB images
- Uses ColPali to project both text queries and page images into a shared embedding space
- Creates dense visual embeddings for each page while maintaining visual information integrity
>> Page Retrieval:
- Employs MaxSim scoring to compute relevance between queries and pages
- Implements inverted file indexing (IVFFlat) for efficient search
- Reduces retrieval latency from 20s to under 2s when searching 40K+ pages
- Supports approximate nearest neighbor search via Faiss
>> Question Answering:
- Leverages Qwen2-VL 7B as the multi-modal language model
- Processes retrieved pages through a visual encoder
- Generates answers considering both textual and visual context
The results are impressive:
- State-of-the-art performance on MP-DocVQA benchmark
- Superior handling of non-text evidence compared to text-only systems
- Significantly better performance on multi-hop reasoning tasks
This is a game-changer for industries dealing with large document volumesβfinance, healthcare, and legal sectors can now process documents more efficiently while preserving crucial visual context.
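To make the retrieval stage concrete, here is a toy version of the late-interaction MaxSim scoring it relies on (shapes and names are illustrative, not the M3DocRAG codebase):

```python
# Toy MaxSim scoring as used in late-interaction retrieval: every query token
# takes its maximum similarity over a page's patch embeddings, and the
# per-token maxima are summed. Illustrative shapes, not the paper's code.
import torch

def maxsim_score(query_emb: torch.Tensor, page_embs: torch.Tensor) -> torch.Tensor:
    # query_emb: [Tq, D] query token embeddings (ColPali-style)
    # page_embs: [P, Tp, D] patch embeddings for P candidate pages
    sim = torch.einsum("qd,ptd->pqt", query_emb, page_embs)  # [P, Tq, Tp]
    return sim.max(dim=-1).values.sum(dim=-1)                # [P]

# toy example: rank 1,000 pages for a 12-token query, keep the top 5
query = torch.randn(12, 128)
pages = torch.randn(1000, 64, 128)
print(maxsim_score(query, pages).topk(5).indices)
```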
reacted to cfahlgren1's post · about 1 month ago
Post
1933
You can just ask things:
"show me messages in the coding category that are in the top 10% of reward model scores"
Download really high-quality instructions from the Llama 3.1 405B synthetic dataset:
argilla/magpie-ultra-v1.0
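Roughly what that natural-language query looks like as code (the column names below are assumptions about the dataset schema; check the dataset card for the actual fields):

```python
# Rough code equivalent of the query above: coding prompts in the top 10% of
# reward-model scores. "category" and "score" are assumed column names --
# check the dataset card for the actual schema.
import numpy as np
from datasets import load_dataset

ds = load_dataset("argilla/magpie-ultra-v1.0", split="train")
df = ds.to_pandas()

coding = df[df["category"] == "coding"]
cutoff = np.percentile(coding["score"], 90)   # 90th-percentile reward score
print(coding[coding["score"] >= cutoff].head())
```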
replied to their post · about 1 month ago
That is an excellent question. I was just googling and searching arXiv. Now I try Elicit, "talk" with papers, and listen to "podcasts" on NotebookLM.
replied to their post · about 2 months ago
Thanks!
reacted to AdinaY's post · about 2 months ago
Post
1485
The 2023 and 2024 top downloaded (all-time) open models on the Hub are both from the Chinese community!
2023: BGE base by BAAI
BAAI/bge-base-en-v1.5
2024: Qwen 2.5 by Alibaba Qwen
Qwen/Qwen2.5-1.5B-Instruct
Can't wait to see what incredible models the Chinese community will bring in 2025!
- Follow https://huggingface.co/zh-ai-community to get the latest updates from the Chinese community
- Explore the 2024 Year in Review: huggingface/open-source-ai-year-in-review-2024
reacted to prithivMLmods's post · about 2 months ago
Post
2645
Milestone for Flux.1 Dev
The Flux.1 Dev model has crossed 10,000 creative public adapters!
https://huggingface.co/models?other=base_model:adapter:black-forest-labs/FLUX.1-dev
This includes:
- 266 Finetunes
- 19 Quants
- 4 Merges
Here's the 10,000th public adapter:
+ strangerzonehf/Flux-3DXL-Partfile-0006
Page:
+ https://huggingface.co/strangerzonehf
Collection:
+ prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be