Hugging Face
Maminisese (minnmamin)
2 followers · 3 following
AI & ML interests
None yet
Recent Activity
reacted to Kseniase's post with 👍 (20 days ago):
13 New types of LoRA

LoRA (Low-Rank Adaptation) is a popular lightweight method for fine-tuning AI models. It doesn't update the full model; instead it adds small trainable components, low-rank matrices, while keeping the original weights frozen. Only these adapters are trained. Many interesting new LoRA variations have come out recently, so it's a great time to look at these 13 clever approaches:

1. T-LoRA → https://huggingface.co/papers/2507.05964
A timestep-dependent LoRA method for adapting diffusion models with a single image. It dynamically adjusts updates and uses orthogonal initialization to reduce overlap, achieving a better fidelity–alignment balance than standard LoRA.

2. SingLoRA → https://huggingface.co/papers/2507.05566
Simplifies LoRA by using one small matrix instead of the usual two, multiplying it by its own transpose (A × Aᵀ). It uses half the parameters of LoRA and avoids scale mismatch between different matrices.

3. LiON-LoRA → https://huggingface.co/papers/2507.05678
Improves control and precision in video diffusion models when training data is limited. It builds on LoRA with three key principles: linear scalability, orthogonality, and norm consistency. A controllable token and modified self-attention enable smooth adjustment of motion.

4. LoRA-Mixer → https://huggingface.co/papers/2507.00029
Combines LoRA with mixture-of-experts (MoE) to adapt LLMs to multiple tasks. It dynamically routes task-specific LoRA experts into the linear projections of attention modules, supporting both joint training and frozen expert reuse.

5. QR-LoRA → https://huggingface.co/papers/2507.04599
Separates content and style when combining multiple LoRA adapters. It applies QR decomposition to structure parameter updates: the orthogonal Q matrix reduces interference between features, while the R matrix captures the specific transformations.

Read further in the comments 👇 If you like it, also subscribe to the Turing Post: https://www.turingpost.com/subscribe
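The core ideas above (standard LoRA's two-matrix low-rank update, and SingLoRA's single-matrix A × Aᵀ variant) can be sketched in a few lines of NumPy. This is a minimal illustration, not any paper's implementation; the layer width `d` and rank `r` are arbitrary example values.

```python
import numpy as np

d, r = 64, 4  # hypothetical layer width and adapter rank

# Standard LoRA: the pretrained weight W stays frozen; a low-rank
# update B @ A is added, and only A (r x d) and B (d x r) are trained.
W = np.random.randn(d, d)          # frozen pretrained weight
A = np.random.randn(r, d) * 0.01   # trainable
B = np.zeros((d, r))               # trainable, zero-init so the update starts at 0
delta_lora = B @ A                 # (d, d) low-rank update
W_adapted = W + delta_lora

# SingLoRA (as described in the post): a single matrix multiplied by
# its own transpose, halving the adapter parameter count.
A_single = np.random.randn(d, r) * 0.01
delta_single = A_single @ A_single.T   # (d, d), symmetric by construction

print(A.size + B.size)  # LoRA adapter params: 2*d*r = 512
print(A_single.size)    # SingLoRA adapter params: d*r = 256
```

Both updates have rank at most `r`, so the adapter cost grows linearly in `d` rather than quadratically; SingLoRA additionally removes the need to balance the scales of two separately initialized matrices.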
upvoted a collection (25 days ago): Text diffusion
liked a dataset (about 2 months ago): nvidia/dynpose-100k
minnmamin's activity
liked 3 datasets (about 2 months ago):
- nvidia/dynpose-100k • Updated May 12 • 668 • 40
- virattt/financial-qa-10K • Viewer • Updated May 31, 2024 • 7k • 623 • 91
- OpenBuddy/R1-0528-Distill • Viewer • Updated Jun 10 • 29.2k • 102 • 9

liked 4 datasets (2 months ago):
- a-m-team/AM-Thinking-v1-RL-Dataset • Viewer • Updated May 21 • 54.8k • 944 • 15
- a-m-team/AM-DeepSeek-R1-Distilled-1.4M • Preview • Updated Mar 30 • 1.5k • 152
- a-m-team/AM-DeepSeek-Distilled-40M • Viewer • Updated May 10 • 11.5M • 12.4k • 44
- a-m-team/AM-Thinking-v1-Distilled • Preview • Updated Jun 12 • 1.4k • 38

liked a model (4 months ago):
- DAMO-NLP-SG/VideoLLaMA3-7B • Visual Question Answering • 8B • Updated Mar 20 • 94k • 63

liked a model (6 months ago):
- Qwen/Qwen2.5-14B-Instruct-1M • Text Generation • 15B • Updated Jan 29 • 21.2k • 316

liked a model (8 months ago):
- Skywork/Skywork-o1-Open-Llama-3.1-8B • Text Generation • 8B • Updated May 14 • 949 • 114

liked 2 datasets (8 months ago):
- O1-OPEN/OpenO1-SFT • Viewer • Updated Apr 22 • 77.7k • 590 • 377
- aporia-ai/rag_hallucinations • Viewer • Updated Aug 29, 2024 • 1k • 58 • 8