Maminisese (minnmamin)
2 followers · 3 following
AI & ML interests: None yet
Recent Activity

Reacted to Kseniase's post with 👍 (20 days ago):
13 New types of LoRA

LoRA (Low-Rank Adaptation) is a popular lightweight method for fine-tuning AI models. Instead of updating the full model, it adds small trainable components (low-rank matrices) while keeping the original weights frozen; only these adapters are trained (minimal sketches of the core update and two of the variants follow the post below). Recently, many interesting new LoRA variants have come out, so it's a great time to look at these 13 clever approaches:

1. T-LoRA → https://huggingface.co/papers/2507.05964
A timestep-dependent LoRA method for adapting diffusion models from a single image. It dynamically adjusts updates per diffusion timestep and uses orthogonal initialization to reduce overlap, achieving a better fidelity-alignment balance than standard LoRA.

2. SingLoRA → https://huggingface.co/papers/2507.05566
Simplifies LoRA by using a single small matrix instead of the usual two, multiplying it by its own transpose (A × Aᵀ). It uses half the parameters of LoRA and avoids scale mismatch between two different matrices.

3. LiON-LoRA → https://huggingface.co/papers/2507.05678
Improves control and precision in video diffusion models when training data is limited. It builds on LoRA with three key principles: linear scalability, orthogonality, and norm consistency. A controllable token and modified self-attention enable smooth adjustment of motion.

4. LoRA-Mixer → https://huggingface.co/papers/2507.00029
Combines LoRA with mixture-of-experts (MoE) to adapt LLMs to multiple tasks. It dynamically routes task-specific LoRA experts into the linear projections of attention modules, supporting both joint training and frozen expert reuse.

5. QR-LoRA → https://huggingface.co/papers/2507.04599
Separates content and style when combining multiple LoRA adapters. It uses QR decomposition to structure parameter updates: the orthogonal Q matrix reduces interference between features, while the R matrix captures the specific transformations.

Read further in the comments 👇

If you like it, also subscribe to the Turing Post: https://www.turingpost.com/subscribe
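To make the base mechanism concrete, here is a minimal PyTorch sketch of a standard LoRA adapter and of the single-matrix SingLoRA variant from item 2. This is an illustration of the update rules as described in the post, not code from the papers or any library; the class names and the default rank r=8 and scaling alpha=16 are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W plus a trainable low-rank update B @ A."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # original weights stay frozen
        # Low-rank factors: A is (r, in), B is (out, r). B starts at zero,
        # so the adapter is an exact no-op before training begins.
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

class SingLoRALinear(nn.Module):
    """SingLoRA-style update: one matrix A, update A @ A.T (half the params)."""
    def __init__(self, features, r=8, alpha=16):
        super().__init__()
        # A @ A.T is (features, features), so this square-weight sketch
        # assumes in and out dimensions match (as in attention projections).
        self.base = nn.Linear(features, features, bias=False)
        self.base.weight.requires_grad_(False)
        # Small random init: the update starts near (but not exactly) zero.
        self.A = nn.Parameter(torch.randn(features, r) * 0.01)
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A @ self.A.T)
```

Zero-initializing B in the standard version is what lets training start from the unmodified base model; SingLoRA cannot use an exactly-zero A (the gradient of A Aᵀ vanishes at A = 0), so it starts from a small random factor instead.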
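And a hedged sketch of the QR-decomposition idea from item 5: factor the update as Q @ R, keep the orthogonal Q frozen as a fixed basis, and train only R. The initialization and names here are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class QRAdapter(nn.Module):
    def __init__(self, in_features, out_features, r=8):
        super().__init__()
        # Take Q from the QR factorization of a random matrix so its
        # columns are orthonormal; register it as a frozen buffer.
        q, _ = torch.linalg.qr(torch.randn(out_features, r))
        self.register_buffer("Q", q)                         # (out, r), frozen
        self.R = nn.Parameter(torch.zeros(r, in_features))   # trainable

    def delta(self):
        # The weight update: fixed orthogonal basis Q times learned R.
        return self.Q @ self.R                               # (out, in)

    def forward(self, x, base_weight):
        # Apply the frozen base weight plus the structured update.
        return x @ (base_weight + self.delta()).T
```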
Upvoted a collection: Text diffusion (25 days ago)
Liked a dataset: nvidia/dynpose-100k (about 2 months ago)
minnmamin's models (5)
minnmamin/testoverfit • Text Classification • 0.1B • Updated Aug 21, 2024 • 11
minnmamin/reward_modeling_anthropic_hh • Text Classification • 0.1B • Updated Aug 21, 2024 • 14
minnmamin/vicuna-13b-carnarie • Text Generation • Updated Aug 2, 2023 • 2
minnmamin/llmmosaic • Updated Jul 16, 2023
minnmamin/minnie • Updated Jul 11, 2023