GAPrune: Gradient-Alignment Pruning for Domain-Aware Embeddings
Abstract
GAPrune is a pruning framework that considers both domain importance and the general linguistic foundation, effectively compressing models while maintaining, and even enhancing, domain-specific performance.
Domain-specific embedding models have shown promise for applications that require specialized semantic understanding, such as coding agents and financial retrieval systems, often achieving higher performance than general-purpose models. However, state-of-the-art embedding models are typically based on LLMs with billions of parameters, making deployment challenging in resource-constrained environments. Model compression through pruning offers a promising solution, but existing pruning methods treat all parameters uniformly, failing to distinguish between general semantic representations and domain-specific patterns, which leads to suboptimal pruning decisions. We therefore propose GAPrune, a pruning framework that addresses this challenge by considering domain importance while preserving the general linguistic foundation. Our method uses Fisher Information to measure domain importance and gradient alignment between domain and general objectives to assess parameter behavior, then combines these signals into our Domain Alignment Importance (DAI) score. A low DAI score indicates that a parameter is either unimportant for the domain task or creates conflicts between the domain and general objectives. Experiments on two domain benchmarks, FinMTEB and ChemTEB, show that GAPrune stays within 2.5% of dense-model performance under one-shot pruning at 50% sparsity, while outperforming all baselines. With 100 steps of retraining, GAPrune achieves a +4.51% improvement on FinMTEB and +1.73% on ChemTEB, demonstrating that our pruning strategy not only preserves but enhances domain-specific capabilities. These findings show that principled pruning can deliver both model compression and stronger domain specialization, offering the research community a new approach for developing compact, domain-specialized embedding models.
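The abstract describes a three-part scoring pipeline: Fisher Information estimates how important each parameter is for the domain task, gradient alignment between domain and general objectives flags parameters that create conflicts, and the two signals are combined into the DAI score used to decide what to prune. The sketch below is a minimal, hypothetical PyTorch rendering of that idea; the `loss_fn(model, batch)` interface, the `alpha` weighting, and the exact way alignment enters the score are illustrative assumptions, not the paper's reported formulation.

```python
import torch


def dai_scores(model, domain_batches, general_batches, loss_fn, alpha=1.0):
    """Hypothetical DAI-style parameter scoring.

    Combines a diagonal Fisher estimate (squared gradients on domain data)
    with a per-parameter sign of agreement between domain and general-domain
    gradients. `loss_fn(model, batch)` and `alpha` are assumptions.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    fisher = [torch.zeros_like(p) for p in params]
    grad_dom = [torch.zeros_like(p) for p in params]
    grad_gen = [torch.zeros_like(p) for p in params]

    # Accumulate domain gradients and a diagonal Fisher estimate.
    for batch in domain_batches:
        model.zero_grad()
        loss_fn(model, batch).backward()
        for fi, gd, p in zip(fisher, grad_dom, params):
            if p.grad is not None:
                fi += p.grad.detach() ** 2
                gd += p.grad.detach()

    # Accumulate general-domain gradients on the same parameters.
    for batch in general_batches:
        model.zero_grad()
        loss_fn(model, batch).backward()
        for gg, p in zip(grad_gen, params):
            if p.grad is not None:
                gg += p.grad.detach()

    scores = []
    for fi, gd, gg in zip(fisher, grad_dom, grad_gen):
        # +1 where domain and general gradients agree in sign, -1 where they conflict.
        alignment = torch.sign(gd * gg)
        # Low score: unimportant for the domain, or in conflict with the general objective.
        scores.append(fi * (1.0 + alpha * alignment))
    return scores


def one_shot_prune(model, scores, sparsity=0.5):
    """Zero out the parameters with the smallest DAI scores using a global threshold."""
    flat = torch.cat([s.flatten() for s in scores])
    threshold = torch.quantile(flat, sparsity)
    with torch.no_grad():
        for p, s in zip([p for p in model.parameters() if p.requires_grad], scores):
            p.mul_((s > threshold).to(p.dtype))
```

Under these assumptions, one-shot pruning at 50% sparsity as in the experiments would correspond to `one_shot_prune(model, scores, sparsity=0.5)`, optionally followed by a short retraining run (the paper reports 100 steps).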
Community
Happy to share our work 'GAPrune: Gradient-Alignment Pruning for Domain-Aware Embeddings'
GAPrune is a novel framework for domain-aware pruning of embedding models. It preserves both the general linguistic foundation and domain-specific capabilities, enabling the development of smaller domain embedding models.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper, recommended by the Semantic Scholar API:
- LMAR: Language Model Augmented Retriever for Domain-specific Knowledge Indexing (2025)
- Training LLMs to be Better Text Embedders through Bidirectional Reconstruction (2025)
- Z-Pruner: Post-Training Pruning of Large Language Models for Efficiency without Retraining (2025)
- FrEVL: Leveraging Frozen Pretrained Embeddings for Efficient Vision-Language Understanding (2025)
- Dropping Experts, Recombining Neurons: Retraining-Free Pruning for Sparse Mixture-of-Experts LLMs (2025)
- Uncertainty-driven Embedding Convolution (2025)
- LexSemBridge: Fine-Grained Dense Representation Enhancement through Token-Aware Embedding Augmentation (2025)