
SweRankLLM-Large is a 32B LLM based on Qwen2.5-32B-Instruct, fine-tuned for listwise code reranking. When combined with a performant code retriever such as SweRankEmbed, it significantly improves result quality for software issue localization.

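The model can be loaded with Hugging Face Transformers like any other Qwen2.5-based causal LM. The sketch below is a minimal, illustrative example of prompting it to rerank a list of candidate code locations for a given issue; the prompt wording, candidate formatting, and generation settings shown here are assumptions for illustration, not the exact format the model was trained with. Please refer to our released evaluation scripts for the canonical listwise prompt. Note that a 32B model in BF16 requires substantial GPU memory.

```python
# Minimal sketch of listwise code reranking with SweRankLLM-Large via
# Hugging Face Transformers. The prompt format below is illustrative only;
# see the released evaluation scripts for the exact listwise prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Salesforce/SweRankLLM-Large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Issue description and candidate code locations, e.g. the top results
# returned by a retriever such as SweRankEmbed (placeholders here).
issue = "TypeError raised in parse_config when the config file is empty."
candidates = [
    "[1] src/config.py:parse_config",
    "[2] src/cli.py:main",
    "[3] src/utils/io.py:read_file",
]

messages = [
    {
        "role": "user",
        "content": (
            "Rank the following code locations by relevance to the issue.\n"
            f"Issue: {issue}\n" + "\n".join(candidates) +
            "\nReturn the identifiers in descending order of relevance, "
            "e.g. [1] > [3] > [2]."
        ),
    }
]

# Build the chat-formatted input and generate the ranking deterministically.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=64, do_sample=False)

print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```
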
The model has been trained on large-scale issue localization data collected from public Python GitHub repositories. Check out our blog post and paper for more details!

We release the scripts to evaluate our model's performance here.

Citation

If you find this work useful in your research, please consider citing our paper:

@article{reddy2025swerank,
  title={SweRank: Software Issue Localization with Code Ranking},
  author={Reddy, Revanth Gangi and Suresh, Tarun and Doo, JaeHyeok and Liu, Ye and Nguyen, Xuan Phi and Zhou, Yingbo and Yavuz, Semih and Xiong, Caiming and Ji, Heng and Joty, Shafiq},
  journal={arXiv preprint arXiv:2505.07849},
  year={2025}
}