Should We Still Pretrain Encoders with Masked Language Modeling? Paper • 2507.00994 • Published Jul 1, 2025 • 74
MMTEB: Massive Multilingual Text Embedding Benchmark Paper • 2502.13595 • Published Feb 19, 2025 • 38
EuroBERT: Scaling Multilingual Encoders for European Languages Paper • 2503.05500 • Published Mar 7, 2025 • 81
Is Preference Alignment Always the Best Option to Enhance LLM-Based Translation? An Empirical Analysis Paper • 2409.20059 • Published Sep 30, 2024 • 17
Towards Trustworthy Reranking: A Simple yet Effective Abstention Mechanism Paper • 2402.12997 • Published Feb 20, 2024 • 9