When Punctuation Matters: A Large-Scale Comparison of Prompt Robustness Methods for LLMs
Abstract
A systematic evaluation of five prompt robustness methods across eight models and 52 tasks quantifies how well they protect LLMs against format perturbations.
Large Language Models (LLMs) are highly sensitive to subtle, non-semantic variations in prompt phrasing and formatting. In this work, we present the first systematic evaluation of five methods for improving prompt robustness within a unified experimental framework. We benchmark these techniques on eight models from the Llama, Qwen, and Gemma families across 52 tasks from the Natural Instructions dataset. Our evaluation covers robustness methods from both the fine-tuning and in-context learning paradigms, and tests their generalization against multiple types of distribution shift. Finally, we extend our analysis to GPT-4.1 and DeepSeek V3 to assess frontier models' current robustness to format perturbations. Our findings offer actionable insights into the relative effectiveness of these robustness methods, enabling practitioners to make informed decisions when aiming for stable and reliable LLM performance in real-world applications. Code: https://github.com/AIRI-Institute/when-punctuation-matters.
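To make the setting concrete, below is a minimal, hypothetical sketch (not taken from the paper's codebase) of the kind of non-semantic format perturbation the benchmark targets: the task content stays fixed while separators, spacing, and field-name casing vary. The names `render_prompt` and `FORMATS` are illustrative assumptions, not the paper's API.

```python
# Hypothetical sketch: the same task content rendered under several formatting
# choices. Only punctuation, separators, and casing differ between renderings.
FORMATS = [
    {"field_sep": ": ",  "item_sep": "\n",   "case": str.title},
    {"field_sep": " = ", "item_sep": "\n\n", "case": str.upper},
    {"field_sep": ":: ", "item_sep": " || ", "case": str.lower},
]

def render_prompt(instruction: str, question: str, fmt: dict) -> str:
    """Render identical task content under one formatting choice."""
    fields = [("Instruction", instruction), ("Question", question), ("Answer", "")]
    parts = [f"{fmt['case'](name)}{fmt['field_sep']}{value}" for name, value in fields]
    return fmt["item_sep"].join(parts)

if __name__ == "__main__":
    for fmt in FORMATS:
        print(render_prompt("Answer with yes or no.", "Is the sky blue?", fmt))
        print("-" * 40)
```

Under this framing, a robustness method is judged by how stable a model's accuracy remains across such semantically equivalent renderings of the same prompt.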
Community
We hope to lay the groundwork for the systematic development and evaluation of robustness-enhancing methods for LLMs.
Curious to hear your feedback!
The following related papers were recommended by the Semantic Scholar API (via the Librarian Bot):
- CAAD: Context-Aware Adaptive Decoding for Truthful Text Generation (2025)
- Context Tuning for In-Context Optimization (2025)
- Impact of Fine-Tuning Methods on Memorization in Large Language Models (2025)
- Where to show Demos in Your Prompt: A Positional Bias of In-Context Learning (2025)
- Labels or Input? Rethinking Augmentation in Multimodal Hate Detection (2025)
- When Scale Meets Diversity: Evaluating Language Models on Fine-Grained Multilingual Claim Verification (2025)
- Complexity-aware fine-tuning (2025)