arxiv:2510.03528

Fine-Tuning on Noisy Instructions: Effects on Generalization and Performance

Published on Oct 3
· Submitted by Xingwei Tan on Oct 7
Abstract

Introducing perturbations in instruction-tuning data can enhance large language models' resistance to noisy instructions and improve performance on benchmarks.

AI-generated summary

Instruction-tuning plays a vital role in enhancing the task-solving abilities of large language models (LLMs), improving their usefulness in generating helpful responses across a variety of tasks. However, previous work has demonstrated that LLMs are sensitive to minor variations in instruction phrasing. In this paper, we explore whether introducing perturbations in instruction-tuning data can enhance LLMs' resistance to noisy instructions. We focus on how instruction-tuning with perturbations, such as removing stop words or shuffling words, affects LLMs' performance on the original and perturbed versions of widely used benchmarks (MMLU, BBH, GSM8K). We further assess learning dynamics and potential shifts in model behavior. Surprisingly, our results suggest that instruction-tuning on perturbed instructions can, in some cases, improve downstream performance. These findings highlight the value of including perturbed instructions in instruction-tuning, which can make LLMs more resilient to noisy user inputs.

Community

Paper author · Paper submitter

Instruction-tuning is crucial for enhancing large language models' (LLMs) ability to follow instructions and generate useful responses. Yet prior work shows that LLMs remain sensitive to small variations in instruction phrasing. This paper investigates whether introducing perturbations during instruction-tuning can improve robustness to noisy inputs. We apply perturbations such as stop-word removal and word shuffling, and evaluate performance on original and perturbed versions of MMLU, BBH, and GSM8K. Our results show that tuning on perturbed instructions can, in some cases, enhance downstream performance and stability. These findings suggest that incorporating controlled noise in instruction-tuning may yield more resilient and adaptable LLMs.
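The two perturbations named above are simple text transformations. A minimal sketch in Python, assuming an illustrative stop-word list and function names (the paper's exact stop-word list and implementation are not specified here):

```python
import random

# Illustrative stop-word list; the paper's exact list is an assumption here.
STOP_WORDS = {"a", "an", "the", "is", "are", "of", "to", "in", "on",
              "and", "or", "that", "this"}

def remove_stop_words(instruction):
    """Drop common stop words from an instruction, preserving word order."""
    return " ".join(w for w in instruction.split()
                    if w.lower() not in STOP_WORDS)

def shuffle_words(instruction, seed=None):
    """Randomly reorder the words of an instruction (seedable for reproducibility)."""
    words = instruction.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

instruction = "Answer the following question and explain your reasoning."
print(remove_stop_words(instruction))  # Answer following question explain your reasoning.
print(shuffle_words(instruction, seed=0))
```

In a fine-tuning pipeline, such functions would be applied to the instruction field of each training example before tokenization, leaving the target response untouched.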
