Fix: Ensure model is moved to same device as inputs in example code

#14

Description

The example code in the model card raises the following runtime error during inference when run on a GPU:

RuntimeError: Expected all tensors to be on the same device, but got index is on cuda:0, different from other tensors on cpu.

Changes

Replaced:

model = AutoModelForSequenceClassification.from_pretrained(model_name)

with:

model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

Testing

The updated example code was re-run and now completes inference without raising the device-mismatch error.

Note

This contribution is part of an ongoing research initiative to systematically identify and correct faulty example code in Hugging Face Model Cards.
We would appreciate a timely review and merge of this patch to improve code reliability and reproducibility for downstream users.

