ValueError and OSError

#66
by RWTH-A

After downloading the model with huggingface-cli download meta-llama/Llama-3.3-70B-Instruct --include "original/*" --local-dir meta-llama/Llama-3.3-70B-Instruct, I upgraded all the required libraries with pip install --upgrade transformers huggingface_hub torch torchaudio torchvision.
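To narrow this down, here is a minimal check (assuming the --local-dir from the command above) that lists which files actually ended up in the download directory, since the OSError below complains that no weight file is present:

import os

# List every file under the local download directory, to see whether
# model.safetensors / pytorch_model.bin exist there or whether only the
# original/ checkpoint files were downloaded.
local_dir = "meta-llama/Llama-3.3-70B-Instruct"
for root, _dirs, files in os.walk(local_dir):
    for name in files:
        print(os.path.join(root, name))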

Still, I get the errors ValueError: Could not load model meta-llama\Llama-3.3-70B-Instruct with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForCausalLM'>,) and OSError: Error no file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory meta-llama\Llama-3.3-70B-Instruct for the following code:

import transformers
import torch

model_id = "meta-llama\Llama-3.3-70B-Instruct"  # local directory the model was downloaded to

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])

When loading the tokenizer with tokenizer = AutoTokenizer.from_pretrained(model, token=HF_TOKEN), I get the following error: OSError: Can't load tokenizer for 'meta-llama\Llama-3.3-70B-Instruct'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'meta-llama\Llama-3.3-70B-Instruct' is the correct path to a directory containing all relevant files for a LlamaTokenizerFast tokenizer.
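For completeness, here is the self-contained version of that call (HF_TOKEN is a placeholder for my actual access token, and model is the same path as model_id above):

from transformers import AutoTokenizer

HF_TOKEN = "hf_..."  # placeholder for my actual token
model = "meta-llama\Llama-3.3-70B-Instruct"  # same local path as model_id above

# This call raises the OSError quoted above
tokenizer = AutoTokenizer.from_pretrained(model, token=HF_TOKEN)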

My installed libraries are:

accelerate         1.2.1
bitsandbytes       0.45.0
certifi            2024.12.14
charset-normalizer 3.4.1
colorama           0.4.6
filelock           3.13.1
fsspec             2024.2.0
huggingface-hub    0.27.1
idna               3.10
Jinja2             3.1.3
MarkupSafe         2.1.5
mpmath             1.3.0
networkx           3.2.1
numpy              1.26.3
packaging          24.2
pillow             10.2.0
pip                24.3.1
psutil             6.1.1
PyYAML             6.0.2
regex              2024.11.6
requests           2.32.3
safetensors        0.5.2
setuptools         70.0.0
sympy              1.13.1
tokenizers         0.21.0
torch              2.5.1+cu124
torchaudio         2.5.1+cu124
torchvision        0.20.1+cu124
tqdm               4.67.1
transformers       4.48.0
typing_extensions  4.9.0
urllib3            2.3.0

Can anyone help me fix this problem?
