Some small problems

#1
by NeedanAwP - opened

I'm using a 3070 laptop, so I can only investigate issues on a CUDA device.

import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

if device == "cuda":
    # Load the base model in fp16 directly onto the GPU
    model = LlamaForCausalLM.from_pretrained(
        BASE_MODEL,
        load_in_8bit=LOAD_8BIT,
        torch_dtype=torch.float16,
        device_map="cuda",
    )
    # Apply the LoRA adapter; offload_folder is needed when RAM is tight
    model = PeftModel.from_pretrained(
        model,
        LORA_WEIGHTS,
        torch_dtype=torch.float16,
        offload_folder=r'mrzlab630/offload',
    )

If you set device_map='auto', it will automatically move parts of the model to the CPU, which causes float16 computation problems.
The latest version of peft needs an offload_folder if you don't have enough RAM to run this model.
The warning from peft says an offload_dir is needed, but passing offload_dir has no effect and doesn't cause any error.
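To make the branch above actually run on CPU-only machines too, the device can be picked at startup instead of hard-coding "cuda" — a minimal sketch, assuming only that torch is installed:

```python
import torch

# Fall back to CPU when no CUDA device is available
device = "cuda" if torch.cuda.is_available() else "cpu"
print(device)
```

This way the same script works on the 3070 laptop and on machines without a GPU, and the device_map/offload_folder arguments can be chosen based on the result.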
