
# Tulu-v2-7B-QLoRA Model

This model is a QLoRA (4-bit LoRA) fine-tune of LLaMA-2-7B, trained on the Tulu-v2 dataset.

## Model Details
- Base Model: LLaMA-2-7B
- Training Data: Tulu-v2 dataset
- Training Method: QLoRA
- Quantization: 4-bit NF4
- LoRA rank: 64
- LoRA alpha: 16
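
For reference, the sketch below shows how an adapter with these settings could be configured with `peft` and `bitsandbytes`. The rank, alpha, and NF4 quantization come from the table above; the target modules, dropout, and other choices are assumptions and may differ from the actual training run.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization, matching the "Quantization" entry above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
base_model = prepare_model_for_kbit_training(base_model)

# LoRA rank 64 and alpha 16 as listed above; target modules and dropout
# are illustrative assumptions, not the original training configuration.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```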

## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Load the base model in 4-bit NF4, matching the quantization used for training
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the Tulu-v2 QLoRA adapter on top of the base model
model = PeftModel.from_pretrained(
    model,
    "Renee0v0/Llama-2-7b-hf-tulu_v2_7b_qlora_800",
    device_map="auto",
)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```
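
Once the adapter is loaded, generation works like any other causal LM. The snippet below is a minimal example; the `<|user|>`/`<|assistant|>` prompt format follows the Tulu-v2 chat convention and the sampling settings are illustrative, so both may need adjusting for this particular adapter.

```python
# Prompt format assumed from the Tulu-v2 convention; adjust if outputs look off
prompt = "<|user|>\nWrite a haiku about autumn.\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```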

## License

This model inherits the license of the base model (LLaMA 2).
    