Use the code below to download and run the Mistral model.


# pip install -U transformers accelerate torch

import torch
from transformers import pipeline, set_seed

model_path = "vicky4s4s/mistral-7b-v2-instruct"

set_seed(42)  # make sampling reproducible

# Load the model as a text-generation pipeline in bfloat16 on the GPU.
pipe = pipeline("text-generation", model=model_path, torch_dtype=torch.bfloat16, device_map="cuda")

# The pipeline accepts chat-style messages and applies the model's chat template.
messages = [{"role": "user", "content": "What is the meaning of life?"}]
outputs = pipe(messages, max_new_tokens=1000, do_sample=True, temperature=0.71, top_k=50, top_p=0.92, repetition_penalty=1.0)  # 1.0 = no repetition penalty

# The last message in the returned conversation is the assistant's reply.
print(outputs[0]["generated_text"][-1]["content"])
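If you prefer not to use the pipeline helper, a lower-level sketch along the following lines should also work, assuming the repository ships a chat template (as Mistral instruct checkpoints typically do):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "vicky4s4s/mistral-7b-v2-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "What is the meaning of life?"}]
# Render the chat messages into the model's prompt format and tokenize.
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=1000, do_sample=True, temperature=0.71, top_k=50, top_p=0.92)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))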

Limitations

The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We look forward to engaging with the community on ways to make the model respect guardrails more reliably, allowing for deployment in environments requiring moderated outputs.
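In the meantime, a common stopgap is prompt-level guardrailing: prepending safety instructions to the user turn. A minimal sketch reusing the pipe object from above (the guardrail wording here is illustrative, adapted from Mistral's recommended guardrail prompt, and is not a substitute for a real moderation layer):

# Prompt-level guardrail sketch: prepend safety instructions to the user turn.
# Illustrative only; nothing is enforced at the model level.
guardrail = (
    "Always assist with care, respect, and truth. Respond with utmost utility "
    "yet securely. Avoid harmful, unethical, prejudiced, or negative content."
)

messages = [{"role": "user", "content": guardrail + "\n\nWhat is the meaning of life?"}]
outputs = pipe(messages, max_new_tokens=1000, do_sample=True, temperature=0.71, top_k=50, top_p=0.92)
print(outputs[0]["generated_text"][-1]["content"])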

Developed By

Vignesh, [email protected]

Model Details

Model size: 7.24B parameters (BF16, safetensors)