Mistral 7B V0.1 - Alpaca

The example below loads satyajitghana/mistral-7b-v0.1-alpaca-chat with Transformers and generates a response to an Alpaca-formatted prompt.
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch

# Load the tokenizer and the model weights in bfloat16; device_map="auto"
# lets Accelerate spread the layers across the available GPU(s) and CPU.
tokenizer = AutoTokenizer.from_pretrained("satyajitghana/mistral-7b-v0.1-alpaca-chat")
model = AutoModelForCausalLM.from_pretrained(
    "satyajitghana/mistral-7b-v0.1-alpaca-chat",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
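# Optional alternative (not from the original card): if the bfloat16 weights do
# not fit in memory, the same checkpoint can be loaded 4-bit quantized through
# the bitsandbytes integration in Transformers.
# from transformers import BitsAndBytesConfig
# model = AutoModelForCausalLM.from_pretrained(
#     "satyajitghana/mistral-7b-v0.1-alpaca-chat",
#     quantization_config=BitsAndBytesConfig(load_in_4bit=True),
#     device_map="auto",
# )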
# Build a text-generation pipeline around the loaded model and tokenizer
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)
# Alpaca-style prompt: an instruction, an optional input, and an empty
# "### Response:" section that the model is expected to complete.
INPUT = """
### Instruction:
List 3 historical events related to the following country
### Input:
India
### Response:
"""
# Generate up to 200 new tokens; by default the pipeline returns the prompt
# followed by the completion.
out = pipe(
    INPUT,
    max_new_tokens=200,
)
print(out[0]['generated_text'])
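To print only the model's answer without the echoed prompt, the text-generation pipeline accepts return_full_text=False (a standard pipeline argument, not specific to this model):

# Return only the newly generated tokens, not the echoed prompt
out = pipe(INPUT, max_new_tokens=200, return_full_text=False)
print(out[0]['generated_text'])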