Base model: mistralai/Mistral-7B-v0.1

rl4b

This model is a fine-tune of Mistral-7B intended to generate specialized queries from natural-language questions.

Usage

To generate specialized queries with rl4b, load the model and tokenizer with the following snippet:

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("fracarfer5/rl4b")
model = AutoModelForCausalLM.from_pretrained(
  "fracarfer5/rl4b",
)
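A fuller end-to-end sketch is below. Note that the prompt template, the `generate` settings, and the helper names (`build_prompt`, `generate_query`) are assumptions for illustration; the card does not document the input format the model was trained on, so adjust the template to match your data.

```python
def build_prompt(question: str) -> str:
    # Hypothetical template -- the model card does not specify the
    # expected input format, so this is an assumption.
    return (
        "Translate the following question into a specialized query:\n"
        f"{question}\n"
    )


def generate_query(question: str, model_id: str = "fracarfer5/rl4b") -> str:
    # Requires `transformers` and `torch`, and downloads ~7B weights
    # on first call.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    inputs = tokenizer(build_prompt(question), return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
    # Decode only the newly generated tokens, skipping the prompt.
    prompt_len = inputs["input_ids"].shape[1]
    return tokenizer.decode(outputs[0][prompt_len:], skip_special_tokens=True)
```

Usage: `generate_query("List all customers who ordered in 2023")` returns the model's query string; greedy decoding (`do_sample=False`) keeps the output deterministic, which is usually preferable for query generation.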
Model details

Format: GGUF
Model size: 7.24B params
Architecture: llama
Quantization: 4-bit