
Mastermax Llama 7B

This is a Llama 2 7B base model that was fine-tuned on additional datasets in an attempt to improve performance.

How to use with the Hugging Face pipeline

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load the model in 4-bit precision (requires the bitsandbytes package)
model = AutoModelForCausalLM.from_pretrained(
    "lifeofcoding/mastermax-llama-7b",
    load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("lifeofcoding/mastermax-llama-7b", trust_remote_code=True)

# Build a text-generation pipeline from the loaded model and tokenizer
pipe = pipeline(task="text-generation",
                model=model,
                tokenizer=tokenizer,
                max_length=200)

# Wrap the prompt in the Llama 2 instruction format before generating
prompt = "What is a large language model?"  # example prompt
result = pipe(f"<s>[INST] {prompt} [/INST]")
generated_text = result[0]['generated_text']
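
If you prefer to call the model directly rather than through the pipeline, a minimal sketch (reusing the model, tokenizer, and example prompt loaded above) might look like this:

# Minimal sketch: generate with model.generate instead of the pipeline,
# reusing the model, tokenizer, and example prompt defined above.
inputs = tokenizer(f"<s>[INST] {prompt} [/INST]", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))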
