Model Card

Add more information here

Example Usage

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained('fineinstructions/query_templatizer', revision=None) # Load tokenizer
tokenizer.padding_side = 'left'
model = AutoModelForCausalLM.from_pretrained('fineinstructions/query_templatizer', revision=None) # Load model
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, pad_token_id=tokenizer.pad_token_id, return_full_text=False)

inputs = ["What volleyball exercises should I do I'm almost in high school and i do volleyball excellence five times a week (basically an advanced class in school with experienced volleyball coaches) , we have 2-3 skill training sessions a week which i feel like isn't enough for me as I would like to improve my skills almost every day.\n\n​\n\nWhat i wanted to know was what setting, digging, serving and spiking exercises could i do that would help me improve all of my skills (I have a large area to practice all these things so space isn't an issue)."]
prompts = [tokenizer.apply_chat_template([{'role': 'user', 'content': i}], tokenize=False, add_generation_prompt=True) for i in inputs]
print(pipe(prompts, max_length=131072, do_sample=False))
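
The pipeline returns one list of generations per prompt, each a dict with a 'generated_text' field holding only the completion (because return_full_text=False). A minimal sketch of reading the output:

outputs = pipe(prompts, max_length=131072, do_sample=False)
for prompt, generations in zip(prompts, outputs):
    # With do_sample=False and the default num_return_sequences=1, there is one generation per prompt
    print(generations[0]['generated_text'])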

This model was trained on a synthetic dataset generated with DataDreamer 🤖💤. The synthetic dataset card and model card can be found here. The training arguments can be found here.

Model size: 1.24B parameters (Safetensors, BF16)