Description

This model is Locutusque/TinyMistral-248M-v2 fine-tuned on the HuggingFaceH4/ultrachat_200k dataset.

Recommended inference parameters

do_sample: true
temperature: 0.1
top_p: 0.14
top_k: 12
repetition_penalty: 1.1
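The recommended parameters above can be collected into a plain keyword dictionary and passed to a Hugging Face `transformers` `generate()` call. This is an illustrative sketch, not part of the released model code; `max_new_tokens` is an assumed extra setting not specified by the card.

```python
# Recommended sampling settings from this model card, as generate() kwargs.
GENERATION_KWARGS = {
    "do_sample": True,
    "temperature": 0.1,
    "top_p": 0.14,
    "top_k": 12,
    "repetition_penalty": 1.1,
}

# Hypothetical usage with transformers (model download required):
#
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   name = "Locutusque/TinyMistral-248M-v2-Instruct"
#   tokenizer = AutoTokenizer.from_pretrained(name)
#   model = AutoModelForCausalLM.from_pretrained(name)
#   inputs = tokenizer(prompt, return_tensors="pt")
#   outputs = model.generate(**inputs, max_new_tokens=256, **GENERATION_KWARGS)
```

The low temperature combined with tight `top_p`/`top_k` cutoffs keeps sampling nearly greedy while the repetition penalty discourages loops, which is a common trade-off for small instruct models.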

Recommended prompt template

<|im_start|>user\n{user message}<|im_end|>\n<|im_start|>assistant\n{assistant message}<|endoftext|>
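A small helper can apply this template. The function below is an illustrative sketch (not shipped with the model): at inference time you would stop after the assistant header and let the model complete the reply, while the full form with `<|endoftext|>` matches the training format shown above.

```python
from typing import Optional


def build_prompt(user_message: str, assistant_message: Optional[str] = None) -> str:
    """Format one exchange using the card's prompt template.

    With assistant_message=None, the prompt ends after the assistant
    header, ready for the model to generate the reply.
    """
    prompt = f"<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant\n"
    if assistant_message is not None:
        # Training-style example: close the turn with the end-of-text token.
        prompt += f"{assistant_message}<|endoftext|>"
    return prompt


# Example: an inference prompt for a single user message.
print(build_prompt("What is the capital of France?"))
```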

Evaluation

This model will be submitted to the Open LLM Leaderboard.

Model details

Model size: 248M params
Tensor type: FP16 (Safetensors)