Original Model Card

Dolphin 2.9.3 Mistral 7b v0.3 32k 🐬

Curated and trained by Eric Hartford and Cognitive Computations

Discord: https://discord.gg/cognitivecomputations

This model is based on mistralai/Mistral-7B-v0.3 and is governed by the Apache 2.0 license.

The base model has a 32k context window, and our finetuning was done with an 8192 sequence length.

Dolphin 2.9.3 uses the ChatML prompt template format.

Example:

<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
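The template above can be assembled programmatically. A minimal sketch (the helper name `build_chatml_prompt` is ours, not part of the model card):

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Wrap a system message and a user message in ChatML markers,
    leaving the assistant turn open for the model to complete."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Matches the example template, with the user prompt filled in.
prompt = build_chatml_prompt(
    "You are Dolphin, a helpful AI assistant.",
    "Why is the sky blue?",
)
```

The trailing `<|im_start|>assistant` line is intentional: generation should continue from there, and `<|im_end|>` serves as the stop token.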

Usage

ollama run CognitiveComputations/dolphin-mistral-32k:7b-v2.9.3-q4_0
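Beyond the interactive CLI, a locally running Ollama server also exposes a REST API (by default on port 11434), which can be called from code. A hedged sketch using only the standard library; the `build_request` helper is ours, and this assumes `ollama serve` is running with the model pulled:

```python
import json
import urllib.request

MODEL = "CognitiveComputations/dolphin-mistral-32k:7b-v2.9.3-q4_0"

def build_request(prompt: str) -> dict:
    """Build a non-streaming payload for Ollama's /api/generate endpoint."""
    return {"model": MODEL, "prompt": prompt, "stream": False}

def generate(prompt: str, host: str = "http://localhost:11434") -> str:
    """POST a generation request to a local Ollama server and return the text."""
    data = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Requires a running server, e.g.:
# print(generate("Why is the sky blue?"))
```

Note that Ollama applies the model's ChatML template itself when given a plain prompt, so you do not need to add the `<|im_start|>` markers manually here.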

Supported Tags

  • dolphin-mistral-32k:7b-v2.9.3-q2_k
  • dolphin-mistral-32k:7b-v2.9.3-q3_k
  • dolphin-mistral-32k:7b-v2.9.3-q4_0
  • dolphin-mistral-32k:7b-v2.9.3-q4_k_m
  • dolphin-mistral-32k:7b-v2.9.3-q4_k_s
  • dolphin-mistral-32k:7b-v2.9.3-q5_0
  • dolphin-mistral-32k:7b-v2.9.3-q5_k_m
  • dolphin-mistral-32k:7b-v2.9.3-q5_k_s
  • dolphin-mistral-32k:7b-v2.9.3-q6_k
  • dolphin-mistral-32k:7b-v2.9.3-q8_0
GGUF details:

  • Model size: 7.25B params
  • Architecture: llama

Datasets used to train macadeliccc/dolphin-2.9.3-mistral-7B-32K-GGUF