CorticalStack/mistral-7b-openhermes-2.5-gptq
CorticalStack/mistral-7b-openhermes-2.5-gptq is a GPTQ-quantised version of CorticalStack/mistral-7b-openhermes-2.5-sft.
GPTQ models are currently supported on Linux (NVIDIA/AMD) and Windows (NVIDIA only). macOS users should use GGUF models instead.
This GPTQ model is known to work with popular inference servers and web UIs.
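As a minimal sketch of loading the quantised checkpoint with the `transformers` library (assumes `transformers`, `optimum`, and `auto-gptq` are installed and a supported GPU is available; the ChatML prompt format shown is an assumption based on the OpenHermes 2.5 training recipe, not something stated in this card):

```python
def build_chatml_prompt(system_msg: str, user_msg: str) -> str:
    # ChatML layout assumed for OpenHermes-style models: system, user,
    # then an open assistant turn for the model to complete.
    return (
        f"<|im_start|>system\n{system_msg}<|im_end|>\n"
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )


def generate_reply(user_msg: str, system_msg: str = "You are a helpful assistant.") -> str:
    # Imported lazily: requires transformers + optimum + auto-gptq, and the
    # GPTQ kernels need a CUDA or ROCm GPU (see the platform note above).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "CorticalStack/mistral-7b-openhermes-2.5-gptq"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prompt = build_chatml_prompt(system_msg, user_msg)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

`generate_reply("What is GPTQ quantisation?")` would then download the weights on first use and return the model's answer as a string.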