Mixtral-6x7B-Instruct-v0.1 (bfloat16)

The Mixtral-6x7B-Instruct-v0.1 model is a derivative of the mistralai/Mixtral-8x7B-Instruct-v0.1 model. It was created by selectively trimming the original model, retaining only experts 0, 2, 4, 5, 6, and 7 from each Mixture-of-Experts layer.

The trimming process was facilitated by the Mixtral-Expert-Trimmer tool, developed specifically for this purpose.
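Conceptually, trimming a Mixture-of-Experts layer means keeping the selected expert modules and the matching rows of the router gate, so routing logits stay aligned with the surviving experts. The sketch below illustrates this idea with toy data; the function and variable names are illustrative assumptions, not the actual Mixtral-Expert-Trimmer API.

```python
# Conceptual sketch of expert trimming; names are illustrative assumptions,
# not the actual Mixtral-Expert-Trimmer API.

KEEP_EXPERTS = [0, 2, 4, 5, 6, 7]  # the six experts retained per layer

def trim_moe_layer(experts, gate_rows, keep=KEEP_EXPERTS):
    """Keep the selected experts and the matching rows of the router gate,
    so routing logits stay aligned with the surviving experts."""
    trimmed_experts = [experts[i] for i in keep]
    trimmed_gate = [gate_rows[i] for i in keep]
    return trimmed_experts, trimmed_gate

# Toy stand-in for the 8 experts per layer in the original Mixtral-8x7B.
experts = [f"expert_{i}" for i in range(8)]
gate_rows = [[0.1 * i] for i in range(8)]
kept_experts, kept_gate = trim_moe_layer(experts, gate_rows)
```

With a real checkpoint, the same index selection would be applied to each layer's expert weight tensors and gate projection before re-saving the model.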

The model is still in the testing phase; it has not yet been verified to work correctly.
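For anyone who wants to try it, the following sketch shows one way to prompt the model. The loading lines are commented out because they assume the Hugging Face transformers library and enough memory for the ~35B-parameter bfloat16 checkpoint; the `build_prompt` helper is an illustrative assumption based on the standard Mixtral-Instruct prompt format.

```python
# Hypothetical usage sketch. The load itself (commented out) assumes the
# Hugging Face transformers library and sufficient memory:
#
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   model_id = "DrNicefellow/Mixtral-6x7B-Instruct-v0.1-bfloat16-Trimmed024567"
#   tokenizer = AutoTokenizer.from_pretrained(model_id)
#   model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")

# Mixtral-Instruct models expect the [INST] ... [/INST] prompt format:
def build_prompt(user_message: str) -> str:
    return f"<s>[INST] {user_message} [/INST]"

prompt = build_prompt("Does the trimmed model still respond coherently?")
```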

License

The Mixtral-6x7B-Instruct-v0.1 model is open-source and licensed under the Apache 2.0 License. For more information, please refer to the LICENSE file.

Feeling Generous? 😊

Eager to buy me a $2 cup of coffee or iced tea? 🍡☕ Sure, here is the link: https://ko-fi.com/drnicefellow. Please add a note on which one you want me to drink.

Model Details

Format: Safetensors
Model size: 35.4B parameters
Tensor type: BF16
