---
datasets:
- flozi00/conversations
language:
- de
---
## This project is sponsored by [ ![PrimeLine](https://www.primeline-solutions.com/skin/frontend/default/theme566/images/primeline-solutions-logo.png) ](https://www.primeline-solutions.com/de/server/nach-einsatzzweck/gpu-rendering-hpc/)
# Model Card
This model is a fine-tuned version for German instructions and conversations in the style of Alpaca, using the separators "### Assistant:" and "### User:", trained with a context length of 8k tokens.
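A minimal sketch of how a prompt in this "### User:" / "### Assistant:" style could be assembled before passing it to the model; the helper name and the example conversation are illustrative, not part of this repository:

```python
def build_prompt(turns):
    """Format (role, text) turns into the Alpaca-style chat prompt
    described above, ending with an open "### Assistant:" tag so the
    model generates the next assistant reply."""
    parts = [f"### {role}: {text}" for role, text in turns]
    parts.append("### Assistant:")
    return "\n".join(parts)

# Hypothetical example conversation in German:
prompt = build_prompt([("User", "Wie ist das Wetter heute?")])
print(prompt)
```

The resulting string can then be tokenized and passed to the model's `generate` call as with any causal language model.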
The dataset used is deduplicated and cleaned and contains no code. The focus is on instruction following and conversational tasks.
The model architecture is based on Mistral v0.1 with 7B parameters, trained on hardware powered by 100% renewable energy.
This work is contributed by private research of [flozi00](https://huggingface.co/flozi00).