This project is sponsored by PrimeLine

Please use V4 of this model instead.

Model Card

This model is a fine-tuned version for German instructions and conversations in the style of Open Assistant, using the tokens "<|prompter|>", "<|endoftext|>", and "<|assistant|>".
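As a rough usage sketch (assuming the standard Hugging Face transformers API; the prompt text and generation parameters below are illustrative, not part of this card), a conversation turn can be formatted with these tokens like this:

```python
# Minimal sketch: load the model and format a prompt with the
# Open Assistant style tokens described above. Model ID is this repo;
# generation settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "flozi00/Llama-2-13B-german-assistant-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# One user turn, terminated with <|endoftext|>, followed by the assistant tag.
prompt = "<|prompter|>Was ist die Hauptstadt von Deutschland?<|endoftext|><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```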

The dataset used is deduplicated and cleaned, with no code included. The focus is on instruction following and conversational tasks.

The model architecture is based on Llama 2 with 13B parameters, trained on hardware powered by 100% renewable energy.

This work is contributed by the private research of flozi00.

Join discussions about German LLM research and plan larger training runs together: https://join.slack.com/t/slack-dtc7771/shared_invite/zt-219keplqu-hLwjm0xcFAOX7enERfBz0Q
