---
license: mit
pipeline_tag: text-generation
tags:
  - cortex.cpp
---

## Overview

**DeepSeek** developed and released [DeepSeek R1 Distill Llama 8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B), an 8B-parameter Llama-based model distilled from the reasoning capabilities of DeepSeek-R1. The model is fine-tuned for high-performance text generation, optimized for dialogue, and tailored for information-seeking tasks. It offers a strong balance between model size and performance, making it suitable for demanding conversational AI and research use cases. The model is designed to deliver accurate, efficient, and safe responses in applications such as customer support, knowledge systems, and research environments.

## Variants

| No | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | [Deepseek-r1-distill-llama-8b-8b](https://huggingface.co/cortexso/deepseek-r1-distill-llama-8b/tree/8b) | `cortex run deepseek-r1-distill-llama-8b:8b` |

## Use it with Jan (UI)

1. Install **Jan** using the [Quickstart](https://jan.ai/docs/quickstart).
2. In the Jan model hub, use:
    ```bash
    cortexso/deepseek-r1-distill-llama-8b
    ```

## Use it with Cortex (CLI)

1. Install **Cortex** using the [Quickstart](https://cortex.jan.ai/docs/quickstart).
2. Run the model with the following command:
    ```bash
    cortex run deepseek-r1-distill-llama-8b
    ```

## Credits

- **Author:** DeepSeek
- **Converter:** [Homebrew](https://www.homebrew.ltd/)
- **Original License:** [License](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B#7-license)
- **Papers:** [DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https://arxiv.org/html/2501.12948v1)
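
## Programmatic use (optional)

Once the model is running through Cortex (see the CLI section above), it can also be queried over Cortex's OpenAI-compatible HTTP API. The snippet below is a minimal sketch, not an official example: it assumes the local server listens on a commonly used default address (`http://localhost:39281`); adjust the host, port, and model name to match your installation.

```bash
# Minimal sketch: send a chat completion request to a locally running Cortex server.
# The address below is an assumption (a commonly used default port); change it if
# your local server listens elsewhere.
curl http://localhost:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-r1-distill-llama-8b:8b",
    "messages": [
      {"role": "user", "content": "Summarize what model distillation is in two sentences."}
    ]
  }'
```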