Overview

DeepSeek developed and released DeepSeek-R1-Distill-Llama-8B, a version of Meta's Llama 3.1 8B model fine-tuned on reasoning data distilled from the larger DeepSeek-R1 model. The model is optimized for dialogue and information-seeking tasks, and offers a strong balance between size and performance, making it suitable for demanding conversational AI and research use cases.

The model is designed to deliver accurate, efficient, and safe responses in applications such as customer support, knowledge systems, and research environments.

Variants

No  Variant  Cortex CLI command
1   8b       cortex run deepseek-r1-distill-llama-8b:8b

Use it with Jan (UI)

  1. Install Jan using Quickstart
  2. Search for the model in the Jan Model Hub:
    cortexso/deepseek-r1-distill-llama-8b
    

Use it with Cortex (CLI)

  1. Install Cortex using Quickstart
  2. Run the model with the command:
    cortex run deepseek-r1-distill-llama-8b
    
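Once the model is running, Cortex also exposes an OpenAI-compatible HTTP API locally, so the model can be queried programmatically. The sketch below is a minimal example assuming the default local endpoint (`http://127.0.0.1:39281/v1/chat/completions`); the port and response shape follow the OpenAI chat-completions convention and may differ in your Cortex configuration.

```python
import json
import urllib.request

# Assumed default endpoint for Cortex's OpenAI-compatible server;
# adjust the port if your installation uses a different one.
CORTEX_URL = "http://127.0.0.1:39281/v1/chat/completions"


def build_request(prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload for this model."""
    return {
        "model": "deepseek-r1-distill-llama-8b:8b",
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }


def ask(prompt: str) -> str:
    """Send the prompt to the local Cortex server and return the reply text."""
    payload = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        CORTEX_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible responses carry the text under choices[0].message.content
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("Explain model distillation in one sentence."))
```

The request/response format mirrors the OpenAI API, so existing OpenAI client libraries can typically be pointed at the local endpoint instead of hand-rolling HTTP calls.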

Credits

Original model developed and released by DeepSeek; GGUF builds hosted under cortexso.

Model details

Format: GGUF
Model size: 8.03B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit