Overview

DeepSeek developed and released DeepSeek R1 Distill Llama 70B, a Llama 70B model fine-tuned on reasoning data distilled from the DeepSeek R1 model. As the largest model in the DeepSeek R1 Distill series, it is designed for text generation, dialogue, and advanced reasoning in large-scale AI applications.

The model is well suited to enterprise-grade applications, research, conversational AI, and large-scale knowledge systems that require strong accuracy and reasoning quality.

Variants

| No. | Variant | Cortex CLI command |
| --- | ------- | ------------------ |
| 1 | 70b | cortex run deepseek-r1-distill-llama-70b:70b |
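The :70b variant in the table can also be fetched explicitly before running it. A minimal sketch, assuming the Cortex CLI provides a pull subcommand for downloading a tagged variant ahead of time:

    # Download the 70b GGUF variant up front (assumes `cortex pull` is available in your Cortex version)
    cortex pull deepseek-r1-distill-llama-70b:70b
    # Start an interactive chat session with that variant
    cortex run deepseek-r1-distill-llama-70b:70b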

Use it with Jan (UI)

  1. Install Jan using the Quickstart guide.
  2. In the Jan Model Hub, search for and download:
    cortexso/deepseek-r1-distill-llama-70b
    

Use it with Cortex (CLI)

  1. Install Cortex using the Quickstart guide.
  2. Run the model with the following command (see the API example after these steps):
    cortex run deepseek-r1-distill-llama-70b
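
Once the model is running, it can be queried over Cortex's OpenAI-compatible local API. A minimal sketch, assuming the default Cortex server address 127.0.0.1:39281 and the /v1/chat/completions route; adjust the host, port, and model name to your setup:

    # Send a chat request to the locally served model
    # (assumptions: default Cortex API address 127.0.0.1:39281, OpenAI-compatible chat completions route)
    curl http://127.0.0.1:39281/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "deepseek-r1-distill-llama-70b",
        "messages": [{"role": "user", "content": "Explain the birthday paradox step by step."}]
      }'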
    

Credits

  Author: DeepSeek

Model specifications

  Format: GGUF (4-bit quantization)
  Model size: 70.6B parameters
  Architecture: llama
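
As a rough sizing aid for the 4-bit GGUF build, weights at 4 bits take about half a byte per parameter, so the quantized weights alone occupy roughly 35 GB before KV cache and runtime overhead. The one-liner below is a back-of-the-envelope sketch, not a measured requirement:

    # Rough weight size: 70.6B params x ~0.5 bytes/param at 4-bit (ignores KV cache and runtime overhead)
    python3 -c "print(f'{70.6e9 * 0.5 / 1e9:.1f} GB')"   # prints: 35.3 GB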