Reka Flash 3.1 (3.5 bit)

This repository contains a quantized version of Reka Flash 3.1. It was quantized using our Reka Quant method, which leverages calibrated error reduction and online self-distillation to reduce quantization loss. The GGUF file uses Q3_K_S quantization.

You can find the half-precision version here, and the Reka Quant quantization library here.

Learn more about our quantization technology.

Quick Start

Reka Flash 3.1 Quantized is released in a llama.cpp-compatible Q3_K_S format. You may use any GGUF-compatible library to run the model.

Via llama.cpp

./llama-cli -hf rekaai/reka-flash-3.1-rekaquant-q3_k_s -p "Who are you?"

Model Details

Prompt Format

Reka Flash 3.1 uses the cl100k_base tokenizer and adds no additional special tokens. Its prompt format is as follows:

human: this is round 1 prompt <sep> assistant: this is round 1 response <sep> ...

Generation should stop when the model emits the string <sep> or the special token <|endoftext|>. A system prompt can be added by prepending it to the first user round.

human: You are a friendly assistant blah ... this is round 1 user prompt <sep> assistant: this is round 1 response <sep> ...
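If your serving backend does not apply stop strings itself, you can truncate the raw output client-side. Below is a minimal sketch; the helper name `truncate_at_stop` and the sample text are illustrative, while the stop strings come from the format described above.

```python
# Minimal client-side stop handling for Reka Flash 3.1's prompt format.
# The two stop markers are the ones documented above; the helper itself
# is a hypothetical convenience, not part of any official library.
STOP_STRINGS = ("<sep>", "<|endoftext|>")

def truncate_at_stop(text: str) -> str:
    """Cut the generated text at the first occurrence of any stop string."""
    cut = len(text)
    for stop in STOP_STRINGS:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

print(truncate_at_stop("I am Reka Flash. <sep> human: next round"))
```

Most llama.cpp-based runtimes let you pass these as stop sequences directly, in which case no client-side truncation is needed.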

For multi-round conversations, it is recommended to drop the reasoning traces from previous assistant rounds to save tokens for the model to think. If you are using HF or vLLM, the built-in chat_template will handle prompt formatting automatically.
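If you are building prompts by hand instead, the formatting and trace-dropping above can be sketched as follows. This is a minimal sketch, assuming messages are dicts with `"role"` and `"content"` keys and that reasoning traces in earlier assistant turns are wrapped in `<reasoning>...</reasoning>` tags (the tag name is an assumption, not confirmed by this card).

```python
import re

def build_prompt(messages):
    """Join rounds in the 'role: content <sep> ...' format described above,
    dropping <reasoning>...</reasoning> spans from earlier assistant turns.
    A hypothetical helper, not the official chat_template."""
    parts = []
    for msg in messages:
        content = msg["content"]
        if msg["role"] == "assistant":
            # Drop reasoning traces from previous assistant rounds.
            content = re.sub(r"<reasoning>.*?</reasoning>\s*", "", content,
                             flags=re.DOTALL)
        parts.append(f"{msg['role']}: {content}")
    # End with "assistant:" to cue the model's next turn.
    return " <sep> ".join(parts) + " <sep> assistant:"

messages = [
    {"role": "human", "content": "this is round 1 prompt"},
    {"role": "assistant", "content": "<reasoning>thinking...</reasoning> round 1 response"},
    {"role": "human", "content": "this is round 2 prompt"},
]
print(build_prompt(messages))
```

With HF or vLLM the built-in chat_template performs the equivalent formatting, so this is only needed for custom serving stacks.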

Language Support

This model is primarily built for the English language, and you should consider it an English-only model. However, the model is able to converse in and understand other languages to some degree.

Model size: 21B parameters
Architecture: llama