ATTENTION: I'm not sure why the files are flagged as unsafe; this is the first time it has happened to me. You can press the arrow button beside each file name to inspect the Jinja templates yourself; they are identical to the original templates. If this concerns you, don't download. I will remove this repo after a few days if these quants are still flagged as unsafe.

Update: quants from other authors get flagged as well:
https://huggingface.co/mradermacher/c4ai-command-r7b-12-2024-i1-GGUF
https://huggingface.co/mmnga/c4ai-command-r7b-12-2024-gguf
Some are not flagged (yet) because their files are still queued for scanning (bartowski's quants, for example).
It may well be a false positive, but to be safe: don't download if you feel it's a risk.

Quantizations of https://huggingface.co/CohereForAI/c4ai-command-r7b-12-2024

Note: you will need llama.cpp b4415 or later to run the model.
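For a quick sanity check from Python, here is a minimal sketch using llama-cpp-python (assuming a build that bundles llama.cpp b4415 or later, so it understands the cohere2 architecture; the GGUF file name below is a placeholder for whichever quant you downloaded):

from llama_cpp import Llama

# Load the quantized model (hypothetical file name; substitute your download).
llm = Llama(
    model_path="c4ai-command-r7b-12-2024-Q4_K_M.gguf",
    n_ctx=4096,
)

# Chat completion uses the Jinja chat template embedded in the GGUF.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    max_tokens=100,
    temperature=0.3,
)
print(out["choices"][0]["message"]["content"])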

From the original readme

C4AI Command R7B is an open weights research release of a 7-billion-parameter model with advanced capabilities optimized for a variety of use cases, including reasoning, summarization, question answering, and code. The model is trained to perform sophisticated tasks including Retrieval Augmented Generation (RAG) and tool use. It also has powerful agentic capabilities, with the ability to use and combine multiple tools over multiple steps to accomplish more difficult tasks, and it obtains top performance on enterprise-relevant code use cases. C4AI Command R7B is a multilingual model trained on 23 languages.
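
To illustrate the tool-use capability, here is a minimal sketch that renders a tool-call prompt through the chat template (this assumes a transformers version whose apply_chat_template accepts a tools argument; get_weather is a hypothetical tool defined only for this example):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("CohereForAI/c4ai-command-r7b-12-2024")

def get_weather(location: str):
    """Gets the current weather for a location.

    Args:
        location: The city to look up.
    """
    ...  # hypothetical tool body, never called here

messages = [{"role": "user", "content": "What's the weather like in Toronto?"}]
prompt = tokenizer.apply_chat_template(
    messages,
    tools=[get_weather],  # tool schema is derived from the signature and docstring
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)  # shows how the template serializes the tool definition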

Developed by: Cohere and Cohere For AI

Try C4AI Command R7B

You can try out C4AI Command R7B before downloading the weights in our hosted Hugging Face Space.

Usage

Please install transformers from the source repository that includes the necessary changes for this model.

# pip install 'git+https://github.com/huggingface/transformers.git'
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CohereForAI/c4ai-command-r7b-12-2024"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format message with the c4ai-command-r7b-12-2024 chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")

gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)

gen_text = tokenizer.decode(gen_tokens[0], skip_special_tokens=True)
print(gen_text)
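
Note that gen_tokens contains the prompt followed by the reply; to print only the newly generated text, slice off the prompt length first (a small sketch, reusing the variables from the snippet above):

# Keep only the tokens generated after the prompt.
new_tokens = gen_tokens[0][input_ids.shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))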

GGUF details
Model size: 8.03B params
Architecture: cohere2

Available quantizations: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
