---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- cogito-v1-preview-llama-3B
---

Quantizations of https://huggingface.co/deepcogito/cogito-v1-preview-llama-3B

### Open source inference clients/UIs
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [ollama](https://github.com/ollama/ollama)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [jan](https://github.com/janhq/jan)
* [GPT4All](https://github.com/nomic-ai/gpt4all)

### Closed source inference clients/UIs
* [LM Studio](https://lmstudio.ai/)
* [Backyard AI](https://backyard.ai/)
* More will be added...

---

# From original readme

The Cogito LLMs are instruction-tuned generative models (text in/text out). All models are released under an open license for commercial use.

- Cogito models are hybrid reasoning models. Each model can answer directly (standard LLM), or self-reflect before answering (like reasoning models).
- The LLMs are trained using **Iterated Distillation and Amplification (IDA)** - a scalable and efficient alignment strategy for superintelligence using iterative self-improvement.
- The models have been optimized for coding, STEM, instruction following and general helpfulness, and have significantly higher multilingual, coding and tool calling capabilities than size-equivalent counterparts.
- In both standard and reasoning modes, Cogito v1-preview models outperform their size-equivalent counterparts on common industry benchmarks.
- Each model is trained in over 30 languages and supports a context length of 128k.

# Usage

Here is a snippet for usage with Transformers:

```python
import transformers
import torch

model_id = "deepcogito/cogito-v1-preview-llama-3B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Give me a short introduction to LLMs."},
]

outputs = pipeline(
    messages,
    max_new_tokens=512,
)

print(outputs[0]["generated_text"][-1])
```

## Implementing extended thinking

- By default, the model will answer in the standard mode.
- To enable thinking, you can use either of the following two methods:
  - Add a specific system prompt, or
  - Set `enable_thinking=True` while applying the chat template.

> **_NOTE:_** For the Cogito 3B model, we suggest using `repetition_penalty=1.1` while implementing extended thinking.
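For example, with the `pipeline` from the usage snippet above, the penalty can be passed directly to the generation call. A minimal sketch (assuming only that `repetition_penalty` is a standard `transformers` generation argument; the value is the suggestion from the note, not something tuned here):

```python
outputs = pipeline(
    messages,
    max_new_tokens=512,
    repetition_penalty=1.1,  # suggested setting for extended thinking with the 3B model
)
print(outputs[0]["generated_text"][-1])
```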
### Method 1 - Add a specific system prompt.

To enable thinking, simply use this in the system prompt: `system_instruction = 'Enable deep thinking subroutine.'`

If you already have a system_instruction, then use `system_instruction = 'Enable deep thinking subroutine.' + '\n\n' + system_instruction`.

Here is an example -

```python
import transformers
import torch

model_id = "deepcogito/cogito-v1-preview-llama-3B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

DEEP_THINKING_INSTRUCTION = "Enable deep thinking subroutine."

messages = [
    {"role": "system", "content": DEEP_THINKING_INSTRUCTION},
    {"role": "user", "content": "Write a bash script that takes a matrix represented as a string with format '[1,2],[3,4],[5,6]' and prints the transpose in the same format."},
]

outputs = pipeline(
    messages,
    max_new_tokens=512,
)

print(outputs[0]["generated_text"][-1])
```

Similarly, if you already have a system prompt, you can prepend `DEEP_THINKING_INSTRUCTION` to it in this way -

```python
DEEP_THINKING_INSTRUCTION = "Enable deep thinking subroutine."

system_prompt = "Reply to each prompt with only the actual code - no explanations."
prompt = "Write a bash script that takes a matrix represented as a string with format '[1,2],[3,4],[5,6]' and prints the transpose in the same format."

messages = [
    {"role": "system", "content": DEEP_THINKING_INSTRUCTION + '\n\n' + system_prompt},
    {"role": "user", "content": prompt}
]
```

### Method 2 - Set enable_thinking=True in the tokenizer

If you are using Hugging Face tokenizers, you can simply add the argument `enable_thinking=True` to the tokenization (this option is added to the chat template).

Here is an example -

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "deepcogito/cogito-v1-preview-llama-3B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to LLMs."
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

# Tool Calling

Cogito models support tool calling (single, parallel, multiple and parallel_multiple), in both standard and extended thinking mode.

Here is a snippet -

```python
# First, define a tool
def get_current_temperature(location: str) -> float:
    """
    Get the current temperature at a location.

    Args:
        location: The location to get the temperature for, in the format "City, Country"
    Returns:
        The current temperature at the specified location in the specified units, as a float.
    """
    return 22.  # A real function should probably actually get the temperature!

# Next, create a chat and apply the chat template
messages = [
    {"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
]

text = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True, tokenize=False)
inputs = tokenizer(text, return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
output_text = tokenizer.batch_decode(outputs)[0][len(text):]
print(output_text)
```

This will result in the output -

```
{"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}<|eot_id|>
```

You can then generate text from this input as normal.
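In practice you will usually want to parse the tool call out of `output_text` rather than hard-coding it. A minimal sketch, assuming the output is a single JSON object followed by `<|eot_id|>` as shown above:

```python
import json

# Strip the end-of-turn token and surrounding whitespace, then parse the JSON tool call.
raw_call = output_text.split("<|eot_id|>")[0].strip()
tool_call = json.loads(raw_call)  # e.g. {"name": "get_current_temperature", "arguments": {...}}
```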
If the model generates a tool call, you should add it to the chat like so:

```python
tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}
messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})
```

and then call the tool and append the result, with the `tool` role, like so:

```python
messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"})
```

After that, you can `generate()` again to let the model use the tool result in the chat:

```python
text = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True, tokenize=False)
inputs = tokenizer(text, return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
output_text = tokenizer.batch_decode(outputs)[0][len(text):]
```

This should result in the string -

```
'The current temperature in Paris is 22.0 degrees.<|eot_id|>'
```
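As noted above, tool calling also works in extended thinking mode. A minimal sketch of combining the two, assuming the Method 1 system prompt composes with the `tools` argument of `apply_chat_template` in the same way as in the snippets above:

```python
# Extended thinking (Method 1 system prompt) combined with tool calling.
messages = [
    {"role": "system", "content": "Enable deep thinking subroutine."},
    {"role": "user", "content": "Hey, what's the temperature in Paris right now?"},
]

text = tokenizer.apply_chat_template(
    messages,
    tools=[get_current_temperature],
    add_generation_prompt=True,
    tokenize=False,
)
inputs = tokenizer(text, return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.batch_decode(outputs)[0][len(text):])  # reasoning followed by a tool call (expected)
```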