# Intern-S1-mini-GGUF Model
👋 join us on Discord and WeChat
## Introduction

The Intern-S1-mini model in GGUF format can be used with `llama.cpp`, a popular open-source framework for Large Language Model (LLM) inference, on a variety of hardware platforms, both locally and in the cloud.

This repository provides Intern-S1-mini models in GGUF format, in half precision as well as in low-bit quantized versions such as `q8_0`.
The following sections first describe the installation procedure, then explain how to download the models, and finally demonstrate model inference and service deployment with concrete examples.
## Installation

We recommend building `llama.cpp` from source. The following code snippet provides an example for the Linux CUDA platform. For instructions on other platforms, please refer to the official guide.
- Step 1: create a conda environment and install cmake

```shell
conda create --name interns1 python=3.10 -y
conda activate interns1
pip install cmake
```
- Step 2: clone the source code and build the project

```shell
git clone --depth=1 https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j
```
All the built binaries can be found in the subdirectory `build/bin`.

In the following sections, we assume that the working directory is the root directory of `llama.cpp`.
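To quickly check that the build succeeded, you can list the binaries used in this guide and print the build information (the `--version` flag is available in recent `llama.cpp` releases; if your build predates it, simply inspect `build/bin`):

```shell
# confirm that the binaries used in this guide were built
ls build/bin | grep -E 'llama-(mtmd-cli|server)'

# print llama.cpp build information
./build/bin/llama-server --version
```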
## Download models

As mentioned in the introduction, this repository provides the model at several levels of numerical precision. You can download the variant that fits your requirements. For instance, the f16 GGUF files can be downloaded as follows:
```shell
pip install huggingface-hub
huggingface-cli download internlm/Intern-S1-mini-GGUF --include *-f16.gguf --local-dir Intern-S1-mini-GGUF --local-dir-use-symlinks False
```
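The quantized files can be fetched the same way by changing the `--include` pattern. The pattern below assumes the `q8_0` files follow the same naming convention as the f16 ones; check the repository file list if nothing matches:

```shell
# download the q8_0 quantized GGUF files (assumed file-name pattern)
huggingface-cli download internlm/Intern-S1-mini-GGUF --include "*q8_0*.gguf" --local-dir Intern-S1-mini-GGUF --local-dir-use-symlinks False
```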
## Inference

You can use `build/bin/llama-mtmd-cli` for inference. For a detailed explanation of `build/bin/llama-mtmd-cli`, please refer to this guide.
### Chat example

Here is an example of using the thinking system prompt.
```shell
system_prompt="<|im_start|>system\nYou are an expert reasoner with extensive experience in all areas. You approach problems through systematic thinking and rigorous reasoning. Your response should reflect deep understanding and precise logical thinking, making your solution path and reasoning clear to others. Please put your thinking process within <think>...</think> tags.\n<|im_end|>\n"

build/bin/llama-mtmd-cli \
    --model Intern-S1-mini-GGUF/f16/Intern-S1-mini-f16.gguf \
    --mmproj Intern-S1-mini-GGUF/f16/mmproj-Intern-S1-mini-f16.gguf \
    --predict 2048 \
    --ctx-size 8192 \
    --gpu-layers 100 \
    --temp 0.8 \
    --top-p 0.8 \
    --top-k 50 \
    --seed 1024
```
Then enter your question at the interactive prompt; an image can be attached with `/image xxx.jpg`.
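A single-turn run without the interactive prompt is also possible by passing the image and question directly on the command line. The sketch below uses the standard `--image` and `-p` options of `llama-mtmd-cli`; `demo.jpg` is a placeholder path:

```shell
# one-shot multimodal inference; demo.jpg is a placeholder image path
build/bin/llama-mtmd-cli \
    --model Intern-S1-mini-GGUF/f16/Intern-S1-mini-f16.gguf \
    --mmproj Intern-S1-mini-GGUF/f16/mmproj-Intern-S1-mini-f16.gguf \
    --image demo.jpg \
    -p "Describe this image." \
    --predict 2048 \
    --ctx-size 8192 \
    --gpu-layers 100 \
    --temp 0.8 --top-p 0.8 --top-k 50
```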
## Serving

`llama.cpp` provides an OpenAI-API-compatible server, `llama-server`. You can deploy the model as a service like this:
```shell
./build/bin/llama-server \
    --model Intern-S1-mini-GGUF/f16/Intern-S1-mini-f16.gguf \
    --mmproj Intern-S1-mini-GGUF/f16/mmproj-Intern-S1-mini-f16.gguf \
    --gpu-layers 100 \
    --temp 0.8 \
    --top-p 0.8 \
    --top-k 50 \
    --port 8080 \
    --seed 1024
```
On the client side, you can access the service through the OpenAI API:
```python
from openai import OpenAI

client = OpenAI(
    api_key='YOUR_API_KEY',
    base_url='http://localhost:8080/v1'
)
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
    model=model_name,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Provide three suggestions about time management."},
    ],
    temperature=0.8,
    top_p=0.8
)
print(response)
```
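Alternatively, the same endpoint can be queried with plain `curl`. The request body below follows the OpenAI chat-completions schema; the `model` field can typically be omitted since `llama-server` serves the single loaded model:

```shell
# query the OpenAI-compatible chat completions endpoint of llama-server
curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Provide three suggestions about time management."}
        ],
        "temperature": 0.8,
        "top_p": 0.8
    }'
```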
## Ollama

```shell
# install ollama
curl -fsSL https://ollama.com/install.sh | sh
# fetch model
ollama pull internlm/interns1:mini
# run model
ollama run internlm/interns1:mini
# then use openai client to call on http://localhost:11434/v1
```
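As the last comment above suggests, Ollama exposes an OpenAI-compatible API on port 11434. A minimal `curl` sketch, assuming the model tag pulled above:

```shell
# call the OpenAI-compatible endpoint served by Ollama
curl http://localhost:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "internlm/interns1:mini",
        "messages": [
            {"role": "user", "content": "Provide three suggestions about time management."}
        ]
    }'
```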