Model Details

This model is a mixed INT4 model of moonshotai/Kimi-K2-Instruct, generated by the intel/auto-round algorithm with group_size 128 and symmetric quantization. Non-expert layers fall back to 8 bits. Please refer to the "Generate the model" section below for details, and please follow the license of the original model.
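
To confirm the mixed-precision recipe without loading any weights, you can inspect the checkpoint's quantization metadata. The snippet below is a minimal sketch; it assumes the settings are stored under quantization_config in the checkpoint's config.json, as auto-round exports typically do:

from transformers import AutoConfig

# Fetches only config.json; no model weights are downloaded.
config = AutoConfig.from_pretrained(
    "Intel/Kimi-K2-Instruct-int4-mixed-AutoRound-cpu", trust_remote_code=True
)
print(config.quantization_config)  # group_size, symmetry, and per-layer bit overrides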

How To Use

INT4 Inference (CPU)

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRoundConfig  # must be imported so Transformers can load the auto-round format
quantized_model_dir = "Intel/Kimi-K2-Instruct-int4-mixed-AutoRound-cpu"

model = AutoModelForCausalLM.from_pretrained(
    quantized_model_dir,
    torch_dtype=torch.bfloat16,
    device_map="cpu",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, trust_remote_code=True)
prompts = [
    "9.11和9.8哪个数字大",  # "Which number is larger, 9.11 or 9.8?"
    "strawberry中有几个r?",  # "How many r's are in strawberry?"
    "There is a girl who likes adventure,",
    "Please give a brief introduction of Moonshot AI",
]

texts = []
for prompt in prompts:
    messages = [
        {"role": "system", "content": "You are Kimi, an AI assistant created by Moonshot AI."},
        {"role": "user", "content": [{"type": "text", "text": prompt}]},
    ]
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
    )
    texts.append(text)
inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)

outputs = model.generate(
    input_ids=inputs["input_ids"].to(model.device),
    attention_mask=inputs["attention_mask"].to(model.device),
    max_length=200,  # adjust to align with the official usage
    num_return_sequences=1,
    do_sample=False,  # adjust to align with the official usage
)
generated_ids = [
    output_ids[len(input_ids):]  # strip the prompt tokens from each output
    for input_ids, output_ids in zip(inputs["input_ids"], outputs)
]

decoded_outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)

for i, prompt in enumerate(prompts):
    print(f"Prompt: {prompt}")
    print(f"Generated: {decoded_outputs[i]}")
    print("-" * 50)
"""
Prompt: 9.11和9.8哪个数字大
Generated: ### Step 1: Understand the question

First, I need to be clear about what the question is asking. It gives two numbers, 9.11 and 9.8, and asks which one is larger. This looks like a simple numerical comparison problem.

### Step 2: How the numbers are written

Both numbers are decimals, i.e., numbers with a fractional part. A decimal consists of an integer part and a fractional part: the integer part sits to the left of the decimal point and the fractional part to the right.

- 9.11: the integer part is 9 and the fractional part is 11.
- 9.8: the integer part is 9 and the fractional part is 8.

### Step 3: Compare the integer parts

First compare the integer parts of the two numbers:

- The integer part of 9.11 is 9.
- The integer part of 9.8 is also 9.

The integer parts are equal, so we need to compare the fractional parts.

### Step 4: Compare the fractional parts

Comparing the fractional parts
--------------------------------------------------
Prompt: strawberry中有几个r?
Generated: ### Restating the problem
We need to count how many letter "r"s the word "strawberry" contains.

### Step breakdown
1. **Write out the word**: first, write the word "strawberry" out in full.
2. **Check each letter**: going from left to right, check whether each letter is "r" (case matters in general, but here everything is lowercase).
3. **Count**: increment the counter every time an "r" is found.

### Detailed check
Let's spell out "strawberry":

Letter positions and letters:
1. s
2. t
3. r
4. a
5. w
6. b
7. e
8. r
9. r
10. y

Now we check whether each letter is an "r":

-
--------------------------------------------------
Prompt: There is a girl who likes adventure,
Generated: There is a girl who likes adventure,
so she ties her shoes with sunrise instead of laces,
lets the wind pick the next city,
and trades her shadow for a passport stamp.

She keeps her memories in mason jars—
one holds the scent of monsoon in Mumbai,
another the hush of Icelandic snow.
When homesick, she unscrews a lid,
inhales, and is gone again.

She once outran her own name
somewhere between Marrakesh and the moon,
answering only to “Hey, you with the constellations in your hair.”
Maps are her love letters;
she folds them into paper boats
and sails them down hotel bathtubs,
whispering, *Find me where the water ends.*
--------------------------------------------------
Prompt: Please give a brief introduction of Moonshot AI
Generated: Moonshot AI is a Chinese artificial-intelligence company founded in 2023 and headquartered in Beijing. Focused on large-scale language models and related products, it released its first model, Kimi, in October 2023 and has since launched upgraded versions such as Kimi 1.5. The company closed a US$1 billion funding round in early 2024 that valued it at about US$2.5 billion, making it one of China’s best-funded AI start-ups.
--------------------------------------------------

"""

Generate the model

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Kimi-K2-Instruct-BF16"  # path to a local BF16 copy of moonshotai/Kimi-K2-Instruct

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="cpu", torch_dtype="auto", trust_remote_code=True)

# Mixed-precision recipe: MoE expert layers get 4 bits, all other Linear layers 8 bits.
layer_config = {}
for n, m in model.named_modules():
    if isinstance(m, torch.nn.Linear):
        if "expert" in n or "shared_experts" in n:
            layer_config[n] = {"bits": 4}
            print(n, 4)
        else:
            layer_config[n] = {"bits": 8}
            print(n, 8)

from auto_round import AutoRound

# iters=0 selects round-to-nearest (RTN) mode, i.e. quantization without tuning iterations.
autoround = AutoRound(model, tokenizer, iters=0, layer_config=layer_config)
autoround.quantize_and_save(format="auto_round", output_dir="tmp_autoround")
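
Expert layers account for the overwhelming majority of Kimi-K2's parameters, so quantizing them to 4 bits yields most of the size reduction, while keeping the comparatively small non-expert layers at 8 bits helps preserve accuracy. As a quick sanity check, the minimal sketch below (continuing the script above) tallies how many Linear layers received each bit width:

from collections import Counter

# Continues the script above: count how many Linear layers were assigned 4 vs. 8 bits.
print(Counter(cfg["bits"] for cfg in layer_config.values()))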

Ethical Considerations and Limitations

The model can produce factually incorrect output and should not be relied on for factually accurate information. Because of the limitations of the pretrained model and the fine-tuning datasets, the model may generate lewd, biased, or otherwise offensive outputs.

Therefore, before deploying any applications of the model, developers should perform safety testing.

Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

Here is a useful link to learn more about Intel's AI software:

  • Intel Neural Compressor: https://github.com/intel/neural-compressor

Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

Cite

@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of llms},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}

