Optimal `weight_block_size` for Intel AMX `amx_int8` `amx_tile`?

#17
by ubergarm - opened

I have access to a dual-socket Intel(R) Xeon(R) 6980P and was advised, in this post on Dual Intel Xeon NUMA Nodes, to try this int8 quant to take advantage of the `amx_tile` and `amx_int8` CPU flags.
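
For reference, here's roughly how I confirm the flags are actually exposed before benchmarking — a minimal sketch assuming Linux with `/proc/cpuinfo` available:

```python
# Minimal sketch: confirm the AMX flags are exposed before benchmarking.
# Assumes Linux with /proc/cpuinfo available.
def has_amx_flags(path="/proc/cpuinfo"):
    wanted = ("amx_tile", "amx_int8")
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return {name: name in flags for name in wanted}
    return {name: False for name in wanted}

print(has_amx_flags())  # e.g. {'amx_tile': True, 'amx_int8': True}
```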

However, I see the default `weight_block_size` is set to 128x128 in the configuration. Is this value appropriate for both GPU and CPU inference?
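
The value I'm referring to is the one in the `quantization_config` block of the repo's `config.json`; a small sketch to dump it (assuming a DeepSeek-style block-quant layout with a `weight_block_size` entry, and a placeholder path for the downloaded file):

```python
# Sketch: dump the block-quant settings from the model's config.json.
# Assumes a DeepSeek-style "quantization_config" with a "weight_block_size"
# entry; "config.json" is a placeholder path for the downloaded file.
import json

with open("config.json") as f:
    cfg = json.load(f)

print(cfg.get("quantization_config", {}).get("weight_block_size"))  # e.g. [128, 128]
```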

Page 40 of the Intel® Architecture Instruction Set Extensions Programming Reference suggests the max tile rows is 16 and the max tile width is 64 bytes:

Bits 07-00: `tmul_maxk` (rows or columns). Value = 16.
Bits 23-08: `tmul_maxn` (column bytes). Value = 64.
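
Decoding those bitfields by hand (a sketch; `0x4010` is simply the raw value those two fields imply, not something I've read off this machine):

```python
# Sketch: decode the two quoted bitfields from a raw register value.
# 0x4010 is just the value implied by tmul_maxk=16 and tmul_maxn=64.
reg = 0x4010
tmul_maxk = reg & 0xFF            # bits 07-00: max tile rows/columns -> 16
tmul_maxn = (reg >> 8) & 0xFFFF   # bits 23-08: max tile column bytes -> 64
print(tmul_maxk, tmul_maxn)       # 16 64
```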

It may depend on whether the inference engine (e.g. ktransformers, llama.cpp, etc.) splits the model weight blocks into smaller per-thread tiles.
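
Back-of-the-envelope, a 128x128 int8 block does divide evenly into the hardware tile dimensions quoted above, so an engine could in principle split it without padding — this is just my own arithmetic, not a claim about what ktransformers or llama.cpp actually do:

```python
# Back-of-the-envelope sketch: how a 128x128 int8 weight block maps onto the
# AMX tile limits quoted above. My own arithmetic, not an engine's actual code.
block_rows, block_cols = 128, 128   # weight_block_size
bytes_per_elem = 1                  # int8
tile_rows = 16                      # tmul_maxk
tile_col_bytes = 64                 # tmul_maxn -> 64 int8 columns per tile

row_tiles = block_rows // tile_rows                          # 8
col_tiles = (block_cols * bytes_per_elem) // tile_col_bytes  # 2
print(row_tiles, col_tiles, row_tiles * col_tiles)           # 8 2 16
```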

I may just have to try it and see, but if anyone knows whether Intel AMX CPU inference requires a smaller `weight_block_size` (e.g. 16x16), please advise.

Cheers and thanks!
