vLLM inference with 32k–128k inputs
#17 opened 3 days ago by luckyZhangHu

Official finetune example?
#16 opened 9 days ago by erichartford

Anyone please let me know what hardware can run 72B?
#15 opened 13 days ago by haoyiharrison · 2 replies

Fix model tree (remove loop)
#14 opened 13 days ago by hekmon

Batch inference error
#13 opened 17 days ago by 404dreamer · 1 reply

Error in preprocessing prompt inputs
#12 opened 18 days ago by darvec

Cannot import name 'Qwen2_5_VLImageProcessor' (on vLLM)
#11 opened 21 days ago by cbrug · 4 replies

Update preprocessor_config.json
#10 opened 24 days ago by Isotr0py

Hardware Requirements
#9 opened 25 days ago by shreyas0985

Vision tokens missing from chat template
#8 opened 26 days ago by depasquale

ERROR:hf-to-gguf:Model Qwen2_5_VLForConditionalGeneration is not supported
#7 opened 28 days ago by li-gz

docs(readme): fix typo in README.md
#6 opened about 1 month ago by BjornMelin

Out of memory on two H100s (80 GB each) even with load_in_8_bit = True
#4 opened about 1 month ago by Maverick17

Model Memory Requirements
#3 opened about 1 month ago by nvip1204 · 2 replies

Video inference: TypeError: process_vision_info() got an unexpected keyword argument 'return_video_kwargs'
#2 opened about 1 month ago by hmanju · 2 replies

Qwen/Qwen2.5-VL-72B-Instruct-AWQ and Qwen/Qwen2.5-VL-40B-Instruct-AWQ please
#1 opened about 1 month ago by devops724 · 6 replies