Notice
This is a transformers-compatible llava-critic-7b
model converted from lmms-lab/llava-critic-7b using convert_llava_onevision_weights_to_hf.py.
However, some precision issues remain to be fixed; see #34467 for details.
Requirements for vLLM
The latest vLLM (0.6.3.post1) has a severe bug when serving models; see vllm #9848 for details.
I recommend using vllm==0.6.2
to avoid this issue.
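To follow this recommendation, pin the package version at install time; a minimal sketch (assuming a standard pip-based environment):

```shell
# Pin vLLM to 0.6.2 to avoid the serving bug present in 0.6.3.post1 (vllm #9848)
pip install "vllm==0.6.2"
```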