Problem hosting the model using vllm
I installed the latest vllm using:
pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly --upgrade
vllm-0.8.3.dev70+g4098b722
and hosted the model with:
vllm serve mistralai/Mistral-Small-3.1-24B-Instruct-2503 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --limit_mm_per_prompt 'image=10' --tensor-parallel-size 2
But I get this error:
/usr/lib/python3/dist-packages/scipy/__init__.py:146: UserWarning: A NumPy version >=1.17.3 and <1.25.0 is required for this version of SciPy (detected version 1.26.4
warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion}"
2025-03-27 20:57:44.533141: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2025-03-27 20:57:44.546177: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1743109064.562078 19577 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1743109064.566751 19577 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-03-27 20:57:44.581702: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX512F AVX512_VNNI, in other operations, rebuild TensorFlow with the appropriate compiler flags.
INFO 03-27 20:57:46 [__init__.py:239] Automatically detected platform cuda.
INFO 03-27 20:57:47 [api_server.py:1018] vLLM API server version 0.8.3.dev70+g4098b722
INFO 03-27 20:57:47 [api_server.py:1019] args: Namespace(subparser='serve', model_tag='mistralai/Mistral-Small-3.1-24B-Instruct-2503', config='', host=None, port=8000, uvicorn_log_level='info', disable_uvicorn_access_log=False, allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, enable_ssl_refresh=False, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=True, tool_call_parser='mistral', tool_parser_plugin='', model='mistralai/Mistral-Small-3.1-24B-Instruct-2503', task='auto', tokenizer=None, hf_config_path=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='mistral', trust_remote_code=False, allowed_local_media_path=None, download_dir=None, load_format='mistral', config_format='mistral', dtype='auto', kv_cache_dtype='auto', max_model_len=None, guided_decoding_backend='xgrammar', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=2, enable_expert_parallel=False, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, prefix_caching_hash_algo='builtin', disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=None, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt={'image': 10}, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, use_tqdm_on_load=True, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_config=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, 
override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', worker_extension_cls='', generation_config='auto', override_generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, additional_config=None, enable_reasoning=False, reasoning_parser=None, disable_cascade_attn=False, disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False, enable_server_load_tracking=False, dispatch_function=<function ServeSubcommand.cmd at 0x754400d34280>)
Traceback (most recent call last):
File "/home/ubuntu/.local/bin/vllm", line 8, in
sys.exit(main())
File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/entrypoints/cli/main.py", line 75, in main
args.dispatch_function(args)
File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/entrypoints/cli/serve.py", line 33, in cmd
uvloop.run(run_server(args))
File "/home/ubuntu/.local/lib/python3.10/site-packages/uvloop/init.py", line 82, in run
return loop.run_until_complete(wrapper())
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/home/ubuntu/.local/lib/python3.10/site-packages/uvloop/init.py", line 61, in wrapper
return await main
File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 1053, in run_server
async with build_async_engine_client(args) as engine_client:
File "/usr/lib/python3.10/contextlib.py", line 199, in aenter
return await anext(self.gen)
File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 145, in build_async_engine_client
async with build_async_engine_client_from_engine_args(
File "/usr/lib/python3.10/contextlib.py", line 199, in aenter
return await anext(self.gen)
File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 165, in build_async_engine_client_from_engine_args
vllm_config = engine_args.create_engine_config(usage_context=usage_context)
File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 1307, in create_engine_config
model_config = self.create_model_config()
File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 1152, in create_model_config
return ModelConfig(
File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/config.py", line 342, in init
hf_config = get_config(self.hf_config_path or self.model,
File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/transformers_utils/config.py", line 324, in get_config
config = load_params_config(model, revision, token=HF_TOKEN, **kwargs)
File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/transformers_utils/config.py", line 623, in load_params_config
assert isinstance(config_dict, dict)
AssertionError
Hi there,
I'm also facing the same issue. I would appreciate it if anyone has an explanation or a solution.
You have to add your Hugging Face token to the environment variables (e.g. export HF_TOKEN=XXXXX).
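If it helps, here's a quick way to confirm the token is actually set and accepted by the Hub before starting vLLM. This is just a minimal sketch using huggingface_hub (which vLLM already pulls in), not part of vLLM itself:

import os
from huggingface_hub import whoami

# Confirm HF_TOKEN is set in this shell.
token = os.environ.get("HF_TOKEN")
assert token, "HF_TOKEN is not set in this shell"

# whoami() raises if the token is invalid; otherwise it returns your account info.
print(whoami(token=token)["name"])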
Hey, I tried this but it doesn't work.
I just tried this using the same install and serve commands and it worked. Are you sure you can access the model locally? For instance, try downloading it manually with huggingface-cli download mistralai/Mistral-Small-3.1-24B-Instruct-2503. Make sure you run huggingface-cli login first.
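In case it helps with debugging: the AssertionError comes from load_params_config, which (judging from the traceback, and assuming --config_format mistral reads the repo's params.json) fails when the loaded config isn't a dict, typically because the gated repo can't be fetched without authentication. You can reproduce roughly the same check outside vLLM; if this raises a gated-repo or 401 error, access/authentication is the problem:

import json
from huggingface_hub import hf_hub_download

# Assumption: params.json is the file the mistral config format loads.
path = hf_hub_download(
    repo_id="mistralai/Mistral-Small-3.1-24B-Instruct-2503",
    filename="params.json",
)
with open(path) as f:
    config_dict = json.load(f)

# The same kind of check that fails inside vLLM's load_params_config.
assert isinstance(config_dict, dict)
print(sorted(config_dict.keys()))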