---
license: apache-2.0
datasets:
- NousResearch/Hermes-3-Dataset
- HuggingFaceTB/everyday-conversations-llama3.1-2k
base_model:
- Qwen/Qwen3-4B
---

This Qwen 3 4B model was fine-tuned on the Hermes 3 dataset to enhance its general chat capabilities while retaining Qwen's reasoning capabilities.

## transformers

Run it with `transformers`, as the Qwen team suggests:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ertghiu256/Qwen3-Hermes-4b"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# parse the thinking content
try:
    # rindex finding 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)
```

## vllm

Run this command:

```bash
vllm serve ertghiu256/Qwen3-Hermes-4b --enable-reasoning --reasoning-parser deepseek_r1
```

## Sglang

Run this command:

```bash
python -m sglang.launch_server --model-path ertghiu256/Qwen3-Hermes-4b --reasoning-parser deepseek-r1
```

## llama.cpp

Run this command:

```bash
llama-server --hf-repo ertghiu256/Qwen3-Hermes-4b
```

or

```bash
llama-cli -hf ertghiu256/Qwen3-Hermes-4b
```

## ollama

Run this command:

```bash
ollama run hf.co/ertghiu256/Qwen3-Hermes-4b:Q4_K_M
```

## lm studio

Search for

```
ertghiu256/Qwen3-Hermes-4b
```

in the LM Studio model search list, then download it.
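
## querying a local server

The server options above (vllm, Sglang, llama.cpp's `llama-server`, and LM Studio's local server) all expose an OpenAI-compatible endpoint. The snippet below is a minimal sketch of querying such an endpoint with the `openai` Python client; the `base_url` assumes vllm's default port 8000 and a placeholder API key, so adjust both (and the model name, if your server uses a different one) for your setup.

```python
from openai import OpenAI

# Assumes an OpenAI-compatible server started with the `vllm serve` command above.
# For sglang, llama-server, or LM Studio, change base_url to the port they report.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="ertghiu256/Qwen3-Hermes-4b",
    messages=[
        {"role": "user", "content": "Give me a short introduction to large language models."}
    ],
    max_tokens=1024,
)

print(response.choices[0].message.content)
```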