benhaotang committed
Commit 21ed0e4 · verified · 1 Parent(s): 79fd668

Update README.md

Files changed (1): README.md +30 -1
README.md CHANGED
@@ -22,9 +22,38 @@ This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method
 ### Models Merged

 The following models were included in the merge:
- * [benhaotang/Phi-4-llama-t1-full](https://huggingface.co/benhaotang/Phi-4-llama-t1-full) but actually [win10/Phi-4-llama-t1-lora](https://huggingface.co/win10/Phi-4-llama-t1-lora), this is where you should thank.
+ * [benhaotang/Phi-4-llama-t1-full](https://huggingface.co/benhaotang/Phi-4-llama-t1-full), which is really [win10/Phi-4-llama-t1-lora](https://huggingface.co/win10/Phi-4-llama-t1-lora) under the hood, so that is who you should really thank.
 * [prithivMLmods/Phi-4-QwQ](https://huggingface.co/prithivMLmods/Phi-4-QwQ)

+ ### Running
+
+ - With Ollama
+
+ ```
+ ollama run hf.co/benhaotang/phi4-qwq-sky-t1-Q4_K_M-GGUF
+ ```
+
+ I suggest adding `SYSTEM "You are a helpful AI assistant. You always think step by step."` to trigger step-by-step reasoning; see the Modelfile sketch below.
+
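+ For example, a minimal Modelfile sketch (assuming Ollama's standard Modelfile syntax; the local name `phi4-sky` is just a placeholder I chose):
+
+ ```
+ FROM hf.co/benhaotang/phi4-qwq-sky-t1-Q4_K_M-GGUF
+ SYSTEM "You are a helpful AI assistant. You always think step by step."
+ ```
+
+ Then build and run it with `ollama create phi4-sky -f Modelfile` (pulling the base first with `ollama pull hf.co/benhaotang/phi4-qwq-sky-t1-Q4_K_M-GGUF` if needed) and `ollama run phi4-sky`.
+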
+ - With pytorch
+
+ ```
+ import transformers
+
+ # Use the tokenizer from the base model; the merged weights are loaded below.
+ tokenizer = transformers.AutoTokenizer.from_pretrained("microsoft/phi-4")
+ pipeline = transformers.pipeline(
+     "text-generation",
+     model="benhaotang/phi4-qwq-sky-t1",
+     tokenizer=tokenizer,
+     device_map="auto",
+ )
+ messages = [
+     {"role": "system", "content": "You are a helpful AI assistant. You always think step by step."},
+     {"role": "user", "content": "Give me a short introduction to renormalization group (RG) flow in physics."},
+ ]
+ outputs = pipeline(messages, max_new_tokens=128)
+ print(outputs[0]["generated_text"])
+ ```
+
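+ Note that with chat-style `messages` input, recent transformers versions return `generated_text` as the full message list, with the model's reply as the final entry.
+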
 ### Configuration

 The following YAML configuration was used to produce this model: