Mungert committed · verified
Commit 424a410 · 1 Parent(s): bdd5ba7

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -92,7 +92,7 @@ Applying all [Unsloth](https://huggingface.co/unsloth) fixes improved inference
 
 # <span id="testllm" style="color: #7F7FFF;">🚀 Phi 4 Mini Function Calling Test!</span>
 
-If you have a minute, I'd really appreciate it if you could test my Phi-4-Mini-Instruct demo at 👉 [Free Network Monitor](https://readyforquantum.com).
+If you have a minute, I'd really appreciate it if you could test my Phi-4-Mini-Instruct demo at 👉 [Quantum Network Monitor](https://readyforquantum.com).
 💬 Click the **chat icon** (bottom right of the main and dashboard pages), then toggle between the LLM types: TurboLLM -> FreeLLM -> TestLLM (Phi-4-Mini-Instruct is called TestLLM).
 
 ### What I'm Testing
@@ -100,7 +100,7 @@ I'm experimenting with **function calling** against my network monitoring servic
 🟡 **TestLLM** – Runs **Phi-4-mini-instruct** (phi-4-mini-q4_0.gguf) with llama.cpp on 6 threads of a CPU VM. It takes about 15 s to load, inference is quite slow, and it only processes one user prompt at a time; still working on scaling! If you're curious, I'd be happy to share how it works.
 
 ### The Other Available AI Assistants
-🟢 **TurboLLM** – Uses **gpt-4o-mini**. Fast! Note: tokens are limited since OpenAI models are pricey, but you can [Login](https://readyforquantum.com) or [Download](https://readyforquantum.com/download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) the Free Network Monitor agent to get more tokens; alternatively, use the TestLLM.
+🟢 **TurboLLM** – Uses **gpt-4o-mini**. Fast! Note: tokens are limited since OpenAI models are pricey, but you can [Login](https://readyforquantum.com) or [Download](https://readyforquantum.com/download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) the Quantum Network Monitor agent to get more tokens; alternatively, use the TestLLM.
 🔵 **HugLLM** – Runs **open-source Hugging Face models**. Fast; runs small models (≈8B), hence lower quality. Get 2x more tokens (subject to Hugging Face API availability).
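The TestLLM entry in this README describes serving phi-4-mini-q4_0.gguf with llama.cpp on 6 CPU threads. A minimal sketch of such a setup using llama.cpp's `llama-server` binary; the model path, context size, and port below are illustrative assumptions, not details of the actual deployment:

```shell
# Minimal sketch (assumed paths/port): serve the quantized GGUF
# with llama.cpp's llama-server on 6 CPU threads.
./llama-server \
  -m ./models/phi-4-mini-q4_0.gguf \
  -t 6 \
  -c 4096 \
  --port 8080
# llama-server exposes an OpenAI-compatible /v1/chat/completions
# endpoint on the given port, which a chat UI can call for the
# kind of function-calling experiments described above.
```

A single such server handles one generation at a time by default, which matches the "one user prompt at a time" limitation mentioned in the README.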