Mungert committed
Commit da3d386 · verified · Parent: 31bd6a2

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +4 -4
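
The commit message states the file was uploaded with `huggingface_hub`. For reference, a minimal sketch of such an upload using the library's `HfApi.upload_file` helper; the repo id below is a placeholder and the token handling is an assumption, neither is taken from this commit:

```python
from huggingface_hub import HfApi

api = HfApi()  # uses the token saved by `huggingface-cli login` by default
api.upload_file(
    path_or_fileobj="README.md",        # local file to push
    path_in_repo="README.md",           # destination path inside the repo
    repo_id="Mungert/<model-repo>",     # placeholder: the actual repo id is not shown here
    repo_type="model",
    commit_message="Upload README.md with huggingface_hub",
)
```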
README.md CHANGED
@@ -207,7 +207,7 @@ These models are optimized for **extreme memory efficiency**, making them ideal
 # <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
 ❤ **Please click "Like" if you find this useful!**
 Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**:
- 👉 [Free Network Monitor](https://readyforquantum.com)
+ 👉 [Quantum Network Monitor](https://readyforquantum.com)
 
 💬 **How to test**:
 1. Click the **chat icon** (bottom right on any page)
@@ -233,7 +233,7 @@ I'm pushing the limits of **small open-source models for AI network monitoring
 🟢 **TurboLLM** – Uses **gpt-4-mini** for:
 - **Real-time network diagnostics**
 - **Automated penetration testing** (Nmap/Metasploit)
- - 🔑 Get more tokens by [downloading our Free Network Monitor Agent](https://readyforquantum.com/download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
+ - 🔑 Get more tokens by [downloading our Quantum Network Monitor Agent](https://readyforquantum.com/download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
 
 🔵 **HugLLM** – Open-source models (≈8B params):
 - **2x more tokens** than TurboLLM
@@ -244,10 +244,10 @@ I'm pushing the limits of **small open-source models for AI network monitoring
 1. `"Give me info on my websites SSL certificate"`
 2. `"Check if my server is using quantum safe encyption for communication"`
 3. `"Run a quick Nmap vulnerability test"`
- 4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a Free Network Monitor Agent to run the .net code from. This is a very flexible and powerful feature. Use with caution!
+ 4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a Quantum Network Monitor Agent to run the .net code from. This is a very flexible and powerful feature. Use with caution!
 
 ### Final word
- I fund the servers to create the models files, run the Free Network Monitor Service and Pay for Inference from Novita and OpenAI all from my own pocket. All of the code for creating the models and the work I have done with Free Network Monitor is [open source](https://github.com/Mungert69). Feel free to use what you find useful. Please support my work and consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) .
+ I fund the servers to create the models files, run the Quantum Network Monitor Service and Pay for Inference from Novita and OpenAI all from my own pocket. All of the code for creating the models and the work I have done with Quantum Network Monitor is [open source](https://github.com/Mungert69). Feel free to use what you find useful. Please support my work and consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) .
 This will help me pay for the services and increase the token limits for everyone.
 
 Thank you :)