Instructions for using webai-community/ai-models with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- llama-cpp-python
How to use webai-community/ai-models with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="webai-community/ai-models",
    filename="CodeLlama-7b-Instruct-hf/gguf/codellama-7b-instruct.Q4_K_M.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
```
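For an instruction-tuned model like CodeLlama-7b-Instruct, chat-style prompting usually gives better results than raw completion. A minimal sketch using the same `llm` handle and llama-cpp-python's OpenAI-style chat API; the prompt content is illustrative:

```python
# Chat-style inference with the model loaded above; messages follow the
# OpenAI chat format that llama-cpp-python mirrors.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```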
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use webai-community/ai-models with llama.cpp:
Install with Homebrew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf webai-community/ai-models:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf webai-community/ai-models:Q4_K_M
```
Install with WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf webai-community/ai-models:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf webai-community/ai-models:Q4_K_M
```
Use a pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf webai-community/ai-models:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf webai-community/ai-models:Q4_K_M
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf webai-community/ai-models:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf webai-community/ai-models:Q4_K_M
```
Use Docker
```sh
docker model run hf.co/webai-community/ai-models:Q4_K_M
```
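However you start it, `llama-server` exposes an OpenAI-compatible API (on port 8080 by default), so any OpenAI client can talk to the local model. A minimal sketch using the `openai` Python package; the `model` value is mostly informational since the server hosts a single model:

```python
from openai import OpenAI

# llama-server listens on http://localhost:8080 by default; the API key is
# not checked locally, but the client requires a non-empty value.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="webai-community/ai-models:Q4_K_M",
    messages=[{"role": "user", "content": "Once upon a time,"}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```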
- LM Studio
- Jan
- Ollama
How to use webai-community/ai-models with Ollama:
```sh
ollama run hf.co/webai-community/ai-models:Q4_K_M
```
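Ollama also serves a local HTTP API on port 11434, so the same model can be called programmatically once pulled. A minimal sketch against the native `/api/generate` endpoint:

```python
import requests

# Ollama's local API listens on port 11434 by default; stream=False returns
# one JSON object instead of a stream of chunks.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "hf.co/webai-community/ai-models:Q4_K_M",
        "prompt": "Once upon a time,",
        "stream": False,
    },
    timeout=300,
)
print(resp.json()["response"])
```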
- Unsloth Studio
How to use webai-community/ai-models with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for webai-community/ai-models to start chatting
```
Install Unsloth Studio (Windows)
```sh
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for webai-community/ai-models to start chatting
```
Use Hugging Face Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for webai-community/ai-models to start chatting
```
- Docker Model Runner
How to use webai-community/ai-models with Docker Model Runner:
```sh
docker model run hf.co/webai-community/ai-models:Q4_K_M
```
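Docker Model Runner can also expose an OpenAI-compatible endpoint to the host when TCP access is enabled in Docker's settings. A hedged sketch; the port and base path below are Docker's documented defaults, but treat them as assumptions for your setup:

```python
from openai import OpenAI

# Assumes Docker Model Runner's host TCP access is enabled; 12434 is the
# documented default port and /engines/v1 the OpenAI-compatible base path.
client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="docker")

response = client.chat.completions.create(
    model="hf.co/webai-community/ai-models:Q4_K_M",
    messages=[{"role": "user", "content": "Once upon a time,"}],
)
print(response.choices[0].message.content)
```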
- Lemonade
How to use webai-community/ai-models with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull webai-community/ai-models:Q4_K_M
```
Run and chat with the model
```sh
lemonade run user.ai-models-Q4_K_M
```
List all available models
```sh
lemonade list
```
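Lemonade also runs an OpenAI-compatible local server, so the pulled model can be queried from code. A hedged sketch; the port and base path are assumptions based on Lemonade's defaults, so check your install's docs if the request fails:

```python
from openai import OpenAI

# Assumed defaults for a local Lemonade server; adjust base_url if your
# install listens elsewhere.
client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="lemonade")

response = client.chat.completions.create(
    model="user.ai-models-Q4_K_M",  # the tag shown by `lemonade list`
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```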
Download
Download a specific WebGPU model:
```sh
huggingface-cli download webai-community/ai-models --include "ai-models/<MODEL_NAME>/onnx-webgpu/*" --local-dir .
```
Download all WebGPU models:
```sh
huggingface-cli download webai-community/ai-models --include "*/onnx-webgpu/*" --local-dir .
```
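The same filtered downloads can be scripted with `huggingface_hub`, which `huggingface-cli` wraps. A minimal sketch:

```python
from huggingface_hub import snapshot_download

# Fetch only the ONNX WebGPU variants; narrow allow_patterns to a single
# model directory (e.g. "ai-models/<MODEL_NAME>/onnx-webgpu/*") as needed.
snapshot_download(
    repo_id="webai-community/ai-models",
    allow_patterns=["*/onnx-webgpu/*"],
    local_dir=".",
)
```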
Model List
| Model name | Params | GGUF model | ONNX Runtime WebGPU model | Model info |
|---|---|---|---|---|
| Phi-4-mini-instruct | 3.8B | gguf | onnx-webgpu | README |
| Phi-4-mini-reasoning | 3.8B | gguf | onnx-webgpu | README |
| Phi-4-multimodal-instruct | 6B | | onnx-webgpu | README |
| Phi-3.5-mini-instruct | 3.8B | gguf | onnx-webgpu | README |
| Phi-3-mini-4k-instruct | 3.8B | gguf | onnx-webgpu | README |
| Phi-3-mini-128k-instruct | 3.8B | gguf | onnx-webgpu | README |
| Qwen3-0.6B | 0.6B | gguf | onnx-webgpu | README |
| Qwen3-1.7B | 1.7B | gguf | onnx-webgpu | README |
| Qwen3-4B | 4B | gguf | onnx-webgpu | README |
| Qwen3-8B | 8B | gguf | onnx-webgpu | README |
| Qwen2.5-0.5B-Instruct | 0.5B | gguf | onnx-webgpu | README |
| Qwen2.5-1.5B-Instruct | 1.5B | gguf | onnx-webgpu | README |
| Qwen2.5-3B-Instruct | 3B | gguf | onnx-webgpu | README |
| Qwen2.5-7B-Instruct | 7B | gguf | onnx-webgpu | README |
| Qwen2-0.5B-Instruct | 0.5B | gguf | onnx-webgpu | README |
| Qwen2-1.5B-Instruct | 1.5B | gguf | onnx-webgpu | README |
| Qwen2-7B-Instruct | 7B | gguf | onnx-webgpu | README |
| DeepSeek-R1-Distill-Qwen-1.5B | 1.5B | gguf | onnx-webgpu | README |
| DeepSeek-R1-Distill-Qwen-7B | 7B | gguf | onnx-webgpu | README |
| DeepSeek-R1-Distill-Llama-8B | 8B | gguf | onnx-webgpu | README |
| DeepSeek-R1-0528-Qwen3-8B | 8B | gguf | onnx-webgpu | README |
| gemma-3-1b-it | 1B | gguf | onnx-webgpu | README |
| gemma-2-2b-it | 2B | gguf | onnx-webgpu | README |
| gemma-2-9b-it | 9B | gguf | onnx-webgpu | README |
| gemma-2b-it | 2B | gguf | onnx-webgpu | README |
| gemma-7b-it | 7B | gguf | onnx-webgpu | README |
| internlm2_5-7b-chat | 7B | gguf | onnx-webgpu | README |
| internlm2-chat-1_8b | 1.8B | gguf | onnx-webgpu | README |
| internlm2-chat-7b | 7B | gguf | onnx-webgpu | README |
| Nemotron-Mini-4B-Instruct | 4B | gguf | onnx-webgpu | README |
| Nemotron-Cascade-8B-Thinking | 8B | gguf | onnx-webgpu | README |
| SmolLM2-1.7B-Instruct | 1.7B | gguf | onnx-webgpu | README |
| SmolLM2-360M-Instruct | 360M | gguf | onnx-webgpu | README |
| SmolLM2-135M-Instruct | 135M | gguf | onnx-webgpu | README |
| SmolLM-1.7B-Instruct | 1.7B | gguf | onnx-webgpu | README |
| SmolLM-360M-Instruct | 360M | gguf | onnx-webgpu | README |
| SmolLM-135M-Instruct | 135M | gguf | onnx-webgpu | README |
| Yi-Coder-1.5B-Chat | 1.5B | gguf | onnx-webgpu | README |
| Qwen2.5-Coder-0.5B-Instruct | 0.5B | gguf | onnx-webgpu | README |
| Qwen2.5-Coder-1.5B-Instruct | 1.5B | gguf | onnx-webgpu | README |
| Qwen2.5-Coder-7B-Instruct | 7B | gguf | onnx-webgpu | README |
| TinyLlama-1.1B-Chat-v1.0 | 1.1B | gguf | onnx-webgpu | README |
| CodeLlama-7b-Instruct-hf | 7B | gguf | onnx-webgpu | README |
| SOLAR-10.7B-Instruct-v1.0 | 10.7B | gguf | onnx-webgpu | README |
| whisper-tiny | 0.39B | | onnx-webgpu | README |
| gpt-oss-20b | 20B | gguf | onnx-webgpu | README |
| granite-3.1-2b-instruct | 2B | gguf | onnx-webgpu | README |
| granite-3.1-8b-instruct | 8B | gguf | onnx-webgpu | README |
| granite-3.2-2b-instruct | 2B | gguf | onnx-webgpu | README |
| granite-3.2-8b-instruct | 8B | gguf | onnx-webgpu | README |
| granite-3.3-2b-instruct | 2B | gguf | onnx-webgpu | README |
| granite-3.3-8b-instruct | 8B | gguf | onnx-webgpu | README |
| Ministral-8B-Instruct-2410 | 8B | gguf | onnx-webgpu | README |
| Mistral-7B-Instruct-v0.2 | 7B | gguf | onnx-webgpu | README |
| Mistral-7B-Instruct-v0.3 | 7B | gguf | onnx-webgpu | README |
| Mistral-Nemo-Instruct-2407 | 12B | gguf | onnx-webgpu | README |
| Yi-1.5-6B-Chat | 6B | gguf | onnx-webgpu | README |
| Yi-1.5-9B-Chat | 9B | gguf | onnx-webgpu | README |