NeonMaid-12B-v2-q4f32_1-MLC
This is the NeonMaid-12B-v2 model in MLC format q4f32_1.
The model can be used with MLC-LLM and WebLLM.
Example Usage
Before running the examples below, please install MLC LLM by following the installation guide.
Chat CLI
mlc_llm chat HF://JackBinary/NeonMaid-12B-v2-q4f32_1-MLC
REST Server
mlc_llm serve HF://JackBinary/NeonMaid-12B-v2-q4f32_1-MLC
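Once the server is running, it exposes an OpenAI-compatible REST API. As a minimal sketch, assuming the server is reachable at the default local address (http://127.0.0.1:8000; adjust host and port for your setup), you can query the chat completions endpoint from Python:

import requests

# Query the OpenAI-compatible chat completions endpoint exposed by `mlc_llm serve`.
# The address below is an assumption based on the default settings; change it if needed.
payload = {
    "model": "HF://JackBinary/NeonMaid-12B-v2-q4f32_1-MLC",
    "messages": [{"role": "user", "content": "What is the meaning of life?"}],
}
response = requests.post("http://127.0.0.1:8000/v1/chat/completions", json=payload)
print(response.json()["choices"][0]["message"]["content"])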
Python API
from mlc_llm import MLCEngine

# Create the engine from the model hosted on Hugging Face.
model = "HF://JackBinary/NeonMaid-12B-v2-q4f32_1-MLC"
engine = MLCEngine(model)

# Run a streaming chat completion using the OpenAI-style API.
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "What is the meaning of life?"}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content, end="", flush=True)
print("\n")

engine.terminate()
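The same interface also supports non-streaming requests. Here is a minimal sketch that reuses the engine and model variables from the example above and assumes the response mirrors the OpenAI chat completion schema; in that case, call engine.terminate() only after this request has finished:

# Non-streaming request: the full reply arrives in a single response object.
response = engine.chat.completions.create(
    messages=[{"role": "user", "content": "Tell me a short story."}],
    model=model,
    stream=False,
)
print(response.choices[0].message.content)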
Documentation
For more on MLC LLM, visit the documentation and GitHub repo.
Model tree for JackBinary/NeonMaid-12B-v2-q4f32_1-MLC
Base model: yamatazen/NeonMaid-12B-v2