---
library_name: mlc-llm
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
tags:
- mlc-llm
- web-llm
- llama-3.1
- instruct
- q4f16_1
---

# ReelevateLM-q4f16_1

This is the [Meta Llama 3.1 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) model, fine-tuned with LoRA and converted to the MLC `q4f16_1` format. The model can be used in:

* [MLC-LLM](https://github.com/mlc-ai/mlc-llm)
* [WebLLM](https://github.com/mlc-ai/web-llm)

## Example Usage

Before running any examples, install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages).

### Chat (CLI)

```bash
mlc_llm chat HF://pr0methium/ReelevateLM-q4f16_1
```

### REST Server

The server exposes an OpenAI-compatible API; a sample request is shown at the end of this card.

```bash
mlc_llm serve HF://pr0methium/ReelevateLM-q4f16_1
```

### Python API

```python
from mlc_llm import MLCEngine

model = "HF://pr0methium/ReelevateLM-q4f16_1"
engine = MLCEngine(model)

# Stream the chat completion and print tokens as they arrive
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "Write me a 30 second reel story…"}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content, end="", flush=True)
print()

engine.terminate()
```

## Documentation

For more information on the MLC LLM project, please visit the [docs](https://llm.mlc.ai/docs/) and the [GitHub repo](https://github.com/mlc-ai/mlc-llm).
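
## Querying the REST Server

Once `mlc_llm serve` is running, you can exercise its OpenAI-compatible endpoint with a plain HTTP request. The sketch below assumes the server's default address (`127.0.0.1:8000`); adjust it if you launched the server with different `--host`/`--port` settings.

```bash
# Minimal sketch: send a chat completion request to the running server.
# Assumes the default address 127.0.0.1:8000; change it if you overrode --host/--port.
curl -X POST http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "HF://pr0methium/ReelevateLM-q4f16_1",
        "messages": [{"role": "user", "content": "Write me a 30 second reel story"}]
      }'
```

The response follows the OpenAI chat completions schema, so any OpenAI-compatible client library can also be pointed at this endpoint.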