Prepare Python environment to download Hugging Face models

This guide will walk you through setting up a Python environment named empower, installing the necessary Hugging Face packages, and downloading and using the microsoft/Phi-4-mini-instruct model on a GNU/Linux system.

Follow this guide, or return to the main page if needed.


1. Set Up Python Environment

Check Python Installation

Ensure Python 3 is installed by running:

```bash
python3 --version
```

If Python is not installed, install it using your package manager. For example:
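On Debian or Ubuntu, for instance (other distributions use their own package manager and package names), the following installs Python 3 together with the venv module used in the next step:

```bash
sudo apt update
sudo apt install python3 python3-venv python3-pip
```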

Create a Virtual Environment

Create a virtual environment named empower:

```bash
python3 -m venv empower
```

Activate the virtual environment:

```bash
source empower/bin/activate
```
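
To confirm the environment is active, check which interpreter is now on your PATH; it should point inside the empower directory:

```bash
which python3
# Should print a path ending in empower/bin/python3
```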


2. Install Hugging Face Packages

Install the necessary Hugging Face packages to interact with models and the Hugging Face Hub.

Install transformers and huggingface_hub

Run the following command to install both packages:

```bash
pip install transformers huggingface_hub
```
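
Note that transformers needs a deep-learning backend to actually load and run models. If PyTorch is not already available in the environment, you can install it as well (see pytorch.org for platform-specific builds):

```bash
pip install torch
```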

Verify Installation

Check if the packages are installed correctly:

```bash
python3 -c "from transformers import pipeline; print('Transformers installed successfully!')"
python3 -c "from huggingface_hub import HfApi; print('Hugging Face Hub installed successfully!')"
```


3. Download the microsoft/Phi-4-mini-instruct Model

To download and use the microsoft/Phi-4-mini-instruct model, follow these steps.

Using huggingface-cli to Download the Model

Run the following command to download the model:

```bash
huggingface-cli download microsoft/Phi-4-mini-instruct
```

This downloads the model files into your local Hugging Face cache (by default under ~/.cache/huggingface/hub) and prints the path to the downloaded snapshot. Pass --local-dir <directory> if you want the files placed in a specific directory instead.
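
If you prefer to stay in Python rather than use the CLI, the huggingface_hub package offers snapshot_download for the same purpose; a minimal sketch:

```python
from huggingface_hub import snapshot_download

# Download the model repository (or reuse the cached copy) and
# return the local path to the files.
local_path = snapshot_download(repo_id="microsoft/Phi-4-mini-instruct")
print(local_path)
```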


4. Load and Use the Model in Python

Once the model is downloaded, you can load and use it in your Python code.

Example Code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "microsoft/Phi-4-mini-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Generate text
input_text = "What is the capital of France?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=50)

# Decode and print the output
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
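
As an alternative sketch, the transformers pipeline helper wraps the tokenizer and model loading shown above into a single call:

```python
from transformers import pipeline

# text-generation pipeline: loads the tokenizer and model internally
generator = pipeline("text-generation", model="microsoft/Phi-4-mini-instruct")

result = generator("What is the capital of France?", max_new_tokens=50)
print(result[0]["generated_text"])
```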


5. Summary of Commands

| Command | Description |
| --- | --- |
| `python3 -m venv empower` | Create a virtual environment named empower. |
| `source empower/bin/activate` | Activate the empower environment. |
| `pip install transformers huggingface_hub` | Install Hugging Face packages. |
| `huggingface-cli download microsoft/Phi-4-mini-instruct` | Download the model. |

Now you’re ready to use the microsoft/Phi-4-mini-instruct model in your Python projects on GNU/Linux! 🚀

Proceed to next step
