HUGS on Google Cloud
The Hugging Face Generative AI Services, also known as HUGS, can be deployed in Google Cloud via the Google Cloud Marketplace offering.
This collaboration brings Hugging Face’s extensive library of pre-trained models and their Text Generation Inference (TGI) solution to Google Cloud customers, enabling seamless integration of state-of-the-art Large Language Models (LLMs) within the Google Cloud infrastructure.
HUGS provides access to a hand-picked and manually benchmarked collection of the most performant and latest open LLMs, hosted on the Hugging Face Hub and packaged as TGI-optimized container applications that can be deployed as Kubernetes applications on Google Cloud.
With HUGS, developers can easily find, subscribe to, and deploy Hugging Face models on Google Cloud infrastructure, leveraging the power of NVIDIA GPUs on optimized, zero-configuration TGI containers.
Subscribe to HUGS on Google Cloud Marketplace
Subscribe to the product in Google Cloud by following the instructions on the page. At the time of writing (October 2024), the steps are to:
- Click Purchase, then go to the next page.
- Configure the order by selecting the right plan and billing account and confirming the terms, then click Subscribe.
- You should see a “Your order request has been sent to Hugging Face” message with a “Go to Product Page” button; click it.
To check whether you are already subscribed, look at the buttons on the product page: if the “Purchase” or “Configure” button is enabled, you or someone else in your organization has already requested access for your account.
Deploy HUGS on Google Cloud GKE
This example showcases how to deploy a HUGS container and model on Google Cloud GKE.
This example assumes that you have a Google Cloud account, that you have installed and set up the Google Cloud CLI, and that you are logged in to your account with the necessary permissions to subscribe to offerings in the Google Cloud Marketplace and to create and manage IAM permissions and resources such as Google Kubernetes Engine (GKE) clusters.
When deploying HUGS on Google Cloud through the UI, you can either select an existing GKE cluster or create a new one; if you want to create a new one, you can follow the instructions here. Additionally, you need to define:
Namespace: The namespace to deploy the HUGS container and model.
App Instance Name: The name of the HUGS container.
Hugs Model Id: Select the model you want to deploy from the Hugging Face Hub. You can find all supported models here.
GPU Number: The number of GPUs you have available and want to use for the deployment; check the supported model matrix to see how many GPUs each model requires.
GPU Type: The type of GPU you have available inside your GKE cluster.
Reporting Service Account: The service account to use for reporting.
Next, click Deploy and wait for the deployment to finish. This takes around 10-15 minutes.
If you want to better understand the different deployment options you have, e.g. 1x NVIDIA L4 GPU for Meta Llama 3.1 8B Instruct, you can check out the supported model matrix.
Send request to the HUGS application
Every HUGS application includes instructions on how to retrieve the ingress IP address and port to send requests to the application. A HUGS deployment is a deployment of a Helm chart that includes our model container, a marketplace agent (sidecar), a volume, and an ingress load balancer to make the application accessible from outside the cluster.
Alternatively, you can also use the Messages API via the openai client. Learn more about inference here.
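As an illustrative sketch, a chat request against the deployment's OpenAI-compatible endpoint could look like the following. The ingress IP is a placeholder, not a value from this guide; look yours up in the cluster, and note that TGI-based servers commonly accept a generic model name such as "tgi":

```shell
# Sketch: send a chat request to a HUGS deployment's OpenAI-compatible
# /v1/chat/completions endpoint. The ingress IP below is a placeholder
# (assumption): retrieve yours, e.g. with `kubectl get ingress -n <namespace>`.
INGRESS_IP="${INGRESS_IP:-203.0.113.10}"   # placeholder address

# Messages API request body.
PAYLOAD='{
  "model": "tgi",
  "messages": [
    {"role": "user", "content": "What is deep learning?"}
  ],
  "max_tokens": 128,
  "stream": false
}'
echo "$PAYLOAD"

# Requires network access to the cluster's ingress; uncomment to run:
# curl "http://${INGRESS_IP}/v1/chat/completions" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```

The same request body works through the openai client by pointing its base URL at `http://<ingress-ip>/v1`.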
Create a GPU GKE Cluster for HUGS
To deploy HUGS on Google Cloud, you’ll need a GKE cluster with GPU support. Here’s a step-by-step guide to create one:
Ensure you have the Google Cloud CLI installed and configured.
Set up environment variables for your cluster configuration:
export PROJECT_ID="your-project-id" # Your Google Cloud Project ID which is subscribed to HUGS
export CLUSTER_NAME="hugs-cluster" # The name of the GKE cluster
export LOCATION="us-central1" # The location of the GKE cluster
export MACHINE_TYPE="g2-standard-12" # The machine type of the GKE cluster
export GPU_TYPE="nvidia-l4" # The type of GPU to use
export GPU_COUNT=1 # The number of GPUs to use
- Create the GKE cluster:
gcloud container clusters create $CLUSTER_NAME \
--project=$PROJECT_ID \
--zone=$LOCATION \
--release-channel=stable \
--cluster-version=1.29 \
--machine-type=$MACHINE_TYPE \
--num-nodes=1 \
--no-enable-autoprovisioning
- Add a GPU node pool to the cluster:
gcloud container node-pools create gpu-pool \
--cluster=$CLUSTER_NAME \
--zone=$LOCATION \
--machine-type=$MACHINE_TYPE \
--accelerator type=$GPU_TYPE,count=$GPU_COUNT,gpu-driver-version=default \
--num-nodes=1 \
--enable-autoscaling \
--min-nodes=1 \
--max-nodes=1 \
--spot \
--disk-type=pd-ssd \
--disk-size=100GB
- Configure kubectl to use the new cluster:
gcloud container clusters get-credentials $CLUSTER_NAME --zone=$LOCATION
Your GKE cluster with GPU support is now ready for HUGS deployment. You can proceed to deploy HUGS using the Google Cloud Marketplace as described in the previous section.
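Before deploying, you can sanity-check that the GPU node pool came up by looking for the accelerator label that GKE attaches to GPU nodes. A sketch follows; the live kubectl calls are left commented out since they need credentials for your cluster:

```shell
# Sketch: verify GPU nodes are present. GKE labels GPU nodes with
# cloud.google.com/gke-accelerator=<gpu-type>.
GPU_LABEL="cloud.google.com/gke-accelerator"

# Command to list nodes carrying a GPU accelerator label:
CHECK_NODES="kubectl get nodes -l ${GPU_LABEL} -o wide"
echo "$CHECK_NODES"

# Uncomment to run against your cluster:
# $CHECK_NODES
# kubectl describe nodes -l "$GPU_LABEL" | grep -i 'nvidia.com/gpu'
```

If no nodes match the label, revisit the node pool creation step and confirm the `--accelerator` flag was applied.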
For more detailed information on creating and managing GKE clusters, refer to the official Google Kubernetes Engine documentation or run GPUs in GKE Standard node pools.